venue (string, 2 classes) | paper_content (string, 7.54k-83.7k chars) | prompt (string, 161-2.5k chars) | format (string, 5 classes) | review (string, 293-9.84k chars)
---|---|---|---|---
ICLR | Title
Learning Higher-Order Dynamics in Video-Based Cardiac Measurement
Abstract
Computer vision methods typically optimize for first-order dynamics (e.g., optical flow). However, in many cases the properties of interest are subtle variations in higher-order changes, such as acceleration. This is true in the cardiac pulse, where the second derivative can be used as an indicator of blood pressure and arterial disease. Recent developments in camera-based vital sign measurement have shown that cardiac measurements can be recovered with impressive accuracy from videos; however, the majority of research has focused on extracting summary statistics such as heart rate. Less emphasis has been put on the accuracy of waveform morphology that is necessary for many clinically impactful scenarios. In this work, we provide evidence that higher-order dynamics are better estimated by neural models when explicitly optimized for in the loss function. Furthermore, adding second-derivative inputs also improves performance when estimating second-order dynamics. By incorporating the second derivative of both the input frames and the target vital sign signals into the training procedure, our model is better able to estimate left ventricle ejection time (LVET) intervals.
1 INTRODUCTION
Many of the properties of dynamical systems only become apparent when they move or change as the result of forces applied to them. In most applications we are interested in behavior in terms of positions, velocities, and accelerations, and in some cases the properties of interest may only be observed in subtle variations in the higher-order dynamics (e.g., acceleration). Whether monitoring the flight of a drone to create a control mechanism for stabilization or analyzing the fluid dynamics of the cardiovascular system in the human body, there can be a need to recover these dynamics accurately. However, most video-based systems are trained on lower-order signals, such as position in the case of landmark tracking or velocity/rate-of-change (optical flow) in the case of visual odometry (Nister et al., 2004). Thus, they optimize for lower (zeroth or first) order dynamics. Does this harm their ability to estimate higher order changes? We hypothesize that networks trained to predict temporal signals will benefit from combined multi-derivative learning objectives. To test this hypothesis, we explore video-based cardiac measurement as an example application with a complex dynamical system (the cardiovascular system) and introduce simple but effective changes to the inputs and outputs to significantly improve the measurement of clinically relevant parameters.
Photoplethysmography (PPG) is a low-cost and non-invasive method for measuring the cardiovascular blood volume pulse (BVP). There are many clinical applications for PPG as the signal contains substantial information about health state and risk of cardiovascular diseases (Elgendi et al., 2019; Reisner et al., 2008; Pereira et al., 2020). An acutely relevant application of PPG today is pulse oximetry (i.e., measuring pulse rate and blood oxygen saturation), as it can be used to detect the low blood oxygen levels associated with the onset of COVID-19 (Greenhalgh et al., 2021). The COVID-19 pandemic has accelerated the adoption of telehealth systems (Annis et al., 2020), with more and more clinical consultations being conducted virtually. Therefore, techniques for remotely monitoring physiological vital signs are becoming increasingly important (Gawałko et al., 2021; Rohmetra et al., 2021). As one might expect, with many clinical applications the precision with which the PPG signal can be recovered is of critical importance for accurate inference of downstream conditions and for the confidence of practitioners in the technology.
To date, in video-based PPG measurement the primary focus of analysis and evaluation has been on features extracted from the raw waveform or its first derivative (Chen & McDuff, 2018; Liu et al., 2020; 2021; Poh et al., 2010a). However, the second derivative of the PPG signal highlights subtle features that can be difficult to discern from those in the lower derivatives. Since the second derivative reflects the acceleration (Takazawa, 1993), i.e., the rate of change of the rate of change of the blood volume, it is more closely related to the change in pressure the heart applies to the blood vessels and, therefore, to vascular health.
An example of a particular feature accentuated in the second-derivative (i.e. acceleration) PPG is the dicrotic notch (see Fig. 1), which occurs when the heart’s aortic valve closes due to the pressure gradient between the aorta and the left ventricle. The dicrotic notch may only manifest as an inflection in the raw PPG wave; however, in the second derivative this inflection is a maxima. Inoue et al. (2017) found that the second derivative of the PPG signal can be used as an indicator of arterial stiffness - which itself is an indicator of cardiac disease. Takazawa et al. (1998) evaluated the second derivative of the PPG waveform and found that its characteristic shape can be used to estimate vascular aging, which was higher in subjects with a history of diabetes mellitus, hypertension, hypercholesterolemia, and ischemic heart disease compared to age-matched subjects without.
While the second derivative of a signal can be a rich source of information, often the zeroth- or first-order dynamics are given priority. For example, Chen & McDuff (2018) observed that training video- or imaging-based PPG (iPPG) models using first-derivative (difference) frames as input with an objective function of minimizing the mean squared error between the prediction and the first derivative of the target BVP signal was effective. This approach was used because the authors were designing their system to measure systolic time intervals only, which are most prominent in the lower order signals. However, they did not combine this with higher-order derivatives nor did they do any systematic comparison across derivative objectives.
We argue that a model trained with an explicit second-derivative (acceleration) objective should produce feature representations that better preserve/recover these dynamics than methods that simply derive acceleration from velocity. We observe that providing the model with a second derivative input also helps the network to better predict both the first and second derivative signals.
Finally, as diverse labeled data for training supervised models for predicting dynamical signals is often difficult to come by, we build on promising work in simulation to obtain our training data. Since light is absorbed and reflected differently for different skin tones (Bent et al., 2020; Dasari et al., 2021) having a training set that represents the true diversity of the target population is crucial for sufficient generalization. Our results show that models trained with synthetic data can learn parameters that successfully generalize to real human subjects. While this is not a central focus of our paper, we believe that it presents a promising proof-of-concept for future work.
To summarize, in this paper, we 1) demonstrate that directly incorporating higher-order dynamics into the loss function improves the quality of the estimated higher-order signals in terms of waveform morphology, 2) show that adding second-derivative inputs additionally improves performance, and 3) describe a novel deep learning architecture that incorporates the second-derivative input frames and target signals and evaluate it against clinical-grade contact sensor measurements.
2 BACKGROUND
Learning Higher-Order Motion from Videos. Despite its significance in many tasks, acceleration is often not explicitly modeled in many computer vision methods. However, there is a small body of literature that has considered how to recover (Edison & Jiji, 2017) and amplify optical acceleration (Zhang et al., 2017; Takeda et al., 2018). Given that acceleration can be equally as important as position and velocity in understanding dynamical systems, we argue that this topic deserves further attention.
A particularly relevant problem to ours is identifying small changes in videos (Wu et al., 2012; Zhang et al., 2017; Chen & McDuff, 2020; Takeda et al., 2018), and specifically in acceleration in the presence of relatively large motion. As an example, in the iPPG prediction task the aim is to identify minor changes in skin coloring due to variation in blood flow patterns, while ignoring major pixel changes due to subject or background motion. One method proposed by Zhang et al. (2017) for overcoming this signal separation problem is Video Acceleration Magnification, in which
large motions are assumed to be linear on the temporal scale of small changes while small changes deviate from this linearity. An extension to this method focused on making it more robust to sudden motions (Takeda et al., 2018). In both cases, a combination of Eulerian and Lagrangian approaches was used, rather than utilizing a supervised learning paradigm. Of relevance here is also work magnifying subtle physiological changes using neural architectures (Chen & McDuff, 2020), which have been shown to effectively separate signal and noise in both the spatial and temporal domains.
Our work might be most closely related to prior research into feature descriptors for optical acceleration (Edison & Jiji, 2017). One example uses histograms of optical acceleration to effectively encode the motion information. However, this work also defined handcrafted features, rather than learning representations from data. Our work is also related conceptually to architectures such as SlowFast (Feichtenhofer et al., 2019) in that it utilizes multiple “pathways” to learn different properties of the dynamics within a video. We were inspired by this approach; however, unlike SlowFast, we focus specifically on higher-order pathways rather than slower and faster frame sequences.
Video-based Cardiac Measurement. Diffuse reflections from the body vary depending on how much light is absorbed in the peripheral layers of the skin and this is influenced by the volume of blood in the capillaries. Digital cameras can capture these very subtle changes in light which can then be used to recover the PPG signal (Wu et al., 2000; Takano & Ohta, 2007; Verkruysse et al., 2008; Poh et al., 2010a). The task then becomes separating pixel changes due to blood flow from those due to body motions, ambient lighting variation, and other environmental factors that we consider noise in this context. While earlier methods leveraged source separation algorithms (Wang et al., 2016), such as ICA (Poh et al., 2010a) or PCA (Lewandowska et al., 2011), neural models provide the current state-of-the-art in this domain (Chen & McDuff, 2018; Liu et al., 2020; 2021; Song et al., 2021; Lu et al., 2021). These architectures support learning spatial attention and source-specific temporal variations and separating these from various sources of noise. Typically, the input to these models is normalized video frames and the output is a 1-D time series prediction of the PPG waveform or the heart rate. A vast majority of work has evaluated these methods based on errors in heart rate estimation, which considers the dominant or “systolic” frequency alone. Only a few papers have used more challenging evaluation criteria, such as the estimation of systolic to diastolic peaks (McDuff et al., 2014).
3 OPTICAL BASIS
We start by providing an optical basis for the measurement of the pulse wave, and specifically its second derivative, using a camera. Starting with Shafer’s Dichromatic Reflection Model (DRM) (Wang et al., 2016; Chen & McDuff, 2018; Liu et al., 2020), we want to understand how higher-order changes in the blood volume pulse impact pixel intensities in order to motivate the design of our inputs and loss function. Based on the DRM model, the RGB values captured by the camera are given by:
Ck(t) = I(t) · (vs(t) + vd(t)) + vn(t) (1)
where I(t) is the luminance intensity level, modulated by the specular reflection vs(t) and the diffuse reflection vd(t). Quantization noise of the camera sensor is captured by vn(t). The diffuse reflection vd(t) can be decomposed into stationary and time-varying parts (Wang et al., 2016):
vd(t) = ud · d0 + up · p(t) (2)
where ud is the unit color vector of the skin-tissue; d0 is the stationary reflection strength; up is the relative pulsatile strengths caused by hemoglobin and melanin absorption; p(t) represents the physiological changes. Let us assume for simplicity in this case that the luminance, I (i.e., illumination in the video) is constant, not time varying, which is a reasonable assumption for short videos and those in which the subject can control their environment (e.g., indoors). Then differentiating twice with respect to time, t:
∂²Ck(t)/∂t² = I · ( ∂²vs(t)/∂t² + ∂²(ud · d0)/∂t² + ∂²(up · p(t))/∂t² + ∂²vn(t)/∂t² ) (3)
The non-time-varying part ud · d0 becomes zero under differentiation, simplifying the equation to:
∂²Ck(t)/∂t² = I · ( ∂²vs(t)/∂t² + ∂²(up · p(t))/∂t² + ∂²vn(t)/∂t² ) (4)
Furthermore, if specular reflections do not vary over time (e.g., if the camera and subject are stationary), the vs(t) term will also become zero. This means that the second derivative changes in pixel intensities are a sum of second derivative changes in PPG and camera noise. With current camera technology, and little video compression, image noise is typically much smaller than the PPG signal. Therefore, we would expect the pixel changes to be dominated by second derivative variations in the blood volume pulse:
∂²Ck(t)/∂t² = I · ∂²(up · p(t))/∂t² (5)
As such, we can infer that when attempting to estimate the second derivative of the PPG signal from videos without very large motions or illumination changes, second-derivative changes in the pixel space should be informative, and that minimizing the loss between the second-derivative prediction and the ground truth will be the simplest learning task for the algorithm when the input is second-derivative pixel changes.
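To make this concrete, the short NumPy sketch below simulates a single pixel under the assumptions above (constant illumination, static specular and baseline reflections, small sensor noise) and checks that its second-order finite difference is proportional to the second difference of the pulsatile signal p(t), as in Eq. (5). All constants and the toy waveform are invented for illustration.

```python
# Illustrative check of Eq. (5): with I, vs and ud*d0 constant, the second
# difference of the pixel value equals I*up times the second difference of p(t),
# up to a small noise term. The constants below are made up for this example.
import numpy as np

fs = 30.0                                     # frames per second
t = np.arange(0, 6, 1 / fs)                   # 6-second clip
p = 0.05 * np.sin(2 * np.pi * 1.2 * t)        # toy pulsatile signal p(t), ~72 BPM
I, vs, ud_d0, up = 1.0, 0.2, 0.5, 0.1         # constant illumination / reflections
vn = 1e-6 * np.random.randn(t.size)           # small quantization noise vn(t)

pixel = I * (vs + ud_d0 + up * p) + vn        # Eq. (1)-(2) under these assumptions

dd_pixel = np.diff(pixel, n=2)                # second-order finite difference
dd_p = np.diff(p, n=2)
print(np.allclose(dd_pixel, I * up * dd_p, atol=1e-4))   # prints True
```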
4 OUR MODEL
[Figure 1: An example blood volume pulse (BVP) waveform and its first and second derivatives over time (s), annotated with the systolic foot, systolic peak, dicrotic notch, and the left ventricle ejection time (LVET) interval.]
Our model builds on convolutional attention network (CAN) architectures, in which learned attention masks encourage the network to focus on regions containing the pulse signal (e.g., the participant’s skin) and ignore noisy regions (e.g., background). These attention masks are shared between the first-derivative branch and the second-derivative branch as we expect the same spatial regions to contain first- and second-derivative information. After feature representations are extracted from frames within each derivative-input branch, the features are concatenated together for each time step and the target signals are then generated using recurrent neural network (RNN) layers. A diagram depicting the architecture used for our experimentation is shown in Fig. 2.
4.1 PREDICTING MULTI-DERIVATIVE TARGET SIGNALS
The goal of iPPG is to obtain an estimate of the underlying PPG signal p(t) (as in Eq. 2), while only observing video frames X(t) containing a subject’s skin (in this case the face). Mathematically, this can be described as learning a function p̂(t) = f(X(t)) or, because we are interested in changes in blood volume, estimating the first derivative of the PPG signal, p̂′(t) = f(X(t), X′(t)), where the first-derivative PPG signal is defined as p′(t) = p(t) − p(t − 1). Using prior methods, to obtain an estimate of the PPG signal’s second derivative, one would either differentiate the predicted PPG signal twice, or differentiate the predicted first-derivative PPG once, rather than calculate the acceleration PPG directly. In contrast, we explicitly predict the acceleration PPG waveform as a target signal. We define the second-derivative waveform as the difference between consecutive first-derivative time points: p′′(t) = p′(t) − p′(t − 1). We then train our model to predict the second-derivative waveform p̂′′(t) = f(X(t), X′(t)) given a set of input video frames X(t) and the corresponding normalized difference frames X′(t). To optimize our model parameters we minimize the mean squared difference between the true and predicted second-derivative waveforms:
L = (1/T) Σ_{t=1}^{T} ( p′′(t) − p̂′′(t) )² (6)
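As a minimal sketch (helper names are ours, not from the released code), the derivative targets and the loss in Eq. (6) reduce to finite differences and a mean squared error:

```python
# Minimal sketch of the derivative targets and the loss in Eq. (6).
# `ppg` is a ground-truth window with T+2 samples so that both derivative
# targets have length T; the names here are illustrative.
import numpy as np

def derivative_targets(ppg):
    first = np.diff(ppg)        # p'(t)  = p(t) - p(t-1)
    second = np.diff(first)     # p''(t) = p'(t) - p'(t-1)
    return first, second

def second_derivative_loss(p_dd_true, p_dd_pred):
    """Mean squared error between true and predicted acceleration PPG (Eq. 6)."""
    return np.mean((p_dd_true - p_dd_pred) ** 2)
```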
4.2 LEVERAGING MULTI-DERIVATIVE INPUTS
It has previously been shown that normalized difference frames are useful for predicting the first-derivative PPG waveform (Chen & McDuff, 2018). Therefore, we hypothesized that incorporating the second derivative of the raw video frames, X′′(t) = X′(t) − X′(t − 1) (i.e., the difference-of-difference frames), may also be useful for predicting the PPG signal and its derivatives. Similar to the difference frames, we added a separate convolutional attention branch, where the attention mask is shared between both branches (see Fig. 2). Sharing the attention mask is a reasonable assumption as we would expect all skin regions to exhibit the signal with similar dynamics. After the feature maps in each branch are pooled into a single value per feature at each time step, the learned representations are concatenated together. These concatenated features over time are used as input sequences to the recurrent layers that generate the target waveforms.
Given that difference frames X ′(t) are useful for predicting the first derivative PPG waveforms, features learned from the difference-of-difference frames X ′′(t) may be beneficial for predicting the second derivative PPG signal. In theory, if difference-of-difference features are indeed useful for predicting the acceleration PPG, then the CAN network should be able to learn those features
from the difference frames due to the 3D convolutional operations. However, manually adding the difference-of-difference frames could help guide the model. To examine the effect of combining higher-order inputs and target signals, we fit a model p̂′′(t) = f(X(t), X ′(t), X ′′(t)) to predict the second-derivative PPG.
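The sketch below shows one way the multi-derivative inputs can be constructed from a raw frame sequence; the normalization of consecutive frames by their sum follows the description in Section 5.2, and the helper names are ours.

```python
# Sketch of the multi-derivative inputs: normalized difference frames X'(t) and
# difference-of-difference frames X''(t). `frames` is a float array of shape
# (T+1, H, W, C); the function names are illustrative, not from the original code.
import numpy as np

def difference_frames(frames, eps=1e-7):
    """Normalized first-derivative frames: (f[t+1] - f[t]) / (f[t+1] + f[t])."""
    return (frames[1:] - frames[:-1]) / (frames[1:] + frames[:-1] + eps)

def difference_of_difference_frames(diff_frames):
    """Second-derivative frames: consecutive differences of X'(t)."""
    return diff_frames[1:] - diff_frames[:-1]

# X'(t) contains T frames and X''(t) contains T-1 frames, matching the branch
# inputs described in Section 5.2.
```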
5 EXPERIMENTS
In this section we will describe the data used to train and evaluate our method and perform a systematic ablation study in which we test different combinations of inputs and outputs.
5.1 DATA
Training To train our models using a large and diverse set of subjects, we leverage recent work that uses highly-parameterized synthetic avatars to generate videos containing simulated subjects with various movements and backgrounds (McDuff et al., 2020). To drive changes in the synthetic avatars’ appearance, the PPG signal is used to manipulate the base skin color and the subsurface radius (McDuff et al., 2020). The subsurface scattering is spatially weighted using an artist-created subsurface scattering radius texture that captures variations in the thickness of the skin across the face. Using physiological waveform signals from the MIMIC PhysioNet (Goldberger Ary L. et al., 2000) database, we randomly sampled windows of PPG waveforms from real patients. The physiological waveform data were sampled to maximize examples from different patients. Using the synthetic avatar pipeline and MIMIC waveforms, we generated 2,800 6-second videos, where half of the videos were generated using hand-crafted facial motion/action signals, and the other half using facial motion/action signals extracted using landmark detection on real videos. Examples of the avatars can be found in Appendix A.1.1.
Testing Given that we are focusing on recovering very subtle changes in pixel intensities due to the blood volume pulse, we use a highly controlled and very accurately annotated dataset of real videos for evaluation. The AFRL dataset (Estepp et al., 2014) consists of 300 videos from 25 participants (17 male and 8 female). Each video in the dataset has a resolution of 658x492 pixels sampled at 30 Hz. Ground truth PPG signals were recorded using a contact reflective PPG sensor attached to the subject’s index finger. Each participant was instructed to perform three head motion tasks including rotating the head along the horizontal axis, rotating the head along the vertical axis, and rotating the head randomly once every second to one of nine predefined locations. Since our goal in this work was to compare methods for estimating subtle waveform dynamics, which can be more difficult to do in the presence of large motion, we focused here on the first two AFRL tasks where participant motion is minimal. Examples of AFRL participants can be found in Appendix A.1.1.
5.2 IMPLEMENTATION DETAILS
We trained our models using a large dataset of generated synthetic avatars and evaluated model performance on the AFRL dataset, which consists of real human subjects. For each video, we first cropped the video frames so that the face was approximately centered. Next, we reduced the resolution of the video to 36x36 pixels to reduce noise and computational requirements while maintaining useful spatial signal (Verkruysse et al., 2008; Wang et al., 2017; Poh et al., 2010b). The input to the attention branch was T raw video frames. The input to the first-derivative branch was a set of T normalized difference frames, calculated by subtracting consecutive frames and normalizing by their sum. The input to the second-derivative branch was a set of T − 1 difference-of-difference frames (second-derivative frames), calculated by subtracting consecutive normalized difference frames (i.e., the T frames used as input to the first-derivative branch). In our experiments, we used a window size of T = 30 video frames to predict the target signals for the corresponding 30 time points. During training, a sliding window of 15 frames (i.e., 50% overlap between consecutive windows) was used to increase the total number of training examples. The model was implemented using Tensorflow (Abadi et al., 2016) and trained for eight epochs using the Adam (Kingma & Ba, 2017) optimizer with a learning rate of 0.001, and a batch size of 16.
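A compact sketch of this preprocessing is shown below; the video array is a synthetic stand-in and the helper names are ours, with the optimizer settings noted in the trailing comments.

```python
# Sketch of the preprocessing described above: resize cropped face frames to
# 36x36 pixels and extract T=30-frame windows with a 15-frame stride (50% overlap).
# The video array is a random stand-in for a face-cropped 6-second clip at 30 Hz.
import numpy as np
import tensorflow as tf

T, STRIDE, SIZE = 30, 15, 36

def sliding_windows(x, length=T, stride=STRIDE):
    """Split an array along the time axis into overlapping windows."""
    return np.stack([x[i:i + length]
                     for i in range(0, len(x) - length + 1, stride)])

video = np.random.rand(180, 64, 64, 3).astype("float32")    # stand-in cropped clip
small = tf.image.resize(video, (SIZE, SIZE)).numpy()         # (180, 36, 36, 3)
windows = sliding_windows(small)                             # (11, 30, 36, 36, 3)

# Training used the Adam optimizer (learning rate 1e-3), a batch size of 16, and
# eight epochs, e.g. model.fit(inputs, targets, batch_size=16, epochs=8).
```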
5.3 SYSTEMATIC EVALUATION
To measure the effect of using multi-derivative inputs and outputs, we systematically removed the second-derivative parts of the model and used quantitative and qualitative methods to examine the change in model performance. To quantitatively measure the quality of the predicted signal, we calculated two clinically important parameters: heart rate (HR) and the left ventricular ejection time (LVET) interval (see Appendix A.1.3 for details). Video-based HR prediction has been a major focus of iPPG applications, with many methods showing highly-accurate results. HR can be determined through peak detection or by determining the dominant frequency in the signal (e.g., using the fast Fourier transform). Since current iPPG methods are able to achieve sufficiently low error rates on the HR estimation task, we believe that metrics that capture the quality of waveform morphology should also be considered.
The LVET interval is defined as the time between the opening and closing of the heart’s aortic valve, i.e., the systolic phase when the heart is contracting (see Fig. 1). In the PPG waveform, this interval begins at the diastolic point (i.e., the global minimum pressure within a heartbeat cycle) and ends with the dicrotic notch (i.e., the local minimum occurring after the systolic peak, marking the end of the systolic phase and the beginning of the diastolic phase). LVET is typically correlated with cardiac output (stroke volume × heart rate) (Hamada et al., 1990), and has been shown to be an indicator of future heart failure, as the time interval decreases with left-ventricle dysfunction (Biering-Sørensen et al., 2018).
Calculating LVET requires identification of the diastolic point and the dicrotic notch. The diastolic point is a (global) minimum point within a heart beat, meaning it corresponds to a positive peak
in the second derivative signal according to the second-derivative test. Similarly, the dicrotic notch is a (local) minimum in the PPG signal, and appears as a positive peak in the second derivative following the diastolic peak in time. Because the dicrotic notch can often be a subtle feature, it is much easier to identify in the PPG’s second derivative compared to the raw signal. Therefore, it is a good example of clinically-important waveform morphology that is best captured by higher-order dynamics.
Removing the second-derivative frames In Table 1, quantitative evaluation metrics (HR and LVET) are shown for all experiments in our ablation study, using tasks 1 and 2 from the AFRL dataset. Removing the second-derivative (SD) frames results in the model configurations in the top three rows of Table 1. When SD frames are removed, the result is a general decrease in the HR error. However, there is also a general increase in LVET interval prediction error, which suggests that including the SD frames leads to improved estimation of waveform morphology.
Removing the first-derivative target signal Intuitively, models that are optimized using a loss function specifically focusing on a single objective will perform better in terms of that objective compared to models trained with loss functions containing multiple objectives. By removing the first-derivative target signal from the training objective, the model is forced to focus exclusively on the second-derivative (SD) objective. Empirically, this leads the SD-Optimized model to have the lowest LVET MAE of any model configuration (last row of Table 1). While the SD-Optimized model achieves the lowest LVET error, its HR error is the highest of any configuration. These results suggest that there are performance trade-offs to consider when designing a system for particular downstream tasks.
Removing the second-derivative target signal When the second-derivative target signal is removed from the model, the optimization procedure is purely focused on improving the prediction of the first derivative. The FD-Optimized model (first row of Table 1) serves as a form of baseline, since previous works have focused on using first-derivative (FD) frames to predict the first-derivative PPG signal. Fig. 4 shows a Bland-Altman plot (Martin Bland & Altman, 1986) comparing the FD-Optimized and SD-Optimized error distributions as a function of the ground-truth values for both HR and LVET intervals.
Perhaps unsurprisingly, our results show the FD-Optimized model achieves the lowest HR MAE (0.66 ± 2.07 BPM) of any model configuration examined and, in particular, improves HR estimation compared to models without the first derivative target signal. However, the FD-Optimized model also has the worst performance in terms of the LVET MAE (108.26 ± 56.19 ms) of any model configuration. This suggests that while the configuration provides an accurate assessment of the heartbeat frequency, the quality of predicted waveform morphology can be improved by incorporating second-derivative information. We observe similar results when evaluating the models on the UBFC (Bobbia et al., 2019) and PURE (Stricker et al., 2014) datasets (see Appendix Table 3).
Qualitative comparisons For a qualitative comparison, in Fig. 3 we plot the ground-truth, FD-Optimized, and SD-Optimized PPG, first derivative, and second derivative. Additionally, in the bottom panel of Fig. 3 we overlay the true and predicted LVET intervals for each signal to demonstrate model performance. For additional qualitative comparisons, see Appendix A.2.
6 CONCLUSIONS
Using the task of video-based cardiac measurement, we have shown that, when learning representations for dynamical systems, appropriately designing the inputs and optimizing for the derivatives of interest can make a significant difference in model performance. Specifically, there is a trade-off between optimizing for lower-order and higher-order dynamics. Given the importance of second derivatives (i.e., acceleration) in this and many other video understanding tasks, we believe it is important to understand the trade-off between optimizing for targets that capture different dynamic properties. In cardiac measurement in particular, the LVET is one of the more important clinical parameters and can be better estimated using higher-order information. While we have investigated the importance of higher-order dynamics in the context of video-based cardiac measurement, this paradigm is generally applicable. We believe future work will continue to showcase the importance of explicitly incorporating higher-order dynamics.
7 ETHICS STATEMENT
Camera-based cardiac measurement could help improve the quality of remote health care, as well as enable less invasive measurement of important physiological signals. The COVID-19 pandemic has revealed the importance of tools to support remote care. These needs are likely to be particularly acute in low-resource settings where distance, travel costs, and time are a great barrier to accessing quality healthcare. However, given the non-contact nature of the technology, it could also be used to measure personal data without the knowledge of the subject. Just as is the case with traditional contact sensors, it must be made transparent when these methods are being used, and subjects should be required to consent before physiological data is measured or recorded. There should be no penalty for individuals who decline to be measured. New biometrics laws can help protect people from unwanted physiological monitoring, or discrimination based on pre-existing health conditions detected via non-contact monitoring. However, social norms also need to be constructed around the use of this technology.
In this work, data were collected under informed consent from the participants.
A APPENDIX
A.1 SUPPLEMENTAL METHODS
A.1.1 EXAMPLE VIDEO FRAMES
A.1.2 MODEL ARCHITECTURE
The first two 3D convolutional layers in each branch have 16 filters and the final two 3D convolutional layers in each branch have 32 filters. All 3D convolutional layers in the network use a filter size of 3x3x3. All convolutional layers are padded such that they maintain the same height, width, and number of time steps in each consecutive layer. Convolutional layers use the hyperbolic tangent activation function, except for the convolutional layers used for the attention masks, which use a sigmoid activation function to generate the soft masks. Attention masks (one per time step) are applied via element-wise multiplication of the attention mask with each 3D convolutional feature map. Average pooling layers reduce the height and width of the frames by a factor of two, except for the final average pooling layer, which pools over the entire frame (i.e. reduces each feature map to a single value per time step). Dropout (25% probability) is applied after every pooling layer to reduce overfitting.
After the final pooling layer, the learned features for each time step in a branch are concatenated together (i.e. combined across branches to share information). Each target signal uses its own set of two RNN layers to read the concatenated features over time and generate a target sequence. The first RNN layer is implemented as a bi-directional GRU (hyperbolic tangent activation function) with 64 total units (32 in each direction). The second RNN layer is a GRU (linear activation function) layer with one output value per time step.
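Below is a minimal Keras sketch consistent with this description (two derivative-input branches, shared attention masks generated from the raw frames, and per-target GRU heads). The exact attention wiring and layer ordering of the original model are not fully specified here, so this is an illustrative approximation rather than a faithful re-implementation, and all names are ours.

```python
# Illustrative Keras sketch of the multi-derivative architecture described above.
import tensorflow as tf
from tensorflow.keras import layers

T, H, W, C = 30, 36, 36, 3

def conv_block(x, filters):
    """Two 3D convolutions with 3x3x3 kernels and tanh activations."""
    x = layers.Conv3D(filters, 3, padding="same", activation="tanh")(x)
    return layers.Conv3D(filters, 3, padding="same", activation="tanh")(x)

raw = layers.Input((T, H, W, C))      # raw appearance frames
diff1 = layers.Input((T, H, W, C))    # normalized difference frames X'(t)
diff2 = layers.Input((T, H, W, C))    # difference-of-difference frames X''(t)

# Soft attention masks (one per time step), generated from the appearance frames
# with sigmoid convolutions and shared by both derivative branches.
a = conv_block(raw, 16)
mask1 = layers.Conv3D(1, 3, padding="same", activation="sigmoid")(a)
a = layers.AveragePooling3D((1, 2, 2))(a)
a = conv_block(a, 32)
mask2 = layers.Conv3D(1, 3, padding="same", activation="sigmoid")(a)

def branch(x):
    x = conv_block(x, 16)                                  # (T, 36, 36, 16)
    x = layers.Multiply()([x, mask1])                      # shared attention mask
    x = layers.AveragePooling3D((1, 2, 2))(x)              # halve height and width
    x = layers.Dropout(0.25)(x)
    x = conv_block(x, 32)                                  # (T, 18, 18, 32)
    x = layers.Multiply()([x, mask2])
    x = layers.AveragePooling3D((1, H // 2, W // 2))(x)    # pool over the full frame
    x = layers.Dropout(0.25)(x)
    return layers.Reshape((T, 32))(x)                      # one vector per time step

feats = layers.Concatenate()([branch(diff1), branch(diff2)])   # (T, 64)

def head(name):
    h = layers.Bidirectional(layers.GRU(32, return_sequences=True))(feats)
    return layers.GRU(1, activation="linear", return_sequences=True, name=name)(h)

model = tf.keras.Model([raw, diff1, diff2],
                       [head("first_derivative"), head("second_derivative")])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
```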
A.1.3 METRIC CALCULATION
Heart Rate (HR) estimation To estimate the heart rate, we use a fast Fourier transform (FFT)-based method to calculate the dominant frequency in the signal, which corresponds to the heart rate. We first estimate the power spectral density using the “periodogram” function from the scipy.signal (Virtanen et al., 2020) library. Then we band-pass filter the PPG signal, with cutoff frequencies of 0.75-4.0 Hz (corresponding to a minimum HR of 45 BPM and a maximum HR of 240 BPM). Finally, we select the frequency with the maximum power, and use this as our estimated HR.
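A short sketch of this procedure using scipy.signal is shown below; the band-pass filter design and the function name are our assumptions, since the text above only specifies the periodogram and the 0.75-4.0 Hz band.

```python
# Sketch of the FFT-based heart-rate estimate: band-pass the PPG, estimate the
# power spectral density with a periodogram, and take the dominant frequency.
import numpy as np
from scipy import signal

def estimate_hr_bpm(ppg, fs=30.0, lo=0.75, hi=4.0):
    """Estimate heart rate in BPM from a PPG window sampled at fs Hz."""
    b, a = signal.butter(2, [lo, hi], btype="bandpass", fs=fs)
    filtered = signal.filtfilt(b, a, ppg)
    freqs, power = signal.periodogram(filtered, fs=fs)
    band = (freqs >= lo) & (freqs <= hi)        # 45-240 BPM
    return 60.0 * freqs[band][np.argmax(power[band])]
```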
Left Ventricle Ejection Time (LVET) estimation The LVET interval is defined as the time between the diastolic point and the dicrotic notch. To calculate this interval, we first identified the diastolic point in the second derivative (SD) of the PPG signal, which, because it is a “global” minimum within the PPG heartbeat, appears as a “global” maximum (positive SD value) in the SD PPG. Then, in each predicted SD PPG waveform, we identified candidate dicrotic notch points. Since the dicrotic notch manifests as a “local” minimum in the PPG signal, it appears as a “local” maximum in the PPG SD signal (positive SD value). Using peak detection (the “find_peaks” function in the scipy.signal library (Virtanen et al., 2020)) we identify candidate dicrotic notch points by finding local peaks that occur after a diastolic point, and use the dicrotic notch candidate point that is closest in time to the reference diastolic point.
Because both the ground truth PPG (and therefore its derivatives) and, in particular, the predicted PPG (and its derivatives), contain signal artifacts and noise, the peak detection process is not perfect. To reduce variability in the LVET interval estimates due to noise, we apply a smoothing operation. Specifically, we estimate the mean LVET interval within a 10-second non-overlapping window and use this as our estimate of true/predicted LVET. See Appendix Fig. 7 for example LVET intervals over time, and the estimated LVET intervals after smoothing within windows.
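A simplified version of this procedure is sketched below; the peak-selection heuristics (prominence threshold and minimum peak spacing) are our own illustrative choices rather than the exact rules used for the reported results.

```python
# Sketch of LVET estimation from a second-derivative (SD) PPG: find positive SD
# peaks, pair each diastolic point with the nearest later dicrotic-notch
# candidate, and smooth the intervals within non-overlapping 10-second windows.
import numpy as np
from scipy.signal import find_peaks

def lvet_intervals(sd_ppg, fs=30.0):
    """Return (beat_times, intervals) in seconds from a second-derivative PPG."""
    candidates, _ = find_peaks(sd_ppg, height=0)             # all positive SD peaks
    diastolic, _ = find_peaks(sd_ppg, height=0, distance=int(0.25 * fs),
                              prominence=0.5 * np.ptp(sd_ppg))
    times, intervals = [], []
    for d in diastolic:
        later = candidates[candidates > d]
        if later.size:                                        # nearest later SD peak
            times.append(d / fs)
            intervals.append((later[0] - d) / fs)
    return np.array(times), np.array(intervals)

def smooth_lvet(beat_times, intervals, window_s=10.0):
    """Average LVET estimates within non-overlapping 10-second windows."""
    smoothed = []
    for start in np.arange(0.0, beat_times.max() + window_s, window_s):
        in_window = (beat_times >= start) & (beat_times < start + window_s)
        if in_window.any():
            smoothed.append(intervals[in_window].mean())
    return np.array(smoothed)
```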
A.2 SUPPLEMENTAL RESULTS | 1. What is the main contribution of the paper regarding computer vision algorithms and deep learning?
2. What are the strengths and weaknesses of the proposed methodology in terms of novelty, motivation, and performance evaluation?
3. Do you have any concerns about the use of synthetic datasets instead of real ones?
4. How does the attention mask affect the performance of the convolutional network, and what are the comparative results without the attention mask?
5. How does the proposed architecture compare with other architectures in terms of performance, complexity, and computational power consumption?
6. What is the impact of reducing video resolution on performance, and how does it affect the model's ability to generalize?
7. Why was the number of epochs chosen to be relatively low, and how does it affect the model's ability to generalize?
8. Are there any references that support the statement in the introduction section regarding the primary focus of analysis and evaluation in video-based PPG measurement?
9. Is there any effect of lighting conditions on the prediction, and how would natural or artificial light with different frequencies affect the results? | Summary Of The Paper
Review | Summary Of The Paper
Computer vision algorithms based on deep learning usually optimize for first-order dynamics. However, in some cases the properties of interest are small variations that are best described by higher-order dynamics, such as acceleration. A direct application of this is the cardiac pulse, where the second derivative can be used as an indicator of blood pressure and arterial disease. In this work, the author(s) propose a methodology for beat waveform prediction using convolutional attention networks, considering the derivatives of the input images and optimizing for the derivatives of interest. The results are very interesting and show that by appropriately incorporating higher-order dynamics, the performance of video understanding tasks can be greatly improved.
Review
Strong Points:
The paper is clear and very well written. In addition, the problem is clearly stated, and the methodology is presented with the corresponding metrics.
This work has a strong background motivation and the methodology is novel, since there is currently a need to improve models whose target properties are best described by higher-order dynamics, as shown in the introduction.
The paper correctly evaluates the incorporation of the second derivative of the input frames to improve the performance on second-order dynamics, which enriches the results of the work.
It is very important and beneficial that the authors have shared their work on Github, so that the work is reproducible and replicable. This enriches the work presented.
Weak Points:
There is a lack of clear support for the use of a synthetic dataset instead of a real one. The differences between the two need to be clearly defined, and it should be justified how much this choice affects the performance of the convolutional network.
Although the use of the attention mask is a reasonable assumption, there is a lack of comparative results for the full model with and without the attention mask. Reporting these differences is important for understanding how the attention mechanism affects the model. The comparison is not complicated and is suggested.
The proposed architecture is interesting and works well; however, there is a lack of comparison with other architectures to show that the proposed architecture is the best. A comparison with simpler and more complex models could be shown to verify that the implemented model is the best one.
The paper mentions that in the implementation the video resolution is reduced to 36x36 pixels. However, the effect of this resolution reduction on performance is not analyzed. One way to present this could be through a comparison of performance, computational power consumption, and response time at different resolutions.
The number of epochs used in the training stage is eight, which is relatively low. In addition, no justification is given for why this low number of epochs was chosen. Making use of more epochs would help the model to generalize better.
Suggestions:
In the Introduction section, you mentioned that "To date, in video-based PPG measurement the primary focus of analysis and evaluation has been on features extracted from the raw waveform or its first derivative", however, no research works are cited or referenced. The references that support this statement should be cited.
At section 4.2, the paper states: “Given that difference frames X’(t) are useful for predicting the first derivative PPG waveforms, features learned from the difference-of-difference frames X’’(t) may be beneficial for predicting the second derivative PPG signal.” I think I should say: “Given that difference frames X(t) are useful for predicting the first derivative PPG waveforms, features learned from the difference-of-difference frames X’(t) may be beneficial for predicting the second derivative PPG signal.”
How much does the amount of light affect the prediction? It would be interesting to show a graph or table of the variation of results as the illumination is increased or decreased. Also, would it be better to perform the prediction on natural light? How would artificial light vary the results (e.g., light operating at a frequency of 60 Hz changes the illumination and the camera could capture at the moment of less illumination which would change the results)? |
ICLR | Title
Learning Higher-Order Dynamics in Video-Based Cardiac Measurement
Abstract
Computer vision methods typically optimize for first-order dynamics (e.g., optical flow). However, in many cases the properties of interest are subtle variations in higher-order changes, such as acceleration. This is true in the cardiac pulse, where the second derivative can be used as an indicator of blood pressure and arterial disease. Recent developments in camera-based vital sign measurement have shown that cardiac measurements can be recovered with impressive accuracy from videos; however, the majority of research has focused on extracting summary statistics such as heart rate. Less emphasis has been put on the accuracy of waveform morphology that is necessary for many clinically impactful scenarios. In this work, we provide evidence that higher-order dynamics are better estimated by neural models when explicitly optimized for in the loss function. Furthermore, adding second-derivative inputs also improves performance when estimating second-order dynamics. By incorporating the second derivative of both the input frames and the target vital sign signals into the training procedure, our model is better able to estimate left ventricle ejection time (LVET) intervals.
N/A
Computer vision methods typically optimize for first-order dynamics (e.g., optical flow). However, in many cases the properties of interest are subtle variations in higher-order changes, such as acceleration. This is true in the cardiac pulse, where the second derivative can be used as an indicator of blood pressure and arterial disease. Recent developments in camera-based vital sign measurement have shown that cardiac measurements can be recovered with impressive accuracy from videos; however, the majority of research has focused on extracting summary statistics such as heart rate. Less emphasis has been put on the accuracy of waveform morphology that is necessary for many clinically impactful scenarios. In this work, we provide evidence that higher-order dynamics are better estimated by neural models when explicitly optimized for in the loss function. Furthermore, adding second-derivative inputs also improves performance when estimating second-order dynamics. By incorporating the second derivative of both the input frames and the target vital sign signals into the training procedure, our model is better able to estimate left ventricle ejection time (LVET) intervals.
1 INTRODUCTION
Many of the properties of dynamical systems only become apparent when they move or change as the result of forces applied to them. In most applications we are interested in behavior in terms of positions, velocities, and accelerations, and in some cases the properties of interest may only be observed in subtle variations in the higher-order dynamics (e.g., acceleration). Whether monitoring the flight of a drone to create a control mechanism for stabilization or analyzing the fluid dynamics of the cardiovascular system in the human body, there can be a need to recover these dynamics accurately. However, most video-based systems are trained on lower-order signals, such as position in the case of landmark tracking or velocity/rate-of-change (optical flow) in the case of visual odometry (Nister et al., 2004). Thus, they optimize for lower (zeroth or first) order dynamics. Does this harm their ability to estimate higher order changes? We hypothesize that networks trained to predict temporal signals will benefit from combined multi-derivative learning objectives. To test this hypothesis, we explore video-based cardiac measurement as an example application with a complex dynamical system (the cardiovascular system) and introduce simple but effective changes to the inputs and outputs to significantly improve the measurement of clinically relevant parameters.
Photoplethysmography (PPG) is a low-cost and non-invasive method for measuring the cardiovascular blood volume pulse (BVP). There are many clinical applications for PPG as the signal contains substantial information about health state and risk of cardiovascular diseases (Elgendi et al., 2019; Reisner et al., 2008; Pereira et al., 2020). In the current world, an acutely relevant application of PPG is for pulse oximetry (i.e. measuring pulse rate and blood oxygen saturation) as it can be used to detect low blood oxygen levels associated with the onset of COVID-19 (Greenhalgh et al., 2021). The COVID-19 pandemic has accelerated the adoption of teleheath systems (Annis et al., 2020) with more and more clinical consultations being conducted virtually. Therefore, techniques for remotely monitoring physiological vital signs are becoming increasingly important (Gawałko et al., 2021; Rohmetra et al., 2021). As one might expect, with many clinical applications the precision with which the PPG signal can be recovered is of critical importance when it comes to accurate inference of downstream conditions and the confidence of practitioners in the technology.
To date, in video-based PPG measurement the primary focus of analysis and evaluation has been on features extracted from the raw waveform or its first derivative (Chen & McDuff, 2018; Liu et al., 2020; 2021; Poh et al., 2010a). However, the second derivative of the PPG signal highlights subtle features that can be difficult to discern from those in the lower derivatives. Since the second derivative reflects the acceleration (Takazawa, 1993) or the rate-of rate-of change of the blood volume, it is more closely related to the change in pressure applied by the heart on blood vessels and its relation to vascular health.
An example of a particular feature accentuated in the second-derivative (i.e. acceleration) PPG is the dicrotic notch (see Fig. 1), which occurs when the heart’s aortic valve closes due to the pressure gradient between the aorta and the left ventricle. The dicrotic notch may only manifest as an inflection in the raw PPG wave; however, in the second derivative this inflection is a maxima. Inoue et al. (2017) found that the second derivative of the PPG signal can be used as an indicator of arterial stiffness - which itself is an indicator of cardiac disease. Takazawa et al. (1998) evaluated the second derivative of the PPG waveform and found that its characteristic shape can be used to estimate vascular aging, which was higher in subjects with a history of diabetes mellitus, hypertension, hypercholesterolemia, and ischemic heart disease compared to age-matched subjects without.
While the second derivative of a signal can be a rich source of information, often the zeroth- or first-order dynamics are given priority. For example, Chen & McDuff (2018) observed that training video- or imaging-based PPG (iPPG) models using first-derivative (difference) frames as input with an objective function of minimizing the mean squared error between the prediction and the first derivative of the target BVP signal was effective. This approach was used because the authors were designing their system to measure systolic time intervals only, which are most prominent in the lower order signals. However, they did not combine this with higher-order derivatives nor did they do any systematic comparison across derivative objectives.
We argue that a model trained with an explicit second-derivative (acceleration) objective should produce feature representations that better preserve/recover these dynamics than methods that simply derive acceleration from velocity. We observe that providing the model with a second derivative input also helps the network to better predict both the first and second derivative signals.
Finally, as diverse labeled data for training supervised models for predicting dynamical signals is often difficult to come by, we build on promising work in simulation to obtain our training data. Since light is absorbed and reflected differently for different skin tones (Bent et al., 2020; Dasari et al., 2021) having a training set that represents the true diversity of the target population is crucial for sufficient generalization. Our results show that models trained with synthetic data can learn parameters that successfully generalize to real human subjects. While this is not a central focus of our paper, we believe that it presents a promising proof-of-concept for future work.
To summarize, in this paper, we 1) demonstrate that directly incorporating higher-order dynamics into the loss function improves the quality of the estimated higher-order signals in terms of waveform morphology, 2) show that adding second-derivative inputs additionally improves performance, and 3) we describe a novel deep learning architecture that incorporates the second derivative input frames and target signals and evaluate it against clinical-grade contact sensor measurements.
2 BACKGROUND
Learning Higher-Order Motion from Videos. Despite its significance in many tasks, acceleration is often not explicitly modeled in many computer vision methods. However, there is a small body of literature that has considered how to recover (Edison & Jiji, 2017) and amplify optical acceleration (Zhang et al., 2017; Takeda et al., 2018). Given that acceleration can be equally as important as position and velocity in understanding dynamical systems, we argue that this topic deserves further attention.
A particularly relevant problem to ours is identifying small changes in videos (Wu et al., 2012; Zhang et al., 2017; Chen & McDuff, 2020; Takeda et al., 2018), and specifically in acceleration in the presence of relatively large motion. As an example, in the iPPG prediction task the aim is to identify minor changes in skin coloring due to variation in blood flow patterns, while ignoring major pixel changes due to subject or background motion. One method proposed by Zhang et al. (2017) for overcoming this signal separation problem is Video Acceleration Magnification, in which
large motions are assumed to be linear on the temporal scale of small changes while small changes deviate from this linearity. An extension to this method focused on making it more robust to sudden motions (Takeda et al., 2018). In both cases, a combination of Eulerian and Lagrangian approaches was used, rather than utilizing a supervised learning paradigm. Of relevance here is also work magnifying subtle physiological changes using neural architectures (Chen & McDuff, 2020), which have been shown to effectively separate signal and noise in both the spatial and temporal domains.
Our work might be most closely related to prior research into feature descriptors for optical acceleration (Edison & Jiji, 2017). One example uses histograms of optical acceleration to effectively encode the motion information. However, this work also defined handcrafted features, rather than learning representations from data. Our work is also related conceptually to architectures such as SlowFast (Feichtenhofer et al., 2019) in that it utilizes multiple “pathways” to learn different properties of the dynamics within a video. We were inspired by this approach; however, unlike SlowFast, we focus specifically on higher-order pathways rather than slower and faster frame sequences.
Video-based Cardiac Measurement. Diffuse reflections from the body vary depending on how much light is absorbed in the peripheral layers of the skin and this is influenced by the volume of blood in the capillaries. Digital cameras can capture these very subtle changes in light which can then be used to recover the PPG signal (Wu et al., 2000; Takano & Ohta, 2007; Verkruysse et al., 2008; Poh et al., 2010a). The task then becomes separating pixel changes due to blood flow from those due to body motions, ambient lighting variation, and other environmental factors that we consider noise in this context. While earlier methods leveraged source separation algorithms (Wang et al., 2016), such as ICA (Poh et al., 2010a) or PCA (Lewandowska et al., 2011), neural models provide the current state-of-the-art in this domain (Chen & McDuff, 2018; Liu et al., 2020; 2021; Song et al., 2021; Lu et al., 2021). These architectures support learning spatial attention and sourcespecific temporal variations and separating these from various sources of noise. Typically, the input to these models are normalized video frames and the output is a 1-D time series prediction of the PPG waveform or the heart rate. A vast majority of work has evaluated these methods based errors in heart rate estimation, which considers the dominant or “systolic” frequency alone. Only a few papers have used more challenging evaluation criteria, such as the estimation of systolic to diastolic peaks (McDuff et al., 2014).
3 OPTICAL BASIS
We start by providing an optical basis for the measurement of the pulse wave using a camera and specifically the second derivative signal. Starting with Shafer’s Dichromatic Reflection Model (DRM)(Wang et al., 2016; Chen & McDuff, 2018; Liu et al., 2020), we want to understand how higher order changes in the blood volume pulse impact pixel intensities to motivate the design of our inputs and loss function. Based on the DRM model the RGB values captured by the cameras as given by:
Ck(t) = I(t) · (vs(t) + vd(t)) + vn(t) (1)
where I(t) is the luminance intensity level, modulated by the specular reflection vs(t) and the diffuse reflection vd(t). Quantization noise of the camera sensor is captured by vn(t). I(t) can be decomposed into stationary and time-varying parts vs(t) and vd(t) (Wang et al., 2016):
vd(t) = ud · d0 + up · p(t) (2)
where ud is the unit color vector of the skin-tissue; d0 is the stationary reflection strength; up is the relative pulsatile strengths caused by hemoglobin and melanin absorption; p(t) represents the physiological changes. Let us assume for simplicity in this case that the luminance, I (i.e., illumination in the video) is constant, not time varying, which is a reasonable assumption for short videos and those in which the subject can control their environment (e.g., indoors). Then differentiating twice with respect to time, t:
∂2Ck(t)
∂t2 = I · (∂
2vs(t)
∂t2 + ∂2ud ∂t2 + ∂2up(t) ∂t2 + ∂2vn(t) ∂t2 ) (3)
The non-time varying part ud · d0 becomes zero. Thus simplifying the equation to:
∂2Ck(t)
∂t2 = I · (∂
2vs(t)
∂t2 +
∂2up(t)
∂t2 +
∂2vn(t)
∂t2 ) (4)
Furthermore, if specular reflections do not vary over time (e.g., if the camera and subject are stationary), the vs(t) term will also become zero. This means that the second derivative changes in pixel intensities are a sum of second derivative changes in PPG and camera noise. With current camera technology, and little video compression, image noise is typically much smaller than the PPG signal. Therefore, we would expect the pixel changes to be dominated by second derivative variations in the blood volume pulse:
∂2Ck(t)
∂t2 = I · ∂
2up(t)
∂t2 (5)
As such, we can infer that when attempting to estimate the second derivative of the PPG signal from videos without very large motions or illumination changes, second derivative changes in the pixel space would appear helpful and that minimizing the loss between the second derivative prediction and ground truth will be the simplest learning task for the algorithm when the input is secondderivative pixel changes.
4 OUR MODEL
time (s)
BVP
First Derivative
Second Derivative
Left Ventricle Ejection Time (LVET) Systolic Peak
Dicrotic Norch
Systolic Foot
pant’s skin) and ignore noisy regions (e.g. background). These attention masks are shared between the first-derivative branch and the second-derivative branch as we expect the same spatial regions to contain first and second derivative information. After feature representations are extracted from frames within each derivative-input branch, the features are concatenated together for each time step and the target signals are then generated using recurrent neural network (RNN) layers. A diagram depicting the architecture used for our experimentation is shown in Fig. 2.
4.1 PREDICTING MULTI-DERIVATIVE TARGET SIGNALS
The goal of iPPG is to obtain an estimate of the underlying PPG signal p(t) (as in Eq. 2), while only observing video frames X(t) containing a subject’s skin (in this case the face). Mathematically, this can be described as learning a function: p̂(t) = f(X(t)) or, because we are interested in changes in blood volume changes, estimating the first derivative of the PPG signal: p̂′(t) = f(X(t), X ′(t)) , where the first derivative PPG signal is defined as: p′(t) = p(t)− p(t− 1). Using prior methods, to obtain an estimate of the PPG signal’s second derivative, one would either differentiate the predicted PPG signal twice, or differentiate the predicted first-derivative PPG once, rather than calculate the acceleration PPG directly. In contrast, we explicitly predict the acceleration PPG waveform as a target signal. We define the second derivative waveform as the difference between consecutive first-derivative time points: p′′(t) = p′(t) − p′(t − 1). Then we train our model to predict the second derivative waveform p̂′′(t) = f(X(t), X ′(t)) given a set of input video frames X(t) and the corresponding normalized difference frames X ′(t). To optimize our model parameters we minimize the mean squared difference between the true and predicted second derivative waveforms:
$$\mathcal{L} = \frac{1}{T} \sum_{t=1}^{T} \left( p''(t) - \hat{p}''(t) \right)^2 \quad (6)$$
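As an illustration, a minimal NumPy sketch of the target construction and the loss in Eq. 6 is shown below; the synthetic 30-sample window is a placeholder, not data from our datasets.

```python
import numpy as np

def derivative_targets(ppg):
    """Finite-difference targets: p'(t) = p(t) - p(t-1), p''(t) = p'(t) - p'(t-1)."""
    first = np.diff(ppg)          # first-derivative (velocity) PPG
    second = np.diff(first)       # second-derivative (acceleration) PPG
    return first, second

def second_derivative_loss(true_sd, pred_sd):
    """Mean squared error between true and predicted acceleration PPG (Eq. 6)."""
    return np.mean((true_sd - pred_sd) ** 2)

# toy usage with a synthetic 30-sample window
ppg_window = np.sin(np.linspace(0, 2 * np.pi, 30))
_, true_sd = derivative_targets(ppg_window)
print(second_derivative_loss(true_sd, np.zeros_like(true_sd)))
```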
4.2 LEVERAGING MULTI-DERIVATIVE INPUTS
It has been previously shown that the normalized difference frames are useful for predicting the first derivative PPG waveforms. Therefore, we hypothesized that incorporating the second derivative of the raw video frames X ′′(t) = X ′(t) − X ′(t − 1) (i.e. the difference-of-difference frames) may also be useful for predicting the PPG signal and its derivatives. Similar to the difference frames, we added a separate convolutional attention branch, where the attention mask is shared between both branches (see Fig. 2). Sharing the attention mask is a reasonable assumption as we would expect skin regions to all exhibit the signal and similar dynamics. After the feature maps in each branch are pooled into a single value per feature at each time step, the learned representations are concatenated together. These concatenated features over time are used as input sequences to the recurrent layers that generate the target waveforms.
Given that difference frames X ′(t) are useful for predicting the first derivative PPG waveforms, features learned from the difference-of-difference frames X ′′(t) may be beneficial for predicting the second derivative PPG signal. In theory, if difference-of-difference features are indeed useful for predicting the acceleration PPG, then the CAN network should be able to learn those features
from the difference frames due to the 3D convolutional operations. However, manually adding the difference-of-difference frames could help guide the model. To examine the effect of combining higher-order inputs and target signals, we fit a model p̂′′(t) = f(X(t), X ′(t), X ′′(t)) to predict the second-derivative PPG.
5 EXPERIMENTS
In this section we will describe the data used to train and evaluate our method and perform a systematic ablation study in which we test different combinations of inputs and outputs.
5.1 DATA
Training To train our models using a large and diverse set of subjects, we leverage recent work that uses highly-parameterized synthetic avatars to generate videos containing simulated subjects with various movements and backgrounds (McDuff et al., 2020). To drive changes in the synthetic avatars’ appearance, the PPG signal is used to manipulate the base skin color and the subsurface radius (McDuff et al., 2020). The subsurface scattering is spatially weighted using an artist-created subsurface scattering radius texture that captures variations in the thickness of the skin across the face. Using physiological waveform signals from the MIMIC Physionet (Goldberger Ary L. et al., 2000) database, we randomly sampled windows of PPG waveforms from real patients. The physiological waveform data were sampled to maximize examples from different patients. Using the synthetic avatar pipeline and MIMIC waveforms, we generated 2,800 6-second videos, where half of the videos were generated using hand-crafted facial motion/action signals, and the other half using facial motion/action signals extracted using landmark detection on real videos. Examples of the avatars can be found in Appendix A.1.1.
Testing Given that we are focusing on recovering very subtle changes in pixel intensities due to the blood volume pulse, we use a highly controlled and very accurately annotated dataset of real videos for evaluation. The AFRL dataset (Estepp et al., 2014) consists of 300 videos from 25 participants (17 male and 8 female). Each video in the dataset has a resolution of 658x492 pixels sampled at 30 Hz. Ground truth PPG signals were recorded using a contact reflective PPG sensor attached to the subject’s index finger. Each participant was instructed to perform three head motion tasks including rotating the head along the horizontal axis, rotating the head along the vertical axis, and rotating the head randomly once every second to one of nine predefined locations. Since our goal in this work was to compare methods for estimating subtle waveform dynamics, which can be more difficult to do in the presence of large motion, we focused here on the first two AFRL tasks where participant motion is minimal. Examples of AFRL participants can be found in Appendix A.1.1.
5.2 IMPLEMENTATION DETAILS
We trained our models using a large dataset of generated synthetic avatars and evaluated model performance on the AFRL dataset, which consists of real human subjects. For each video, we first cropped the video frames so that the face was approximately centered. Next, we reduced the resolution of the video to 36x36 pixels to reduce noise and computational requirements while maintaining useful spatial signal Verkruysse et al. (2008); Wang et al. (2017); Poh et al. (2010b). The input to the attention branch was T raw video frames. The input to the first-derivative branch was a set of T normalized difference frames, calculated by subtracting consecutive frames and normalizing by the sum. The input to the second-derivative branch was a set of T − 1 difference-of-difference frames (second derivative frames), calculated by subtracting consecutive normalized difference frames (i.e. the T frames used as input to the motion branch). In our experiments, we used a window size of T = 30 video frames to predict the target signals for the corresponding 30 time points. During training, a sliding window of 15 frames (i.e. 50% overlap between consecutive windows) was used to increase the total number of training examples. The model was implemented using Tensorflow (Abadi et al., 2016) and trained for eight epochs using the Adam (Kingma & Ba, 2017) optimizer with a learning rate of 0.001, and a batch size of 16.
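For concreteness, a sketch of this frame preprocessing is shown below. It is a minimal illustration; the normalized-difference formula (subtracting consecutive frames and dividing by their sum) matches the description above, but the epsilon term and data types are assumptions.

```python
import numpy as np

def preprocess_clip(frames, eps=1e-7):
    """Build the three branch inputs from a (T, H, W, 3) video clip.

    - raw frames: appearance/attention branch
    - normalized difference frames X'(t): subtract consecutive frames and
      normalize by their sum (first-derivative branch)
    - difference-of-difference frames X''(t): subtract consecutive normalized
      difference frames (second-derivative branch)
    """
    frames = frames.astype(np.float32)
    diff = (frames[1:] - frames[:-1]) / (frames[1:] + frames[:-1] + eps)
    diff_of_diff = diff[1:] - diff[:-1]
    return frames, diff, diff_of_diff
```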
5.3 SYSTEMATIC EVALUATION
To measure the effect of using multi-derivative inputs and outputs, we systematically removed the second-derivative parts of the model and used quantitative and qualitative methods to examine the change in model performance. To quantitatively measure the quality of the predicted signal, we calculated two clinically important parameters - heart rate (HR) and the left ventricular ejection time (LVET) interval (see Appendix A.1.3 for details). Video-based HR prediction has been a major focus of iPPG applications, with many methods showing highly-accurate results. HR can be determined through peak detection or by determining the dominant frequency in the signal (e.g. using fast Fourier transform). Since current iPPG methods are able to achieve sufficiently-low error rates on the HR estimation task, we believe that metrics that capture the quality of waveform morphology should also be considered.
The LVET interval is defined as the time between the opening and closing of the heart’s aortic valve, i.e. the systolic phase when the heart is contracting (see Fig. 1). In the PPG waveform, this interval begins at the diastolic point (i.e. the global minimum pressure within a heartbeat cycle) and ends with the dicrotic notch (i.e. local minimum occurring after systolic peak, marking the end of the systolic phase and the beginning of the diastolic phase). LVET typically is correlated with cardiac output (stroke volume × heart rate)(Hamada et al., 1990), and has been shown to be an indicator of future heart failure as the time interval decreases with left-ventricle dysfunction (Biering-Sørensen et al., 2018).
Calculating LVET requires identification of the diastolic point and the dicrotic notch. The diastolic point is a (global) minimum point within a heart beat, meaning it corresponds to a positive peak
in the second derivative signal according to the second-derivative test. Similarly, the dicrotic notch is a (local) minimum in the PPG signal, and appears as a positive peak in the second derivative following the diastolic peak in time. Because the dicrotic notch can often be a subtle feature, it is much easier to identify in the PPG’s second derivative compared to the raw signal. Therefore, it is a good example of clinically-important waveform morphology that is best captured by higher-order dynamics.
Removing the second-derivative frames In Table 1, quantitative evaluation metrics (HR and LVET) are shown for all experiments in our ablation study, using tasks 1 and 2 from the AFRL dataset. Removing the second-derivative (SD) frames results in the model configurations in the top three rows of Table 1. When SD frames are removed, the result is a general decrease in the HR error. However, there is also a general increase in LVET interval prediction error, which suggests that including the SD frames leads to improved estimation of waveform morphology.
Removing the first-derivative target signal Intuitively, models that are optimized using a loss function specifically focusing on a single objective will perform better in terms of that objective compared to models trained with loss functions containing multiple objectives. By removing the first-derivative target signal from the training objective, the model is forced to focus exclusively on the second-derivative (SD) objective. Empirically, this leads the SD-Optimized model to have the lowest LVET MAE of any model configuration (last row of Table 1). While the SD-Optimized model achieves the lowest LVET error, the HR error is the highest of any configuration. These results suggest that there are performance trade-offs to consider when designing a system for particular downstream tasks.
Removing the second-derivative target signal When the second-derivative target signal is removed from the model, the optimization procedure is purely focused on improving the prediction of the first derivative. The FD-Optimized model (first row of Table 1) serves as a form of baseline, since previous works have focused on using first-derivative (FD) frames to predict the first-derivative PPG signal. Fig. 4 shows a Bland-Altman plot (Martin Bland & Altman, 1986) comparing the FD-Optimized and SD-Optimized error distributions as a function of the ground-truth values for both HR and LVET intervals.
Perhaps unsurprisingly, our results show the FD-Optimized model achieves the lowest HR MAE (0.66 ± 2.07 BPM) of any model configuration examined and, in particular, improves HR estimation compared to models without the first derivative target signal. However, the FD-Optimized model also has the worst performance in terms of the LVET MAE (108.26 ± 56.19 ms) of any model configuration. This suggests that while the configuration provides an accurate assessment of the heartbeat frequency, the quality of predicted waveform morphology can be improved by incorporating second-derivative information. We observe similar results when evaluating the models on the UBFC (Bobbia et al., 2019) and PURE (Stricker et al., 2014) datasets (see Appendix Table 3).
Qualitative comparisons For a qualitative comparison, in Fig. 3 we plot the ground-truth, FD-Optimized, and SD-Optimized PPG, first derivative, and second derivative. Additionally, in the bottom panel of Fig. 3 we overlay the true and predicted LVET intervals for each signal to demonstrate model performance. For additional qualitative comparisons, see Appendix A.2.
6 CONCLUSIONS
Using the task of video-based cardiac measurement, we have shown that, when learning representations for dynamical systems, appropriately designing the inputs and optimizing for the derivatives of interest can make a significant difference in model performance. Specifically, there is a trade-off between optimizing for lower-order and higher-order dynamics. Given the importance of second derivatives (i.e., acceleration) in this, and many other, video understanding tasks, we believe it is important to understand the trade-off between optimizing for targets that capture different dynamic properties. In cardiac measurement in particular, the LVET is one of the more important clinical parameters and can be better estimated using higher-order information. While we have investigated the importance of higher-order dynamics in the context of video-based cardiac measurement, this paradigm is generally applicable. We believe future work will continue to showcase the importance of explicitly incorporating higher-order dynamics.
7 ETHICS STATEMENT
Camera-based cardiac measurement could help improve the quality of remote health care, as well as enable less invasive measurement of important physiological signals. The COVID-19 pandemic has revealed the importance of tools to support remote care. These needs are likely to be particularly acute in low-resource settings where distance, travel costs, and time are a great barrier to access quality healthcare. However, given the non-contact nature of the technology, it could also be used to measure personal data without the knowledge of the subject. Just as is the case with traditional contact sensors, it must be made transparent when these methods are being used, and subjects should be required to consent before physiological data is measured or recorded. There should be no penalty for individuals who decline to be measured. New bio-metrics laws can help protect people from unwanted physiological monitoring, or discrimination based on pre-existing health conditions detected via non-contact monitoring. However, social norms also need to be constructed around the use of this technology.
In this work, data were collected under informed consent from the participants.
A APPENDIX
A.1 SUPPLEMENTAL METHODS
A.1.1 EXAMPLE VIDEO FRAMES
A.1.2 MODEL ARCHITECTURE
The first two 3D convolutional layers in each branch each have 16 filters and the final two 3D convolutional layers in each branch each have 32 filters. All 3D convolutional layers in the network use a filter size of 3x3x3. All convolutional layers are padded such that they have the same height, width, and number of time steps in each consecutive layer. Convolutional layers use the hyperbolic tangent activation function, except for the convolutional layers used for the attention masks, which use a sigmoid activation function for generating the soft masks. Attention masks (one per time step) are applied via an element-wise multiplication of the attention mask with each 3D convolutional feature map. Average pooling layers reduce the height and width of the frames by a factor of two, except for the final average pooling layer that pools over the entire frame (i.e. reduces each feature map to a single value per time step). Dropout (25% probability) is applied after every pooling layer to reduce overfitting.
After the final pooling layer, the learned features for each time step in a branch are concatenated together (i.e. combined across branches to share information). Each target signal uses its own set of (2) RNN layers to read the concatenated features over time and generate a target sequence. The first RNN layer is implemented as a bi-directional GRU (hyperbolic tangent activation function) with 64 total units (32 each direction). The second RNN layer is a GRU (linear activation function) layer with 1 output value per time step.
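A simplified Keras sketch of this architecture is given below. It is a minimal illustration rather than our exact implementation: the attention mask is applied only once per branch at full resolution, the 1x1x1 convolution used for the mask and the reduced layer count are assumptions, and shapes are fixed to the 30-frame, 36x36 setting described above.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

T, H, W = 30, 36, 36  # window length and input resolution used in our experiments

def motion_branch(x, mask):
    """One derivative-input branch: 3D convs, shared soft attention, pooling."""
    x = layers.Conv3D(16, 3, padding="same", activation="tanh")(x)
    x = layers.Conv3D(16, 3, padding="same", activation="tanh")(x)
    x = layers.Multiply()([x, mask])              # shared attention mask (broadcast)
    x = layers.AveragePooling3D((1, 2, 2))(x)     # 36x36 -> 18x18
    x = layers.Dropout(0.25)(x)
    x = layers.Conv3D(32, 3, padding="same", activation="tanh")(x)
    x = layers.Conv3D(32, 3, padding="same", activation="tanh")(x)
    x = layers.AveragePooling3D((1, 18, 18))(x)   # pool over the entire frame
    x = layers.Dropout(0.25)(x)
    return layers.Reshape((T, 32))(x)             # one feature vector per time step

def rnn_head(features):
    """Per-target recurrent head: bi-directional GRU, then a 1-unit linear GRU."""
    h = layers.Bidirectional(layers.GRU(32, return_sequences=True))(features)
    return layers.GRU(1, activation="linear", return_sequences=True)(h)

raw = layers.Input((T, H, W, 3))    # raw frames (attention branch)
d1 = layers.Input((T, H, W, 3))     # normalized difference frames
d2 = layers.Input((T, H, W, 3))     # difference-of-difference frames (padded to T)

# Soft attention masks (one per time step) computed from the raw frames.
a = layers.Conv3D(16, 3, padding="same", activation="tanh")(raw)
mask = layers.Conv3D(1, 1, padding="same", activation="sigmoid")(a)

features = layers.Concatenate()([motion_branch(d1, mask), motion_branch(d2, mask)])
pred_first = rnn_head(features)     # first-derivative PPG target
pred_second = rnn_head(features)    # second-derivative (acceleration) PPG target

model = Model([raw, d1, d2], [pred_first, pred_second])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
```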
A.1.3 METRIC CALCULATION
Heart Rate (HR) estimation To estimate the heart rate, we use a fast Fourier transform (FFT)-based method to calculate the dominant frequency in the signal, which corresponds to the heart rate. We first estimate the power spectral density using the “periodogram” function from the scipy.signal (Virtanen et al., 2020) library. Then we band-pass filter the PPG signal, with cutoff frequencies of 0.75-4.0 Hz (corresponding to a minimum HR of 45 BPM and maximum HR of 240 BPM). Finally, we select the frequency with the maximum power, and use this as our estimated HR.
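A minimal sketch of this estimator is shown below; for brevity it restricts the periodogram to the 0.75-4.0 Hz band rather than explicitly band-pass filtering the time-domain signal, which we assume is interchangeable for the purpose of picking the dominant frequency.

```python
import numpy as np
from scipy.signal import periodogram

def estimate_hr(ppg, fs=30.0, f_lo=0.75, f_hi=4.0):
    """FFT-based heart rate estimate: dominant spectral peak in 45-240 BPM."""
    freqs, power = periodogram(ppg, fs=fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    dominant = freqs[band][np.argmax(power[band])]
    return dominant * 60.0  # Hz -> beats per minute
```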
Left Ventricle Ejection Time (LVET) estimation The LVET time is defined as the time interval between the diastolic peak and the dicrotic notch. To calculate this interval, we first identified the diastolic point in the second derivative (SD) of the PPG signal, which, because it is a “global” minima in the PPG heartbeat, appears as a “global” maxima (positive SD value) in the SD PPG. Then, in each predicted SD PPG waveform, we identified candidate dicrotic notch points. Since the dicrotic notch manifests as a “local” minima in the PPG signal, it appears as a “local” maxima in the PPG SD signal (positive SD value). Using peak detection (“find peaks” function in the scipy.signal library (Virtanen et al., 2020)) we identify candidate dicrotic notch points by finding local peaks that occur after a diastolic point, and use the dicrotic notch candidate point that is closest in time to the reference diastolic point.
Because both the ground truth PPG (and therefore its derivatives) and, in particular, the predicted PPG (and its derivatives), contain signal artifacts and noise, the peak detection process is not perfect. To reduce variability in the LVET interval estimates due to noise, we apply a smoothing operation. Specifically, we estimate the mean LVET interval within a 10-second non-overlapping window and use this as our estimate of true/predicted LVET. See Appendix Fig. 7 for example LVET intervals over time, and the estimated LVET intervals after smoothing within windows.
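The sketch below illustrates this procedure; the peak-detection thresholds (minimum beat separation, prominence) are illustrative assumptions rather than the exact values used in our evaluation.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_lvet_intervals(sd_ppg, fs=30.0, min_beat_sep_s=0.5):
    """LVET intervals (seconds) from a second-derivative (SD) PPG signal.

    Diastolic points are taken as prominent positive SD peaks; the dicrotic
    notch candidate for each beat is the next positive SD peak after it.
    """
    diastolic, _ = find_peaks(sd_ppg, distance=int(min_beat_sep_s * fs),
                              prominence=np.std(sd_ppg))
    candidates, _ = find_peaks(sd_ppg)
    times, intervals = [], []
    for d in diastolic:
        later = candidates[candidates > d]
        if len(later) > 0:                    # closest-in-time notch candidate
            times.append(d / fs)
            intervals.append((later[0] - d) / fs)
    return np.array(times), np.array(intervals)

def smooth_lvet(times_s, intervals_s, window_s=10.0):
    """Mean LVET within non-overlapping 10-second windows (noise reduction)."""
    smoothed = []
    for start in np.arange(0.0, times_s.max() + window_s, window_s):
        in_win = (times_s >= start) & (times_s < start + window_s)
        if in_win.any():
            smoothed.append(intervals_s[in_win].mean())
    return np.array(smoothed)
```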
A.2 SUPPLEMENTAL RESULTS | 1. What is the focus and contribution of the paper regarding motion signal estimation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its technical novelty and ablation study?
3. Do you have any concerns or questions about the results, especially regarding the first-order and second-order motion performance?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, including the relevance of the references provided? | Summary Of The Paper
Review | Summary Of The Paper
Estimating the motion signal from video is an important task with applications in computer vision and healthcare. In this paper, a multi-derivative convolutional attention network is used to estimate higher-order derivatives of 1D heart-beat signals, with video data as input. A second-order loss is used and the results are evaluated with first-order metrics (HR MAE) and second-order metrics (LVET MAE). The results show that the second-order loss improves the second-order metrics while harming first-order performance.
Review
Strengths:
-Thorough ablation study
-Considering second-order motion is a novel topic
Weakness:
-Technical novelty seems to be limited. The convolutional attention network (CAN) paradigm was introduced before, and it would be better to have some discussion of what new components are added in comparison with the original CAN.
-Further discussion is required for some results. More discussion is appreciated for the results in Table 1. Intuitively, the first-order and second-order motion are highly correlated. If the first-order performance improves, the second-order performance should also improve, and vice versa. On the other hand, doing both first- and second-order estimation could be viewed as multi-task learning, and they should benefit from each other. However, the results show that for this task, removing the first-order loss improves second-order performance. Also, if both losses are added, the second-order performance decreases. This is a somewhat strange phenomenon and more discussion is appreciated.
-Lacks some references in PPG/cardiac motion estimation. The strange results might be due to overfitting or generalization issues. Meta-learning has been shown to be useful for PPG estimation/cardiac motion estimation tasks. It would be interesting to add the following references and include some discussion:
Lee, Eugene, Evan Chen, and Chen-Yi Lee. "Meta-rppg: Remote heart rate estimation using a transductive meta-learner." European Conference on Computer Vision. Springer, Cham, 2020.
Yu, Hanchao, et al. "Foal: Fast online adaptive learning for cardiac motion estimation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. |
ICLR | Title
BC-IRL: Learning Generalizable Reward Functions from Demonstrations
Abstract
How well do reward functions learned with inverse reinforcement learning (IRL) generalize? We illustrate that state-of-the-art IRL algorithms, which maximize a maximum-entropy objective, learn rewards that overfit to the demonstrations. Such rewards struggle to provide meaningful rewards for states not covered by the demonstrations, a major detriment when using the reward to learn policies in new situations. We introduce BC-IRL, a new inverse reinforcement learning method that learns reward functions that generalize better when compared to maximum-entropy IRL approaches. In contrast to the MaxEnt framework, which learns to maximize rewards around demonstrations, BC-IRL updates reward parameters such that the policy trained with the new reward matches the expert demonstrations better. We show that BC-IRL learns rewards that generalize better on an illustrative simple task and two continuous robotic control tasks, achieving over twice the success rate of baselines in challenging generalization settings.
1 INTRODUCTION
Reinforcement learning has demonstrated success on a broad range of tasks from navigation Wijmans et al. (2019), locomotion Kumar et al. (2021); Iscen et al. (2018), and manipulation Kalashnikov et al. (2018). However, this success depends on specifying an accurate and informative reward signal to guide the agent towards solving the task. For instance, imagine designing a reward function for a robot window cleaning task. The reward should tell the robot how to grasp the cleaning rag, how to use the rag to clean the window, and to wipe hard enough to remove dirt, but not hard enough to break the window. Manually shaping such reward functions is difficult, non-intuitive, and time-consuming. Furthermore, the need for an expert to design a reward function for every new skill limits the ability of agents to autonomously acquire new skills. Inverse reinforcement learning (IRL) (Abbeel & Ng, 2004; Ziebart et al., 2008; Osa et al., 2018) is one way of addressing the challenge of acquiring rewards by learning reward functions from demonstrations and then using the learned rewards to learn policies via reinforcement learning. When compared to direct imitation learning, which learns policies from demonstrations directly, potential benefits of IRL are at least two-fold: first, IRL does not suffer from the compounding error problem that is often observed with policies directly learned from demonstrations (Ross et al., 2011; Barde et al., 2020); and second, a reward function could be a more abstract and parsimonious description of
the observed task that generalizes better to unseen task settings (Ng et al., 2000; Osa et al., 2018). This second potential benefit is appealing as it allows the agent to learn a reward function to train policies not only for the demonstrated task setting (e.g. specific start-goal configurations in a reaching task) but also for unseen settings (e.g. unseen start-goal configurations), autonomously without additional expert supervision.
However, thus far the generalization properties of reward functions learned via IRL are poorly understood. Here, we study the generalization of learned reward functions and find that prior IRL methods fail to learn generalizable rewards and instead overfit to the demonstrations. Figure 1 demonstrates this on a task where a point mass agent must navigate in a 2D space to a goal location at the center. An important reward characteristic for this task is that an agent, located anywhere in the state-space, should receive increasing rewards as it gets closer to the goal. Most recent prior work Fu et al. (2017); Ni et al. (2020); Finn et al. (2016c) developed IRL algorithms that optimize the maximum entropy objective (Ziebart et al., 2008) (Figure 1b), which fails to capture goal distance in the reward. Instead, the MaxEnt objective leads to rewards that separate non-expert from expert behavior by maximizing reward values along the expert demonstration. While useful for imitating the experts, the MaxEnt objective prevents the IRL algorithms from learning to assign meaningful rewards to other parts of the state space, thus limiting generalization of the reward function.
As a remedy to the reward generalization challenge in the maximum entropy IRL framework, we propose a new IRL framework called Behavioral Cloning Inverse Reinforcement Learning (BC-IRL). In contrast to the MaxEnt framework, which learns to maximize rewards around demonstrations, the BC-IRL framework updates reward parameters such that the policy trained with the new reward matches the expert demonstrations better. This is akin to the model-agnostic meta-learning (Finn et al., 2017) and loss learning (Bechtle et al., 2021) frameworks where model or loss function parameters are learned such that the downstream task performs well when utilizing the meta-learned parameters. By using gradient-based bi-level optimization Grefenstette et al. (2019), BC-IRL can optimize the behavior cloning loss to learn the reward, rather than a separation objective like the maximum entropy objective. Importantly, to learn the reward, BC-IRL differentiates through the reinforcement learning policy optimization, which incorporates exploration and requires the reward to provide a meaningful reward throughout the state space to guide the policy to better match the expert. We find BC-IRL learns more generalizable rewards (Figure 1c), and achieves over twice the success rate of baseline IRL methods in challenging generalization settings.
Our contributions are as follows: 1) The general BC-IRL framework for learning more generalizable rewards from demonstrations, and a specific BC-IRL-PPO variant that uses PPO as the RL algorithm. 2) A quantitative and qualitative analysis of reward functions learned with BC-IRL and Maximum-Entropy IRL variants on a simple task for easy analysis. 3) An evaluation of our novel BC-IRL algorithm on two continuous control tasks against state-of-the-art IRL and IL methods. Our method learns rewards that transfer better to novel task settings.
2 BACKGROUND AND RELATED WORK
We begin by reviewing Inverse Reinforcement Learning through the lens of bi-level optimization. We assume access to a rewardless Markov decision process (MDP) defined through the tuple M = (S, A, P, ρ0, γ, H) for state space S, action space A, transition distribution P(s′|s, a), initial state distribution ρ0, discounting factor γ, and episode horizon H. We also have access to a set of expert demonstration trajectories $D_e = \{\tau^e_i\}_{i=1}^{N}$, where each trajectory is a sequence of state, action tuples.
IRL learns a parameterized reward function Rψ(τi) which assigns a trajectory a scalar reward. Given the reward, a policy πθ(a|s) is learned which maps from states to a distribution over actions. The goal of IRL is to produce a reward Rψ , such that a policy trained to maximize the sum of (discounted) rewards under this reward function matches the behavior of the expert. This is captured through the following bi-level optimization problem:
$$\min_{\psi} \; \mathcal{L}_{IRL}(R_\psi; \pi_\theta) \qquad \text{(outer obj.)} \quad (1a)$$
$$\text{s.t.} \quad \theta \in \arg\max_{\theta} \; g(R_\psi, \theta) \qquad \text{(inner obj.)} \quad (1b)$$
where LIRL(Rψ;πθ) denotes the IRL loss and measures the performance of the learned reward Rψ and policy πθ; g(Rψ, θ) is the reinforcement learning objective used to optimize policy parameters θ. Algorithms for this bi-level optimization consist of an outer loop ((1a)) that optimizes the reward and an inner loop ((1b)) that optimizes the policy given the current reward.
Maximum Entropy IRL: Early work on IRL learns rewards by separating non-expert from expert trajectories (Ng et al., 2000; Abbeel & Ng, 2004; Abbeel et al., 2010). A primary challenge of these early IRL algorithms was the ambiguous nature of learning reward functions from demonstrations: many possible policies exist for a given demonstration, and thus many possible rewards exist. The Maximum Entropy (MaxEnt) IRL framework (Ziebart et al., 2008) seeks to address this ambiguity, by learning a reward (and policy) that is as non-committal (uncertain) as possible, while still explaining the demonstrations. More concretely, given reward parameters ψ, MaxEnt IRL optimizes the log probability of the expert trajectories τe from demonstration dataset De through the following loss,
$$\mathcal{L}_{\text{MaxEnt-IRL}}(R_\psi) = -\mathbb{E}_{\tau^e \sim D_e}\left[\log p(\tau^e \mid \psi)\right] = -\mathbb{E}_{\tau^e \sim D_e}\left[\log \frac{\exp\left(R_\psi(\tau^e)\right)}{Z(\psi)}\right] = -\mathbb{E}_{\tau^e \sim D_e}\left[R_\psi(\tau^e)\right] + \log Z(\psi).$$
A key challenge of MaxEnt IRL is estimating the partition function $Z(\psi) = \int \exp(R_\psi(\tau))\, d\tau$. Ziebart et al. (2008) approximate Z in small discrete state spaces with dynamic programming. MaxEnt from the Bi-Level perspective: However, computing the partition function becomes intractable for high-dimensional and continuous state spaces. Thus algorithms approximate Z using samples from a policy optimized via the current reward. This results in the partition function estimate being a function of the current policy, log Ẑ(ψ; πθ). As a result, MaxEnt approaches end up following the bi-level optimization template by iterating between: 1) updating reward function parameters given current policy samples via the outer objective (1a); and 2) optimizing the policy parameters with the current reward parameters via an inner policy optimization objective and algorithm (1b). For instance, model-based IRL methods such as Wulfmeier et al. (2017); Levine & Koltun (2012); Englert et al. (2017) use model-based RL (or optimal control) methods to optimize a policy (or trajectory), while model-free IRL methods such as Kalakrishnan et al. (2013); Boularias et al. (2011); Finn et al. (2016b;a) learn policies via model-free RL in the inner loop. All of these methods use policy rollouts to approximate either the partition function of the maximum-entropy IRL objective or its gradient with respect to reward parameters in various ways (outer loop). For instance, Finn et al. (2016b) learn a stochastic policy q(τ) and sample from it to estimate $Z(\psi) \approx \frac{1}{M} \sum_{\tau_i \sim q(\tau)} \frac{\exp R_\psi(\tau_i)}{q(\tau_i)}$ with M samples from q(τ). Fu et al. (2017) with adversarial IRL (AIRL) follow this idea and view the problem as an adversarial training process between the policy πθ(a|s) and a discriminator $D(s) = \frac{\exp R_\psi(s)}{\exp R_\psi(s) + \pi_\theta(a|s)}$ (a numerically stable form of this discriminator is sketched at the end of this section). Ni et al. (2020) analytically compute the gradient of the f-divergence
between the expert state density and the MaxEnt state distribution, circumventing the need to directly compute the partition function. Meta-Learning and IRL: Like some prior work (Xu et al., 2019; Yu et al., 2019; Wang et al., 2021; Gleave & Habryka, 2018; Seyed Ghasemipour et al., 2019), BC-IRL combines meta-learning and inverse reinforcement learning. However, these works focus on fast adaptation of reward functions to new tasks for MaxEnt IRL through meta-learning. These works require demonstrations of the new task to adapt the reward function. BC-IRL algorithm is a fundamentally new way to learn reward functions and does not require demonstrations for new test settings. Most related to our work is Das et al. (2020), which also uses gradient-based bi-level optimization to match the expert. However, this approach requires a pre-trained dynamics model. Our work generalizes this idea since BC-IRL can optimize general policies, allowing any objective that is a function of the policy and any differentiable RL algorithm. We show our method, without an accurate dynamics model, outperforms Das et al. (2020) and scales to more complex tasks where Das et al. (2020) fails to learn. Generalization in IRL: Some prior works have explored how learned rewards can generalize to training policies in new situations. For instance, Fu et al. (2017) explored how rewards can generalize to training policies under changing dynamics. However, most prior work focuses on improving policy generalization to unseen task settings by addressing challenges introduced by the adversarial training objective of GAIL (Xu & Denil, 2019; Zolna et al., 2020; 2019; Lee et al., 2021; Barde et al., 2020; Jaegle et al., 2021; Dadashi et al., 2020). Finally, in contrast to most related work on generalization, our work focuses on analyzing and improving reward function transfer to new task settings.
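For reference, the AIRL-style discriminator above reduces to a logistic function of the difference between the reward and the policy log-probability, which is how it is usually computed in practice. The framework choice below is ours, not taken from the cited work.

```python
import torch

def airl_discriminator(reward_value, policy_log_prob):
    """D = exp(R) / (exp(R) + pi(a|s)) computed stably in log space:
    exp(R) / (exp(R) + exp(log pi)) = sigmoid(R - log pi)."""
    return torch.sigmoid(reward_value - policy_log_prob)
```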
3 LEARNING REWARDS VIA BEHAVIORAL CLONING INVERSE REINFORCEMENT LEARNING (BC-IRL)
We now present our algorithm for learning reward functions via behavioral cloning inverse reinforcement learning. We start by contrasting the maximum entropy and imitation loss objectives for
inverse reinforcement learning in Section 3.1. We then introduce a general formulation for BC-IRL in Section 3.2, and present an algorithmic instantiation that optimizes a BC objective to update the reward parameters via gradient-based bi-level optimization with a model-free RL algorithm in the inner loop in Section 3.3.
3.1 OUTER OBJECTIVES: MAX-ENT VS BEHAVIOR CLONING
In this work, we study an alternative IRL objective from the maximum entropy objective. While this maximum entropy IRL objective has led to impressive results, it is unclear how well this objective is suited for learning reward functions that generalize to new task settings, such as new start and goal distributions. Intuitively, assigning a high reward to demonstrated states (without task-specific hand-designed feature engineering) makes sense when you want to learn a reward function that can recover exactly the expert behavior, but it leads to reward landscapes that do not necessarily capture the essence of the task (e.g. to reach a goal, see Figure 1b). Instead of specifying an IRL objective that is directly a function of reward parameters (like maximum entropy), we aim to measure the reward function’s performance through the policy that results from optimizing the reward. With such an objective, we can optimize reward parameters for what we care about: for the resulting policy to match the behavior of the expert. The behavioral cloning (BC) loss measures how well the policy and expert actions match, defined for continuous actions as $\mathbb{E}_{(s_t, a_t) \sim \tau^e}\left(\pi_\theta(s_t) - a_t\right)^2$, where τe is an expert demonstration trajectory. Policy parameters θ are a result of using the current reward parameters ψ, which we can make explicit by making θ a function of ψ in the objective: $\mathcal{L}_{\text{BC-IRL}} = \mathbb{E}_{(s_t, a_t) \sim \tau^e}\left(\pi_{\theta(\psi)}(s_t) - a_t\right)^2$. The IRL objective is now formulated in terms of the policy rollout “matching" the expert demonstration through the BC loss. We use the chain rule to decompose the gradient of LBC-IRL with respect to reward parameters ψ. We also expand how the policy parameters θ(ψ) are updated via a REINFORCE update with learning rate α to optimize the current reward Rψ (but any differentiable policy update applies):
$$\frac{\partial}{\partial \psi} \mathcal{L}_{\text{BC-IRL}} = \frac{\partial}{\partial \psi}\left[\mathbb{E}_{(s_t, a_t) \sim \tau^e}\left[\left(\pi_{\theta(\psi)}(s_t) - a_t\right)^2\right]\right] = \mathbb{E}_{(s_t, a_t) \sim \tau^e}\left[2\left(\pi_{\theta(\psi)}(s_t) - a_t\right)\right] \frac{\partial}{\partial \psi}\pi_{\theta(\psi)}$$
$$\text{where } \theta(\psi) = \theta_{\text{old}} + \alpha \, \mathbb{E}_{(s_t, a_t) \sim \pi_{\theta_{\text{old}}}}\left[\left(\sum_{k=t+1}^{T} \gamma^{k-t-1} R_\psi(s_k)\right) \nabla \ln \pi_{\theta_{\text{old}}}(a_t \mid s_t)\right] \quad (2)$$
Computing the gradient for the reward update in Equation (2) includes samples from π collected in the reinforcement learning (RL) inner loop. This means the reward is trained on diverse states beyond the expert demonstrations through data collected via exploration in RL. As the agent explores during training, BC-IRL must provide a meaningful reward signal throughout the state-space to guide the policy to better match the expert. Note that this is a fundamentally different reward update rule as compared to current state-of-the-art methods that maximize a maximum entropy objective. We show in our experiments that this results in twice as high success rates compared to state-of-the-art MaxEnt IRL baselines in challenging generalization settings, demonstrating that BC-IRL learns more generalizable rewards that provide meaningful rewards beyond the expert demonstrations. The BC loss updates only the reward, as opposed to updating the policy as typical BC for imitation learning does Bain & Sammut (1995). BC-IRL is an IRL method that produces a reward, unlike regular BC that learns only a policy. Since BC-IRL uses RL, not BC, to update the policy, it avoids the pitfalls of BC for policy optimization such as compounding errors. Our experiments show that policies trained with rewards from BC-IRL generalize over twice as well to new settings as those trained with BC. In the following section, we show how to optimize this objective via bi-level optimization.
3.2 BC-IRL
We formulate the IRL problem as a gradient-based bi-level optimization problem, where the outer objective is optimized by differentiating through the optimization of the inner objective. We first describe how the policy is updated with a fixed reward, then how the reward is updated for the policy to better match the expert. Inner loop (policy optimization): The inner loop optimizes policy parameters θ given current reward function Rψ. The inner loop takes K gradient steps to optimize the policy given the current reward. Since the reward update will differentiate through this policy update, we require the policy update to be differentiable with respect to the reward function parameters. Thus, any reinforcement learning algorithm which is differentiable with respect to the reward function parameters can be plugged in here, which is the case for many policy gradient and model-based methods. However, this does not
include value-based methods such as DDPG Lillicrap et al. (2015) or SAC Haarnoja et al. (2018) that directly optimize value estimates since the reward function is not directly used in the policy update.
Algorithm 1 BC-IRL (general framework)
1: Initial reward Rψ, policy πθ
2: Policy updater POLICY_OPT(R, π)
3: Expert demonstrations De
4: for each epoch do
5:    Policy Update:
6:    θ′ ← POLICY_OPT(Rψ, πθ)
7:    Sample demo batch τe ∼ De
8:    Compute IRL loss
9:    LBC-IRL = E(st,at)∼τe (πθ′(st) − at)²
10:   Compute gradient of IRL loss wrt reward
11:   ∇ψLBC-IRL = (∂LBC-IRL/∂θ′) · (∂POLICY_OPT(Rψ, πθ)/∂ψ)
12:   ψ ← ψ − ∇ψLBC-IRL
13: end for

Outer loop (reward optimization): The outer loop optimization updates the reward parameters ψ via gradient descent. More concretely: after the inner loop, we compute the gradient of the outer loop objective ∇ψLBC-IRL with respect to reward parameters ψ by propagating through the inner loop. Intuitively, the new policy is a function of the reward parameters since the old policy was updated to better maximize the reward. The gradient update on ψ tries to adjust reward function parameters such that the policy trained with this reward produces trajectories that match the demonstrations more closely. We use Grefenstette et al. (2019) for this higher-order optimization.
BC-IRL is summarized in Algorithm 1. Line 5 describes the inner loop update, where we update the policy πθ to maximize the current reward Rψ. Lines 6-7 compute the BC loss between the updated policy πθ′ and expert actions sampled from expert dataset De. The BC loss is then used in the outer loop to perform a gradient step on reward parameters in lines 8-9, where the gradient computation requires differentiating through the policy update in line 5.
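To make the bi-level update concrete, below is a minimal PyTorch sketch of one BC-IRL iteration on a toy 2D point-mass problem. It is not our implementation (we use the higher-order optimization tooling of Grefenstette et al. (2019)); the linear Gaussian policy, the one-step REINFORCE inner update, and the toy expert are all illustrative assumptions.

```python
import torch

torch.manual_seed(0)

# Reward network R_psi(s): the object being learned.
reward_net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                                 torch.nn.Linear(32, 1))
reward_opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

# Linear Gaussian policy with explicit weights so we can re-evaluate it with
# "fast" (inner-loop-updated) weights that carry gradients back to psi.
policy_w = torch.zeros(2, 2, requires_grad=True)
log_std = torch.zeros(2)

def policy_mean(states, w):
    return states @ w

def inner_policy_update(states, actions, w, alpha=0.1):
    """One REINFORCE step on the learned reward, kept differentiable wrt psi."""
    dist = torch.distributions.Normal(policy_mean(states, w), log_std.exp())
    logp = dist.log_prob(actions).sum(-1)
    returns = reward_net(states).squeeze(-1)        # 1-step return = R_psi(s)
    pg_loss = -(logp * returns).mean()              # REINFORCE surrogate
    grad_w, = torch.autograd.grad(pg_loss, w, create_graph=True)
    return w - alpha * grad_w                       # fast policy weights theta(psi)

# ---- one BC-IRL iteration (sketch) ----
rollout_s = torch.randn(64, 2)                      # states visited by the policy
rollout_a = (policy_mean(rollout_s, policy_w)
             + 0.1 * torch.randn(64, 2)).detach()   # sampled actions
expert_s = torch.randn(32, 2)
expert_a = -expert_s                                # toy expert: move toward origin

fast_w = inner_policy_update(rollout_s, rollout_a, policy_w)

# Outer loop: BC loss of the *updated* policy, backpropagated into the reward.
bc_loss = ((policy_mean(expert_s, fast_w) - expert_a) ** 2).mean()
reward_opt.zero_grad()
bc_loss.backward()
reward_opt.step()

with torch.no_grad():                               # commit the policy update
    policy_w.copy_(fast_w)
```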
3.3 BC-IRL-PPO
We now instantiate a specific version of the BC-IRL framework that uses proximal policy optimization (PPO) Schulman et al. (2017) to optimize the policy in the inner loop. This specific version, called BC-IRL-PPO, is summarized in Algorithm 2.
Algorithm 2 BC-IRL-PPO
1: Initial reward Rψ, policy πθ, value function Vν
2: Expert demonstrations De
3: for each epoch do
4:    for k = 1 → K do
5:       Run policy πθ in environment for T timesteps
6:       Compute rewards r̂ψt for rollout with current Rψ
7:       Compute advantages Âψ using r̂ψ and Vν
8:       Compute LPPO using Âψ
9:       Update πθ with ∇θLPPO
10:   end for
11:   Sample demo batch τe ∼ De
12:   Compute LBC-IRL = E(st,at)∼τe (πθ(st) − at)²
13:   Update reward Rψ with ∇ψLBC-IRL
14: end for

BC-IRL-PPO learns a state-only parameterized reward function Rψ(s), which assigns a state s ∈ S a scalar reward. The state-only reward has been shown to lead to rewards that generalize better Fu et al. (2017). BC-IRL-PPO begins by collecting a batch of rollouts in the environment from the current policy (line 5 of Algorithm 2). For each state s in this batch we evaluate the learned reward function Rψ(s) (line 6). From this sequence of rewards, we compute the advantage estimates Ât for each state (line 7). As is typical in PPO, we also utilize a learned value function Vν(st) to predict the value of the starting and ending state for partial episodes in the rollouts. This learned value function Vν is trained to predict the sum of future discounted rewards for the current reward function Rψ and policy πθ (part of LPPO in line 8). Using the advantages, we then compute the PPO update (line 9 of Algorithm 2) using the standard PPO loss in equation 8 of Schulman et al. (2017). Note the advantages are a function of the reward function parameters used to compute the rewards, so PPO is differentiable with respect to the reward function. Next, in the outer loop update, we update the reward parameters by sampling a batch of demonstration transitions (line 11), computing the behavior cloning IRL objective LBC-IRL (line 12), and updating the reward parameters ψ via gradient descent on LBC-IRL (line 13). Finally, in this work, we perform one policy optimization step (K = 1) per reward function update. Furthermore, rather than re-train a policy from scratch for every reward function iteration, we initialize each inner loop from the previous πθ. This initialization is important in more complex domains where K would otherwise have to be large to acquire a good policy from scratch.
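As a concrete illustration of why the PPO update stays differentiable with respect to ψ, the sketch below computes generalized advantage estimates directly from reward-network outputs, so gradients can flow from the PPO loss back into the reward parameters. The use of GAE and the specific γ, λ values are assumptions for the sketch, not necessarily our exact advantage estimator.

```python
import torch

def differentiable_advantages(rewards, values, dones, gamma=0.99, lam=0.95):
    """GAE-style advantages built from learned rewards r_t = R_psi(s_t).

    `rewards` must be the (non-detached) outputs of the reward network, so the
    resulting advantages, and any PPO loss built on them, remain functions of
    the reward parameters psi.
    """
    T = rewards.shape[0]                     # values has length T + 1 (bootstrap)
    advantages, last = [], torch.zeros(())
    for t in reversed(range(T)):
        not_done = 1.0 - dones[t]
        delta = rewards[t] + gamma * values[t + 1] * not_done - values[t]
        last = delta + gamma * lam * not_done * last
        advantages.append(last)
    return torch.stack(advantages[::-1])
```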
4 ILLUSTRATION & QUALITATIVE ANALYSIS OF LEARNED REWARDS
We first analyze the rewards learned by different IRL methods in a 2D point mass navigation task. The purpose of this analysis is to test our hypothesis that our method learns more generalizable rewards compared to maximum entropy baselines in simple low-dimensional settings amenable to intuitive visualizations. Specifically, we compare BC-IRL-PPO to the following baselines.
Exact MaxEntIRL (MaxEnt) Ziebart et al. (2008): The exact MaxEntIRL method where the partition function is exactly computed by discretizing the state space.
Guided Cost Learning (GCL) Finn et al. (2016b): Uses the maximum-entropy objective to update the reward. The partition function is approximated via adaptive sampling.
Adversarial IRL (AIRL) Fu et al. (2017): An IRL method that uses a learned discriminator to distinguish expert and agent states. As described in Fu et al. (2017), we also use a shaping network h during reward training, but only visualize and transfer the reward approximator g.
f-IRL Ni et al. (2021): Another MaxEntIRL-based method; f-IRL computes the analytic gradient of the f-divergence between the agent and expert state distributions. We use the JS divergence version.
Our method does not require demonstrations at test time; instead we transfer our learned rewards zero-shot. Thus we forego comparisons to other meta-learning methods, such as Xu et al. (2019), which require test time demonstrations. While a direct comparison with Das et al. (2020) is not possible because their method assumes access to a pre-trained dynamics model, we conduct a separate study comparing their method with an oracle dynamics model against BC-IRL in Appendix A.5. All baselines use PPO Schulman et al. (2017) for policy optimization, as commonly done in prior work Orsini et al. (2021). All methods learn a state-dependent reward rψ(s) and a policy π(s), both parametrized as neural networks. Further details are described in Appendix C.
In the 2D point navigation task, a point agent outputs a desired change in (x, y) position (velocity) (∆x, ∆y) at every time step. The task has a trajectory length of T = 5 time steps with 4 demonstrations. Figure 2a visualizes the expert demonstrations where darker points are earlier time steps. The agent starting state distribution is centered around the starting state of each demonstration. Figures 2b and 2c visualize the rewards learned by BC-IRL and the AIRL baseline. Lighter regions indicate higher rewards. In Figure 2b, BC-IRL learns a reward that looks like a quadratic bowl centered at the origin, which models the distance to the goal across the entire state space. AIRL, the maximum entropy baseline, visualized in Figure 2c, learns a reward function where high rewards are placed on the demonstrations and low rewards elsewhere. Other baselines are visualized in Appendix Figure 4. To analyze the generalization capabilities of the learned rewards, we use them to train policies on a new starting state distribution (visualized in Appendix Figure 9). Concretely, a newly initialized policy is trained from scratch to maximize the learned reward from the testing start state distribution. The policy is trained with 5 million environment steps, which is the same number of steps as for learning the reward. The testing starting state distribution has no overlap with the training start state distribution. Policy optimization at test time is also done with PPO. Figures 2d and 2e display trajectories from the trained policies where darker points again correspond to earlier time steps.
This qualitative evaluation shows that BC-IRL learns a meaningful reward for states not covered by the demonstrations. Thus at test time agent trajectories are guided towards the goal with the terminal states (lightest points) close to the goal. The X-shaped rewards learned by the baselines do not provide meaningful rewards in the testing setting as they assign uniformly low rewards to states not covered by the demonstration. This provides poor reward shaping which prevents the agent from reaching the goal within the 5M training interactions with the environment. This results in agent trajectories that do not end close to the goal by the end of training.
Next, we report quantitative results in Table 1. We evaluate the performance of the policy trained at test time by reporting the distance from the policy’s final trajectory state sT to the goal g: $\|s_T - g\|_2^2$. We report the final train performance of the algorithm (“Train”), along with the performance of the policy trained from scratch with the learned reward in the train distribution “Eval (Train)” and testing distribution “Eval (Test)”. These results confirm that BC-IRL learns more generalizable rewards than baselines. Specifically, BC-IRL achieves a lower distance on the testing starting state distribution at 0.04, compared to 0.53, 1.6, and 0.36 for AIRL, GCL, and MaxEnt respectively. Surprisingly, BC-IRL even performs better than exact MaxEnt, which uses privileged information about the state space to estimate the partition function. This fits with our hypothesis that our method learns more generalizable rewards than MaxEnt, even when the MaxEnt objective is exactly computed. We repeat this analysis for a version of the task with an obstacle blocking the path to the goal in Appendix A.2 and reach the same findings even when BC-IRL must learn an asymmetric reward function. We also compare learned rewards to manually defined rewards in Appendix A.3. Despite baselines learning rewards that do not generalize beyond the demonstrations, with enough environment interactions, policies trained under these rewards will eventually reach the high rewards along the expert demonstrations. Since all demonstrations reach the goal in the point mass task, the X-shaped rewards that the baselines learn have high reward at the center. Despite the X-shaped reward providing little shaping off the X, with enough environment interactions, the agent eventually discovers the high-reward point at the goal. After training AIRL for 15M steps, 3x the number of steps for reward learning and the experiments in Table 1 and Figure 2, the policy eventually reaches 0.08 ± 0.01 distance to the goal. In the same setting, BC-IRL achieves 0.04 ± 0.01 distance to the goal in under 5M steps. The additional performance gap is due to BC-IRL learning a reward with a maximum reward value closer to the center (0.02 to the center) compared to AIRL (0.04 to the center).
5 EXPERIMENTS
In our experiments, we aim to answer the following questions: (1) Can BC-IRL learn reward functions that can train policies from scratch? (2) Does BC-IRL learn rewards that can generalize to unseen states and goals better than IRL baselines in complex environments? (3) Can learned rewards transfer better than policies learned directly with imitation learning? We show the first in Section 5.1 and the next two in Section 5.2. We evaluate on two continuous control tasks: 1) the Fetch reaching task Szot et al. (2021) (Fig 3a), and 2) the TriFinger reaching task Ahmed et al. (2021) (Fig 3b).
5.1 REWARD TRAINING PHASE: LEARNING REWARDS TO MATCH THE EXPERT
Experimental Setup and Evaluation Metrics In the Fetch reaching task, set up in the Habitat 2.0 simulator Szot et al. (2021), the robot must move its end-effector to a 3D goal location g which changes between episodes. The action space of the agent is the desired velocities for each of the 7 joints on the robot arm. The robot succeeds if the end-effector is within 0.1m of the target position by the 20 time step maximum episode length. During reward learning, the goal g is sampled from a 0.2 meter length cube in front of the robot, g ∼ U([0]³, [0.2]³). We provide 100 demonstrations.
Table 2:
                                  BC-IRL-PPO         AIRL
Fetch Reach (Success) ↑           1.00 ± 0.00        0.96 ± 0.00
Trifinger Reach (Goal Dist) ↓     0.002 ± 0.0015     0.007 ± 0.0017
Goal Dist is the distance to the demonstrated goal, (g − g_demo)², in meters.
Evaluation and Baselines We evaluate BC-IRL-PPO by how well its learned reward can train new policies from scratch in the same start state and goal distribution as the demonstrations. Given the point mass results in Section 4, we compare BC-IRL-PPO to AIRL, the best performing baseline for reward learning. More details on baseline choice, policy and reward representation, and hyperparameters are described in the Appendix (D).
Results and Analysis As Table 2 confirms, our method and baselines are able to imitate the demonstrations when policies are evaluated in the same task setting as the expert. All methods are able to achieve a near 100% success rate and low distance to goal. Methods also learn with similar sample efficiency as shown in the learning curves in Figure 3d. These high-success rates indicate BC-IRL-PPO and AIRL learn rewards that capture the expert behavior and train policies to mimic the expert. When training policies in the same state/goal distribution as the expert, rewards from BC-IRL-PPO follow any constraints followed by the experts, just like the IRL baselines.
5.2 TEST PHASE: EVALUATING REWARD AND POLICY GENERALIZATION
In this section, we evaluate how learned rewards and policies can generalize to new task settings with increased starting state and goal sampling noise. We evaluate the generalization ability of rewards by evaluating how well they can train new policies to reach the goal in new start and goal distributions not seen in the demonstrations. This evaluation captures the reality that it is infeasible to collect demonstrations for every possible start/goal configuration. We thus aim to learn rewards from demonstrations that can generalize beyond the start/goal configurations present in those demonstrations. We quantify reward generalization ability by whether the reward can train a policy to perform the task in the new start/goal configurations. For the Fetch Reach task, we evaluate on three wider test goal sampling distributions g ∼ U([0]3, [gmax]3): Easy (gmax = 0.25), Medium (gmax = 0.4), and Hard (gmax = 0.55), all visualized in Figure 3c. Similarly, we evaluate on new state regions, which increase the starting and goal initial state distributions but exclude the regions from training, exposing the reward to only unseen initial states and goals. In Trifinger, we sample start configurations from around the start joint position in the demonstrations, with increasingly wider distributions (s0 ∼ N (sdemo0 , δ), with δ = 0.01, 0.03, 0.05). We evaluate reward function performance by how well the reward function can train new policies from scratch. However, now the reward must generalize to inferring rewards in the new start state and goal distributions. We additionally compare to two imitation learning baselines: Generative Adversarial Imitation Learning (GAIL) Ho & Ermon (2016) and Behavior Cloning (BC). We compare different methods of transferring the learned reward and policy to the test setting: 1) Reward: Transfer only the reward from the above training phase and train a newly initialized policy in the test setting.
2) Policy: Transfer only the policy from the above training phase and immediately evaluate the policy without further training in the test setting. This compares transferring learned rewards and transferring learned policies. We use this transfer strategy to compare against direct imitation learning methods. 3) Reward+Policy: Transfer the reward and policy and then fine-tune the policy using the learned reward in the test setting. Results for this setting are in Appendix B.2.
Results and Analysis The results in Table 3 show BC-IRL-PPO learns rewards that generalize better than IRL baselines to new settings. In the hardest generalization setting, BC-IRL-PPO achieves over twice the success rate of AIRL. AIRL struggles to transfer its learned reward to harder generalization settings, with performance decreasing as the goal sampling distribution becomes larger and has less overlap with the training goal distribution. In the “Hard" start region generalization setting, the performance of AIRL degrades to 34% success rate. On the other hand, BC-IRL-PPO learns a generalizable reward and performs well even in the “Hard" generalization strategy, achieving 76% success. This trend is true both for generalization to new start state distributions and for new start state regions. The results for Trifinger Reach in Table 4 support these findings with rewards learned via BC-IRL-PPO generalizing better to training policies from scratch in all three test distributions. All training curves for training policies from scratch with learned rewards are in Appendix B.1.
Furthermore, the results in Table 3 also demonstrate that transferring rewards “(Reward)" is more effective for generalization than transferring policies “(Policy)". Transferring the reward to train new policies typically outperforms transferring only the policy for all IRL approaches. Additionally, training from scratch with rewards learned via IRL outperforms non-reward learning imitation learning methods that only permit transferring
the policy zero-shot. The policies learned by GAIL and BC generalize worse than training new policies from scratch with the reward learned by BC-IRL-PPO, with BC and GAIL achieving 35% and 37% success rates in the “Hard" generalization setting while our method achieves 76% success. The superior performance of BC-IRL-PPO over BC highlights the important differences between the two methods with our method learning a reward and training the policy with PPO on the learned reward. In Appendix B.2, we also show the “Policy+Reward" transfer setting and demonstrate BC-IRL-PPO also outperforms baselines in this setting. In Appendix B we also analyze performance with the number of demos, different inner and outer loop learning rates, and number of inner loop updates.
6 DISCUSSION AND FUTURE WORK
We propose a new IRL framework for learning generalizable rewards with bi-level gradient-based optimization. By meta-learning rewards, our framework can optimize alternative outer-level objectives instead of the maximum entropy objective commonly used in prior work. We propose BC-IRL-PPO, an instantiation of our new framework, which uses PPO for policy optimization in the inner loop and an action matching objective in the outer loop. We demonstrate that BC-IRL-PPO learns rewards that generalize better than baselines. Potential negative social impacts of this work are that learning reward functions from data could result in less interpretable rewards, leading to more opaque behaviors from agents that optimize the learned reward. Future work will explore alternative instantiations of the BC-IRL framework, such as utilizing sample efficient off-policy methods like SAC or model-based methods in the inner loop. Model-based methods are especially appealing because a single dynamics model could be shared between tasks and learning reward functions for new tasks could be achieved purely using the model. Finally, other outer loop objectives rather than action matching are also possible.
7 ACKNOWLEDGMENTS
The Georgia Tech effort was supported in part by NSF, ONR YIP, and ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
A FURTHER POINT MASS NAVIGATION RESULTS
A.1 QUALITATIVE RESULTS FOR ALL METHODS IN POINT MASS NAVIGATION
Visualizations of the reward functions from all methods for the regular pointmass task are displayed in Figure 4.
A.2 OBSTACLE POINT MASS NAVIGATION
The obstacle point mass navigation task incorporates asymmetric dynamics with an off-centered obstacle. This environment is the same as the point mass navigation task from Section 4, except there is an obstacle blocking the path to the center and the agent only spawns in the top-right hand corner. This task has a trajectory length of T = 50 time steps with 100 demonstrations. Figure 5a visualizes the expert demonstrations where darker points are earlier time steps.
The results in Table 5 are consistent with the non-obstacle point mass task, where BC-IRL generalizes better than a variety of MaxEnt IRL baselines. In the train setting, BC-IRL learns rewards that match the expert behavior of avoiding the obstacle and even achieves better performance than the baselines on this task, with 0.08 distance to the goal versus 0.41 for the best-performing baseline in the train setting, f-IRL. BC-IRL also generalizes better than the baselines, achieving 0.79 distance to goal, compared to the best-performing baseline, MaxEnt, which also has access to oracle information. The reward learned by BC-IRL, visualized in Figure 5b, shows that BC-IRL learns a complex reward to account for the obstacle. Figure 6 visualizes the rewards for all methods.
A.3 COMPARISON TO MANUALLY DEFINED REWARDS
We compare the rewards learned by BC-IRL to two hand-coded rewards. We visualize how well the learned rewards can train policies from scratch in the evaluation distribution in the point navigation with obstacle task. The reward learned by BC-IRL therefore must generalize. On the other hand, the hand-coded rewards do not require any learning. We include a sparse reward for achieving the goal, which does not require domain knowledge when implementing the reward. We also implement a dense reward, defined as the change in Euclidean distance to the goal, $r_t = d_{t-1} - d_t$, where $d_t$ is the distance of the agent to the goal at time $t$. Figure 7a shows policy training curves for the learned and hand-defined rewards. The sparse reward performs poorly and the policy fails to get closer to the goal. On the other hand, the rewards learned by BC-IRL guide the policy closer to the goal. The dense reward, which incorporates more domain knowledge about the task, performs better than the learned reward.
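As a concrete illustration, the sketch below shows one way these two hand-coded rewards could be implemented (our own code, not the paper's implementation; the success threshold in the sparse reward is an assumed value).

```python
import numpy as np

def sparse_reward(pos, goal, thresh=0.1):
    # Sparse bonus for reaching the goal; the threshold value is an assumption.
    return 1.0 if np.linalg.norm(pos - goal) < thresh else 0.0

def dense_reward(prev_pos, pos, goal):
    # Dense change-in-distance reward: r_t = d_{t-1} - d_t.
    d_prev = np.linalg.norm(prev_pos - goal)  # d_{t-1}
    d_curr = np.linalg.norm(pos - goal)       # d_t
    return d_prev - d_curr                    # positive when the agent moves closer to the goal
```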
A.4 ANALYZING NUMBER OF INNER LOOP UPDATES
As described in Section 3.3, a hyperparameter in BC-IRL-PPO is the number of inner loop policy optimization steps K, for each reward function update. In our experiments, we selected K = 1. In Figure 7b we examine the training performance of BC-IRL-PPO in the point navigation task with no obstacle for various choices of K. We find that a wide variety of K values perform similarly. We,
therefore, selected K = 1 since it runs the fastest, with no need to track multiple policy updates in the meta optimization.
A.5 BC-IRL WITH MODEL-BASED POLICY OPTIMIZATION
We compare BC-IRL-PPO to a version of BC-IRL that uses model-based RL in the inner loop, inspired by Das et al. (2020). A direct comparison to Das et al. (2020) is not possible because their method assumes access to a pre-trained dynamics model, while in our work, we do not assume access to a ground truth or pre-trained dynamics model. However, we compare to a version of Das et al. (2020) in the point mass navigation task with a ground truth dynamics model. Specifically, we use gradient-based MPC in the inner loop optimization as in Das et al. (2020), but with the BC-IRL outer loop objective. With the BC outer loop objective, this variant also learns generalizable rewards in the point mass navigation task, achieving 0.06 ± 0.03 distance to goal in “Eval (Train)" and 0.07 ± 0.03 in “Eval (Test)". However, in the point mass navigation task with the obstacle, this method fails to learn a reward and struggles to minimize the outer loop objective. We hypothesize that in longer horizon tasks, the MPC inner loop optimization of Das et al. (2020) easily gets stuck in local minima and struggles to differentiate through the entire MPC optimization.
B REACH TASK: FURTHER EXPERIMENT RESULTS
B.1 RL-TRAINING CURVES
In Figure 8 we visualize the training curves for the RL training used in Table 3. Figure 8a shows policy learning progress during the IRL training phase. In each setting, the performance is measured by using the current reward to train a policy and computing the success rate of the policy. Figure 8b to Figure 8d show the policy learning curves at test time, in the generalization settings, where the reward is frozen and must generalize to learn new policies on new goals (“Reward" transfer strategy). These plots show that all methods learn similarly during IRL training (Figure 8a). When transferring the learned rewards to test settings, we see that BC-IRL-PPO performs better at training successful policies as the generalization difficulty increases, with the most difficult setting shown in Figure 8d.
B.2 TRANSFER REWARD+POLICY SETTING
Here, we evaluate the “Policy+Reward" transfer strategy to new environment settings where both the reward and policy are transferred. In the new setting, “Policy+Reward" uses the transferred reward to fine-tune the pre-trained transferred policy with RL. We show results in Table 6 for the “Policy+Reward" transfer strategy alongside the “Reward" transfer strategy from Table 3. We find that “Policy+Reward" performs slightly better than “Reward" in the Hard setting of generalization to new starting state distributions but otherwise performs similarly. Even in the “Policy+Reward" setting, AIRL struggles to learn a good policy in the Medium and Hard settings, achieving 38% and 81% success rate respectively.
B.3 ANALYZING THE NUMBER OF DEMONSTRATIONS
We analyze the effect of the number of demonstrations used for reward learning in Table 7. We find that using fewer demonstrations does not affect the training performance of BC-IRL-PPO and AIRL. We also find our method does just as well with 5 demos as 100 in the +75% noise setting, with any number of demonstrations achieving near-perfect success rates. On the other hand, the performance of AIRL degrades from 93% success rate with 100 demonstrations to 84% in the +75% noise setting. In the +100% noise setting, fewer demonstrations hurt performance for both methods, with our method dropping from 76% success to 69% success and AIRL from 38% success to 42% success.
B.4 BC-IRL HYPERPARAMETER ANALYSIS
BC-IRL-PPO requires a learning rate for the policy optimization and a learning rate for the reward optimization. We compare the performance of our algorithm for various choices of policy and reward learning rates in Table 8. We find that across many different learning rate settings our method achieves high rates of success, but high policy learning rates have a detrimental effect. High reward learning rates have a slight negative impact but are not as severe.
C FURTHER 2D POINT NAVIGATION DETAILS
The start state distributions for the 2D point navigation task are illustrated in Figure 9. The reward is learned using the start distribution in red on 4 equally spaced points from the center. Four demonstrations are also provided in this train start state distribution from each of the four corners. The reward is then transferred and a new policy is trained with the start state distribution in the magenta color. This start state distribution has no overlap with the train distribution and is also equally spaced. The reward must therefore generalize to providing rewards in this new state distribution. The hyperparameters for the methods from the 2D point navigation task in Section 4 are detailed in Table 9 for the no obstacle version and Table 10 for the obstacle version of the task. The reward
function / discriminator for all methods was a neural network with 1 hidden layer and a hidden dimension of 128, with tanh activations between the layers. Adam Kingma & Ba (2014) was used for policy and reward optimization. All RL training used 5M steps of experience for the training and testing settings of the no-obstacle navigation task. f-IRL uses the same optimization and neural network hyperparameters for the discriminator and reward function. As in Ni et al. (2020), we clamp the output of the reward function within the range [−10, 10] and found this was beneficial for learning. In the navigation-with-obstacle task, training used 15M steps of experience and testing used 5M steps of experience. All experiments were run on an Intel(R) Core(TM) i9-9900X CPU @ 3.50GHz.
D FURTHER REACH TASK DETAILS
D.1 CHOICE OF BASELINES
The “Exact MaxEntIRL" approach is excluded because it cannot be computed exactly for high-dimensional state spaces. GCL is excluded because of its poor performance on the toy task relative to other methods. We also compare to the following imitation learning methods, which learn only policies and no transferable reward:
• Behavioral Cloning (BC) Bain & Sammut (1995): Train a policy using supervised learning to match the actions in the expert dataset.
• Generative Adversarial Imitation Learning (GAIL) Ho & Ermon (2016): Trains a discriminator to distinguish expert from agent transitions and then uses the discriminator confusion score as the reward. This reward is coupled with the current policy Finn et al. (2016a) (referred to as a “pseudo-reward") and therefore cannot train policies from scratch.
D.2 POLICY+NETWORK REPRESENTATION
All methods use a neural network to represent the policy and reward with 1 hidden layer, 128 hidden units, and tanh-activation functions between the layers. We use PPO as the policy optimization method for all methods. All methods in all tasks use demonstrations obtained from a policy trained with PPO using a manually engineered reward.
D.3 HYPERPARAMETERS
The hyperparameters for all methods from the Reaching task are described in Table 11. The Adam optimizer Kingma & Ba (2014) was used for policy and reward optimization. All RL training used 1M steps of experience for the training and testing settings. The “Reward" and “Policy+Reward" transfer strategies trained policies with the same set of hyperparameters.
E TRIFINGER EXPERIMENT DETAILS
E.1 POLICY+NETWORK REPRESENTATION
All methods use a neural network to represent the policy and reward with 1 hidden layer, 128 hidden units, and tanh-activation functions between the layers. We use PPO as the policy optimization method for all methods. All methods in all tasks use demonstrations obtained from a policy trained with PPO using a manually engineered reward.
E.2 HYPERPARAMETERS
The hyperparameters for all methods for the Trifinger reaching task are described in Table 12. The Adam optimizer Kingma & Ba (2014) was used for policy and reward optimization. All RL training used 500k steps of experience for the reward training phase and 100k steps of experience for policy optimization in test settings. | 1. What is the focus and contribution of the paper on imitation learning?
2. What are the strengths of the proposed approach, particularly in terms of its ability to improve generalization and robustness?
3. What are the weaknesses of the paper, especially regarding its limitations and experimental scope?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The submission proposes a novel IRL algorithm.
Instead of optimizing a maximum-entropy or adversarial objective for reward learning as prior methods do, the proposed method (BC-IRL) seeks a reward function such that a policy updated with that reward stays close to the expert behavior. To optimize this objective, the few-step-updated policy parameters are optimized with respect to the reward parameters in a bi-level fashion. The learned reward function shows better generalization than those of existing IRL methods and can therefore train more robust policies.
Strengths And Weaknesses
Strength
The proposed method meaningfully improves on previous imitation learning methods. Both the generalization quality and the robustness of the learned policy consistently outperform prior imitation learning methods.
The qualitative results clearly depict the benefit of the method and how it can improve generalization power.
Paper is well written and easy to follow.
Weakness
As clearly described in the paper, BC-IRL must be used with a backbone RL algorithm whose policy update is differentiable with respect to the reward parameters. This limitation excludes a meaningful portion of conventional RL algorithms.
Evaluations are done on relatively short-horizon and not very diverse domains. A few more results on other domains would strengthen the experimental support.
Clarity, Quality, Novelty And Reproducibility
This paper has high clarity and quality. All text and graphics are well-polished and structured. Code, hyperparameters, and experimental details are provided for reproduction. |
ICLR | Title
BC-IRL: Learning Generalizable Reward Functions from Demonstrations
Abstract
How well do reward functions learned with inverse reinforcement learning (IRL) generalize? We illustrate that state-of-the-art IRL algorithms, which maximize a maximum-entropy objective, learn rewards that overfit to the demonstrations. Such rewards struggle to provide meaningful rewards for states not covered by the demonstrations, a major detriment when using the reward to learn policies in new situations. We introduce BC-IRL, a new inverse reinforcement learning method that learns reward functions that generalize better when compared to maximum-entropy IRL approaches. In contrast to the MaxEnt framework, which learns to maximize rewards around demonstrations, BC-IRL updates reward parameters such that the policy trained with the new reward matches the expert demonstrations better. We show that BC-IRL learns rewards that generalize better on an illustrative simple task and two continuous robotic control tasks, achieving over twice the success rate of baselines in challenging generalization settings.
1 INTRODUCTION
Reinforcement learning has demonstrated success on a broad range of tasks from navigation Wijmans et al. (2019), locomotion Kumar et al. (2021); Iscen et al. (2018), and manipulation Kalashnikov et al. (2018). However, this success depends on specifying an accurate and informative reward signal to guide the agent towards solving the task. For instance, imagine designing a reward function for a robot window cleaning task. The reward should tell the robot how to grasp the cleaning rag, how to use the rag to clean the window, and to wipe hard enough to remove dirt, but not hard enough to break the window. Manually shaping such reward functions is difficult, non-intuitive, and time-consuming. Furthermore, the need for an expert to design a reward function for every new skill limits the ability of agents to autonomously acquire new skills. Inverse reinforcement learning (IRL) (Abbeel & Ng, 2004; Ziebart et al., 2008; Osa et al., 2018) is one way of addressing the challenge of acquiring rewards by learning reward functions from demonstrations and then using the learned rewards to learn policies via reinforcement learning. When compared to direct imitation learning, which learns policies from demonstrations directly, potential benefits of IRL are at least two-fold: first, IRL does not suffer from the compounding error problem that is often observed with policies directly learned from demonstrations (Ross et al., 2011; Barde et al., 2020); and second, a reward function could be a more abstract and parsimonious description of
the observed task that generalizes better to unseen task settings (Ng et al., 2000; Osa et al., 2018). This second potential benefit is appealing as it allows the agent to learn a reward function to train policies not only for the demonstrated task setting (e.g. specific start-goal configurations in a reaching task) but also for unseen settings (e.g. unseen start-goal configurations), autonomously without additional expert supervision.
However, thus far the generalization properties of reward functions learned via IRL are poorly understood. Here, we study the generalization of learned reward functions and find that prior IRL methods fail to learn generalizable rewards and instead overfit to the demonstrations. Figure 1 demonstrates this on a task where a point mass agent must navigate in a 2D space to a goal location at the center. An important reward characteristic for this task is that an agent, located anywhere in the state-space, should receive increasing rewards as it gets closer to the goal. Most recent prior work Fu et al. (2017); Ni et al. (2020); Finn et al. (2016c) developed IRL algorithms that optimize the maximum entropy objective (Ziebart et al., 2008) (Figure 1b), which fails to capture goal distance in the reward. Instead, the MaxEnt objective leads to rewards that separate non-expert from expert behavior by maximizing reward values along the expert demonstration. While useful for imitating the experts, the MaxEnt objective prevents the IRL algorithms from learning to assign meaningful rewards to other parts of the state space, thus limiting generalization of the reward function.
As a remedy to the reward generalization challenge in the maximum entropy IRL framework, we propose a new IRL framework called Behavioral Cloning Inverse Reinforcement Learning (BC-IRL). In contrast to the MaxEnt framework, which learns to maximize rewards around demonstrations, the BC-IRL framework updates reward parameters such that the policy trained with the new reward matches the expert demonstrations better. This is akin to the model-agnostic meta-learning (Finn et al., 2017) and loss learning (Bechtle et al., 2021) frameworks where model or loss function parameters are learned such that the downstream task performs well when utilizing the meta-learned parameters. By using gradient-based bi-level optimization Grefenstette et al. (2019), BC-IRL can optimize the behavior cloning loss to learn the reward, rather than a separation objective like the maximum entropy objective. Importantly, to learn the reward, BC-IRL differentiates through the reinforcement learning policy optimization, which incorporates exploration and requires the reward to provide a meaningful reward throughout the state space to guide the policy to better match the expert. We find BC-IRL learns more generalizable rewards (Figure 1c), and achieves over twice the success rate of baseline IRL methods in challenging generalization settings.
Our contributions are as follows: 1) The general BC-IRL framework for learning more generalizable rewards from demonstrations, and a specific BC-IRL-PPO variant that uses PPO as the RL algorithm. 2) A quantitative and qualitative analysis of reward functions learned with BC-IRL and Maximum-Entropy IRL variants on a simple task for easy analysis. 3) An evaluation of our novel BC-IRL algorithm on two continuous control tasks against state-of-the-art IRL and IL methods. Our method learns rewards that transfer better to novel task settings.
2 BACKGROUND AND RELATED WORK
We begin by reviewing Inverse Reinforcement Learning through the lens of bi-level optimization. We assume access to a rewardless Markov decision process (MDP) defined through the tuple M = (S,A,P, ρ0, γ,H) for state space S, action space A, transition distribution P(s′|s, a), initial state distribution ρ0, discounting factor γ, and episode horizon H. We also have access to a set of expert demonstration trajectories $\mathcal{D}^e = \{\tau^e_i\}_{i=1}^{N}$, where each trajectory is a sequence of (state, action) tuples.
IRL learns a parameterized reward function Rψ(τi) which assigns a trajectory a scalar reward. Given the reward, a policy πθ(a|s) is learned which maps from states to a distribution over actions. The goal of IRL is to produce a reward Rψ , such that a policy trained to maximize the sum of (discounted) rewards under this reward function matches the behavior of the expert. This is captured through the following bi-level optimization problem:
$$\min_{\psi}\ \mathcal{L}_{\text{IRL}}(R_\psi; \pi_\theta) \qquad \text{(outer obj.)} \qquad (1a)$$
$$\text{s.t.}\quad \theta \in \arg\max_{\theta}\ g(R_\psi, \theta) \qquad \text{(inner obj.)} \qquad (1b)$$
where LIRL(Rψ;πθ) denotes the IRL loss and measures the performance of the learned reward Rψ and policy πθ; g(Rψ, θ) is the reinforcement learning objective used to optimize policy parameters θ. Algorithms for this bi-level optimization consist of an outer loop ((1a)) that optimizes the reward and an inner loop ((1b)) that optimizes the policy given the current reward.
Maximum Entropy IRL: Early work on IRL learns rewards by separating non-expert from expert trajectories (Ng et al., 2000; Abbeel & Ng, 2004; Abbeel et al., 2010). A primary challenge of these early IRL algorithms was the ambiguous nature of learning reward functions from demonstrations: many possible policies exist for a given demonstration, and thus many possible rewards exist. The Maximum Entropy (MaxEnt) IRL framework (Ziebart et al., 2008) seeks to address this ambiguity, by learning a reward (and policy) that is as non-committal (uncertain) as possible, while still explaining the demonstrations. More concretely, given reward parameters ψ, MaxEnt IRL optimizes the log probability of the expert trajectories τe from demonstration dataset De through the following loss,
$$\mathcal{L}_{\text{MaxEnt-IRL}}(R_\psi) = -\mathbb{E}_{\tau^e \sim \mathcal{D}^e}\left[\log p(\tau^e \mid \psi)\right] = -\mathbb{E}_{\tau^e \sim \mathcal{D}^e}\left[\log \frac{\exp\left(R_\psi(\tau^e)\right)}{Z(\psi)}\right] = -\mathbb{E}_{\tau^e \sim \mathcal{D}^e}\left[R_\psi(\tau^e)\right] + \log Z(\psi).$$
A key challenge of MaxEnt IRL is estimating the partition function $Z(\psi) = \int \exp R_\psi \, d\tau$. Ziebart et al. (2008) approximate Z in small discrete state spaces with dynamic programming.
MaxEnt from the Bi-Level perspective: However, computing the partition function becomes intractable for high-dimensional and continuous state spaces. Thus algorithms approximate Z using samples from a policy optimized via the current reward. This results in the partition function estimate being a function of the current policy, $\log \hat{Z}(\psi; \pi_\theta)$. As a result, MaxEnt approaches end up following the bi-level optimization template by iterating between: 1) updating reward function parameters given current policy samples via the outer objective (1a); and 2) optimizing the policy parameters with the current reward parameters via an inner policy optimization objective and algorithm (1b). For instance, model-based IRL methods such as Wulfmeier et al. (2017); Levine & Koltun (2012); Englert et al. (2017) use model-based RL (or optimal control) methods to optimize a policy (or trajectory), while model-free IRL methods such as Kalakrishnan et al. (2013); Boularias et al. (2011); Finn et al. (2016b;a) learn policies via model-free RL in the inner loop. All of these methods use policy rollouts to approximate either the partition function of the maximum-entropy IRL objective or its gradient with respect to reward parameters in various ways (outer loop). For instance, Finn et al. (2016b) learn a stochastic policy q(τ) and sample from it to estimate $Z(\psi) \approx \frac{1}{M}\sum_{\tau_i \sim q(\tau)} \frac{\exp R_\psi(\tau_i)}{q(\tau_i)}$ with M samples from q(τ). Fu et al. (2017) with adversarial IRL (AIRL) follow this idea and view the problem as an adversarial training process between policy πθ(a|s) and discriminator $D(s) = \frac{\exp R_\psi(s)}{\exp R_\psi(s) + \pi_\theta(a \mid s)}$. Ni et al. (2020) analytically compute the gradient of the f-divergence between the expert state density and the MaxEnt state distribution, circumventing the need to directly compute the partition function.
Meta-Learning and IRL: Like some prior work (Xu et al., 2019; Yu et al., 2019; Wang et al., 2021; Gleave & Habryka, 2018; Seyed Ghasemipour et al., 2019), BC-IRL combines meta-learning and inverse reinforcement learning. However, these works focus on fast adaptation of reward functions to new tasks for MaxEnt IRL through meta-learning. These works require demonstrations of the new task to adapt the reward function. The BC-IRL algorithm is a fundamentally new way to learn reward functions and does not require demonstrations for new test settings. Most related to our work is Das et al. (2020), which also uses gradient-based bi-level optimization to match the expert. However, this approach requires a pre-trained dynamics model. Our work generalizes this idea since BC-IRL can optimize general policies, allowing any objective that is a function of the policy and any differentiable RL algorithm. We show our method, without an accurate dynamics model, outperforms Das et al. (2020) and scales to more complex tasks where Das et al. (2020) fails to learn.
Generalization in IRL: Some prior works have explored how learned rewards can generalize to training policies in new situations. For instance, Fu et al. (2017) explored how rewards can generalize to training policies under changing dynamics. However, most prior work focuses on improving policy generalization to unseen task settings by addressing challenges introduced by the adversarial training objective of GAIL (Xu & Denil, 2019; Zolna et al., 2020; 2019; Lee et al., 2021; Barde et al., 2020; Jaegle et al., 2021; Dadashi et al., 2020). Finally, in contrast to most related work on generalization, our work focuses on analyzing and improving reward function transfer to new task settings.
3 LEARNING REWARDS VIA BEHAVIORAL CLONING INVERSE REINFORCEMENT LEARNING (BC-IRL)
We now present our algorithm for learning reward functions via behavioral cloning inverse reinforcement learning. We start by contrasting the maximum entropy and imitation loss objectives for
inverse reinforcement learning in Section 3.1. We then introduce a general formulation for BC-IRL in Section 3.2, and present an algorithmic instantiation that optimizes a BC objective to update the reward parameters via gradient-based bi-level optimization with a model-free RL algorithm in the inner loop in Section 3.3.
3.1 OUTER OBJECTIVES: MAX-ENT VS BEHAVIOR CLONING
In this work, we study an alternative to the maximum entropy IRL objective. While the maximum entropy IRL objective has led to impressive results, it is unclear how well this objective is suited for learning reward functions that generalize to new task settings, such as new start and goal distributions. Intuitively, assigning a high reward to demonstrated states (without task-specific hand-designed feature engineering) makes sense when you want to learn a reward function that can recover exactly the expert behavior, but it leads to reward landscapes that do not necessarily capture the essence of the task (e.g. to reach a goal, see Figure 1b). Instead of specifying an IRL objective that is directly a function of reward parameters (like maximum entropy), we aim to measure the reward function's performance through the policy that results from optimizing the reward. With such an objective, we can optimize reward parameters for what we care about: for the resulting policy to match the behavior of the expert. The behavioral cloning (BC) loss measures how well the policy and expert actions match, defined for continuous actions as $\mathbb{E}_{(s_t,a_t)\sim\tau^e}\left(\pi_\theta(s_t)- a_t\right)^2$, where $\tau^e$ is an expert demonstration trajectory. Policy parameters θ are a result of using the current reward parameters ψ, which we can make explicit by making θ a function of ψ in the objective: $\mathcal{L}_{\text{BC-IRL}} = \mathbb{E}_{(s_t,a_t)\sim\tau^e}\left(\pi_{\theta(\psi)}(s_t)- a_t\right)^2$. The IRL objective is now formulated in terms of the policy rollout “matching" the expert demonstration through the BC loss. We use the chain rule to decompose the gradient of $\mathcal{L}_{\text{BC-IRL}}$ with respect to reward parameters ψ. We also expand how the policy parameters θ(ψ) are updated via a REINFORCE update with learning rate α to optimize the current reward Rψ (but any differentiable policy update applies):
$$\frac{\partial}{\partial\psi}\mathcal{L}_{\text{BC-IRL}} = \frac{\partial}{\partial\psi}\left[\mathbb{E}_{(s_t,a_t)\sim\tau^e}\left[\left(\pi_{\theta(\psi)}(s_t)- a_t\right)^2\right]\right] = \mathbb{E}_{(s_t,a_t)\sim\tau^e}\left[2\left(\pi_{\theta(\psi)}(s_t)- a_t\right)\right]\frac{\partial}{\partial\psi}\pi_{\theta(\psi)}$$
$$\text{where}\quad \theta(\psi) = \theta_{\text{old}} + \alpha\,\mathbb{E}_{(s_t,a_t)\sim\pi_{\theta_{\text{old}}}}\left[\left(\sum_{k=t+1}^{T}\gamma^{k-t-1}R_\psi(s_k)\right)\nabla \ln\pi_{\theta_{\text{old}}}(a_t\mid s_t)\right] \qquad (2)$$
Computing the gradient for the reward update in Equation (2) includes samples from π collected in the reinforcement learning (RL) inner loop. This means the reward is trained on diverse states beyond the expert demonstrations through data collected via exploration in RL. As the agent explores during training, BC-IRL must provide a meaningful reward signal throughout the state space to guide the policy to better match the expert. Note that this is a fundamentally different reward update rule compared with current state-of-the-art methods that maximize a maximum entropy objective. We show in our experiments that this yields success rates twice as high as state-of-the-art MaxEnt IRL baselines in challenging generalization settings, demonstrating that BC-IRL learns more generalizable rewards that provide meaningful rewards beyond the expert demonstrations. The BC loss updates only the reward, as opposed to updating the policy as typical BC for imitation learning does Bain & Sammut (1995). BC-IRL is an IRL method that produces a reward, unlike regular BC, which learns only a policy. Since BC-IRL uses RL, not BC, to update the policy, it avoids the pitfalls of BC for policy optimization such as compounding errors. Our experiments show that policies trained with rewards from BC-IRL generalize over twice as well to new settings as those trained with BC. In the following section, we show how to optimize this objective via bi-level optimization.
3.2 BC-IRL
We formulate the IRL problem as a gradient-based bi-level optimization problem, where the outer objective is optimized by differentiating through the optimization of the inner objective. We first describe how the policy is updated with a fixed reward, then how the reward is updated for the policy to better match the expert. Inner loop (policy optimization): The inner loop optimizes policy parameters θ given current reward function Rψ. The inner loop takes K gradient steps to optimize the policy given the current reward. Since the reward update will differentiate through this policy update, we require the policy update to be differentiable with respect to the reward function parameters. Thus, any reinforcement learning algorithm which is differentiable with respect to the reward function parameters can be plugged in here, which is the case for many policy gradient and model-based methods. However, this does not
include value-based methods such as DDPG Lillicrap et al. (2015) or SAC Haarnoja et al. (2018) that directly optimize value estimates since the reward function is not directly used in the policy update.
Outer loop (reward optimization): The outer loop optimization updates the reward parameters ψ via gradient descent. More concretely: after the inner loop, we compute the gradient of the outer loop objective ∇ψLBC-IRL with respect to the reward parameters ψ by propagating through the inner loop. Intuitively, the new policy is a function of the reward parameters since the old policy was updated to better maximize the reward. The gradient update on ψ adjusts the reward function parameters such that the policy trained with this reward produces trajectories that match the demonstrations more closely. We use Grefenstette et al. (2019) for this higher-order optimization.
Algorithm 1 BC-IRL (general framework)
1: Initial reward Rψ, policy πθ
2: Policy updater POLICY_OPT(R, π)
3: Expert demonstrations De
4: for each epoch do
5:   Policy update: θ′ ← POLICY_OPT(Rψ, πθ)
6:   Sample demo batch τe ∼ De
7:   Compute IRL loss LBC-IRL = E(st,at)∼τe (πθ′(st) − at)²
8:   Compute gradient of IRL loss w.r.t. the reward: ∇ψLBC-IRL = (∂LBC-IRL/∂θ′) (∂POLICY_OPT(Rψ, πθ)/∂ψ)
9:   ψ ← ψ − ∇ψLBC-IRL
10: end for
BC-IRL is summarized in Algorithm 1. Line 5 describes the inner loop update, where we update the policy πθ to maximize the current reward Rψ. Lines 6-7 compute the BC loss between the updated policy πθ′ and expert actions sampled from expert dataset De. The BC loss is then used in the outer loop to perform a gradient step on reward parameters in lines 8-9, where the gradient computation requires differentiating through the policy update in line 5.
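To make the bi-level structure concrete, the following is a minimal PyTorch sketch of one BC-IRL iteration with a single differentiable REINFORCE-style inner step (as in Eq. (2)) and the BC outer loss. This is our own illustrative code, not the authors' implementation (which performs the higher-order bookkeeping with the library of Grefenstette et al. (2019)); the network sizes, step sizes, fixed action noise, and reward-to-go estimator are simplifying assumptions.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(4, 128), nn.Tanh(), nn.Linear(128, 2))  # pi_theta(s) -> mean action
reward = nn.Sequential(nn.Linear(4, 128), nn.Tanh(), nn.Linear(128, 1))  # R_psi(s) -> scalar reward
reward_opt = torch.optim.Adam(reward.parameters(), lr=1e-3)
log_std = torch.zeros(2)          # fixed Gaussian action noise (assumption)
alpha, gamma = 0.1, 0.99          # inner-loop step size and discount (assumptions)

def bcirl_step(roll_s, roll_a, demo_s, demo_a):
    """One outer-loop reward update from a policy rollout (roll_*) and a demo batch (demo_*)."""
    # ---- inner loop: one differentiable REINFORCE step on the current reward ----
    r = reward(roll_s).squeeze(-1)                     # R_psi(s_t); stays on the graph w.r.t. psi
    G, returns = torch.zeros(()), []
    for r_t in reversed(r):                            # discounted reward-to-go G_t = r_t + gamma * G_{t+1}
        G = r_t + gamma * G
        returns.append(G)
    returns = torch.stack(list(reversed(returns)))
    logp = torch.distributions.Normal(policy(roll_s), log_std.exp()).log_prob(roll_a).sum(-1)
    pg_obj = (returns * logp).mean()                   # REINFORCE objective; its theta-gradient depends on psi
    grads = torch.autograd.grad(pg_obj, tuple(policy.parameters()), create_graph=True)
    theta_new = {n: p + alpha * g                      # theta'(psi) = theta + alpha * grad_theta J
                 for (n, p), g in zip(policy.named_parameters(), grads)}

    # ---- outer loop: BC loss of the updated policy; backprop through the inner step into psi ----
    pred = torch.func.functional_call(policy, theta_new, (demo_s,))  # needs a recent PyTorch with torch.func
    bc_loss = ((pred - demo_a) ** 2).mean()
    reward_opt.zero_grad()
    bc_loss.backward()                                 # gradient reaches psi through theta_new
    reward_opt.step()
    return bc_loss.item()
```

In the BC-IRL-PPO instantiation described in the next section, the inner step is a PPO update rather than this single REINFORCE step, but the gradient path from the BC loss back to ψ is the same.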
3.3 BC-IRL-PPO
We now instantiate a specific version of the BC-IRL framework that uses proximal policy optimization (PPO) Schulman et al. (2017) to optimize the policy in the inner loop. This specific version, called BC-IRL-PPO, is summarized in Algorithm 2.
Algorithm 2 BC-IRL-PPO
1: Initial reward Rψ, policy πθ, value function Vν
2: Expert demonstrations De
3: for each epoch do
4:   for k = 1 → K do
5:     Run policy πθ in environment for T timesteps
6:     Compute rewards r̂ψt for the rollout with the current Rψ
7:     Compute advantages Âψ using r̂ψ and Vν
8:     Compute LPPO using Âψ
9:     Update πθ with ∇θLPPO
10:   end for
11:   Sample demo batch τe ∼ De
12:   Compute LBC-IRL = E(st,at)∼τe (πθ(st) − at)²
13:   Update reward Rψ with ∇ψLBC-IRL
14: end for
BC-IRL-PPO learns a state-only parameterized reward function Rψ(s), which assigns a state s ∈ S a scalar reward. The state-only reward has been shown to lead to rewards that generalize better Fu et al. (2017). BC-IRL-PPO begins by collecting a batch of rollouts in the environment from the current policy (line 5 of Algorithm 2). For each state s in this batch we evaluate the learned reward function Rψ(s) (line 6). From this sequence of rewards, we compute the advantage estimates Ât for each state (line 7). As is typical in PPO, we also utilize a learned value function Vν(st) to predict the value of the starting and ending state for partial episodes in the rollouts. This learned value function Vν is trained to predict the sum of future discounted rewards for the current reward function Rψ and policy πθ (part of LPPO in line 8). Using the advantages, we then compute the PPO update (line 9 of Algorithm 2) using the standard PPO loss in equation 8 of Schulman et al. (2017). Note the advantages are a function of the reward function parameters used to compute the rewards, so PPO is differentiable with respect to the reward function. Next, in the outer loop update, we update the reward parameters by sampling a batch of demonstration transitions (line 11), computing the behavior cloning IRL objective LBC-IRL (line 12), and updating the reward parameters ψ via gradient descent on LBC-IRL (line 13). Finally, in this work, we perform one policy optimization step (K = 1) per reward function update. Furthermore, rather than re-train a policy from scratch for every reward function iteration, we initialize each inner loop from the previous πθ. This initialization is important in more complex domains where K would otherwise have to be large to acquire a good policy from scratch.
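The key implementation detail in Algorithm 2 is that the advantages remain differentiable with respect to the reward parameters. The sketch below (our own illustration, not the authors' code) computes a GAE-style advantage from Rψ(s) without detaching it from the autograd graph; treating the value estimates as constants here is a simplifying assumption.

```python
import torch

def advantages_from_learned_reward(reward_net, states, values, gamma=0.99, lam=0.95):
    """GAE-style advantages computed from the learned reward R_psi, kept on the autograd graph."""
    r = reward_net(states).squeeze(-1)                           # R_psi(s_t): differentiable w.r.t. psi
    deltas = r[:-1] + gamma * values[1:].detach() - values[:-1].detach()
    adv, running = [], torch.zeros(())
    for d in reversed(deltas):                                   # A_t = delta_t + gamma * lam * A_{t+1}
        running = d + gamma * lam * running
        adv.append(running)
    return torch.stack(list(reversed(adv)))                      # these feed L_PPO, so dL_PPO/dpsi exists
```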
4 ILLUSTRATION & QUALITATIVE ANALYSIS OF LEARNED REWARDS
We first analyze the rewards learned by different IRL methods in a 2D point mass navigation task. The purpose of this analysis is to test our hypothesis that our method learns more generalizable rewards compared to maximum entropy baselines in simple low-dimensional settings amenable to intuitive visualizations. Specifically, we compare BC-IRL-PPO to the following baselines:
• Exact MaxEntIRL (MaxEnt) Ziebart et al. (2008): The exact MaxEntIRL method where the partition function is exactly computed by discretizing the state space.
• Guided Cost Learning (GCL) Finn et al. (2016b): Uses the maximum-entropy objective to update the reward. The partition function is approximated via adaptive sampling.
• Adversarial IRL (AIRL) Fu et al. (2017): An IRL method that uses a learned discriminator to distinguish expert and agent states. As described in Fu et al. (2017), we also use a shaping network h during reward training, but only visualize and transfer the reward approximator g.
• f-IRL Ni et al. (2021): Another MaxEnt IRL-based method; f-IRL computes the analytic gradient of the f-divergence between the agent and expert state distributions. We use the JS divergence version.
Our method does not require demonstrations at test time; instead, we transfer our learned rewards zero-shot. Thus we forego comparisons to other meta-learning methods, such as Xu et al. (2019), which require test-time demonstrations. While a direct comparison with Das et al. (2020) is not possible because their method assumes access to a pre-trained dynamics model, we conduct a separate study comparing their method with an oracle dynamics model against BC-IRL in Appendix A.5. All baselines use PPO Schulman et al. (2017) for policy optimization, as commonly done in prior work Orsini et al. (2021). All methods learn a state-dependent reward rψ(s) and a policy π(s), both parametrized as neural networks. Further details are described in Appendix C.
The 2D point navigation task consists of a point agent policy that outputs a desired change in (x, y) position (velocity) (∆x, ∆y) at every time step. The task has a trajectory length of T = 5 time steps with 4 demonstrations. Figure 2a visualizes the expert demonstrations where darker points are earlier time steps. The agent starting state distribution is centered around the starting state of each demonstration. Figures 2b and 2c visualize the rewards learned by BC-IRL and the AIRL baseline; lighter regions indicate higher rewards. In Figure 2b, BC-IRL learns a reward that looks like a quadratic bowl centered at the origin, which models the distance to the goal across the entire state space. AIRL, the maximum entropy baseline, visualized in Figure 2c, learns a reward function where high rewards are placed on the demonstrations and low rewards elsewhere. Other baselines are visualized in Appendix Figure 4.
To analyze the generalization capabilities of the learned rewards, we use them to train policies on a new starting state distribution (visualized in Appendix Figure 9). Concretely, a newly initialized policy is trained from scratch to maximize the learned reward from the testing start state distribution. The policy is trained with 5 million environment steps, which is the same number of steps as for learning the reward. The testing starting state distribution has no overlap with the training start state distribution. Policy optimization at test time is also done with PPO. Figures 2d and 2e display trajectories from the trained policies, where darker points again correspond to earlier time steps.
This qualitative evaluation shows that BC-IRL learns a meaningful reward for states not covered by the demonstrations. Thus at test time agent trajectories are guided towards the goal with the terminal states (lightest points) close to the goal. The X-shaped rewards learned by the baselines do not provide meaningful rewards in the testing setting as they assign uniformly low rewards to states not covered by the demonstration. This provides poor reward shaping which prevents the agent from reaching the goal within the 5M training interactions with the environment. This results in agent trajectories that do not end close to the goal by the end of training.
Next, we report quantitative results in Table 1. We evaluate the performance of the policy trained at test time by reporting the distance from the policy's final trajectory state $s_T$ to the goal $g$: $\|s_T - g\|_2^2$. We report the final train performance of the algorithm (“Train"), along with the performance of the policy trained from scratch with the learned reward in the train distribution “Eval (Train)" and testing distribution “Eval (Test)". These results confirm that BC-IRL learns more generalizable rewards than baselines. Specifically, BC-IRL achieves a lower distance on the testing starting state distribution at 0.04, compared to 0.53, 1.6, and 0.36 for AIRL, GCL, and MaxEnt, respectively. Surprisingly, BC-IRL even performs better than exact MaxEnt, which uses privileged information about the state space to estimate the partition function. This fits with our hypothesis that our method learns more generalizable rewards than MaxEnt, even when the MaxEnt objective is exactly computed. We repeat this analysis for a version of the task with an obstacle blocking the path to the goal in Appendix A.2 and reach the same findings even when BC-IRL must learn an asymmetric reward function. We also compare learned rewards to manually defined rewards in Appendix A.3. Although the baselines learn rewards that do not generalize beyond the demonstrations, with enough environment interactions, policies trained under these rewards will eventually reach the high rewards along the expert demonstrations. Since all demonstrations reach the goal in the point mass task, the X-shaped rewards that the baselines learn have high reward at the center. Despite the X-shaped reward providing little shaping off the X, with enough environment interactions, the agent eventually discovers the high-reward point at the goal. After training AIRL for 15M steps, 3× the number of steps used for reward learning and for the experiments in Table 1 and Figure 2, the policy eventually reaches 0.08 ± 0.01 distance to the goal. In the same setting, BC-IRL achieves 0.04 ± 0.01 distance to the goal in under 5M steps. The additional performance gap is due to BC-IRL learning a reward whose maximum reward value lies closer to the center (0.02 from the center) than AIRL's (0.04 from the center).
5 EXPERIMENTS
In our experiments, we aim to answer the following questions: (1) Can BC-IRL learn reward functions that can train policies from scratch? (2) Does BC-IRL learn rewards that can generalize to unseen states and goals better than IRL baselines in complex environments? (3) Can learned rewards transfer better than policies learned directly with imitation learning? We show the first in Section 5.1 and the next two in Section 5.2. We evaluate on two continuous control tasks: 1) the Fetch reaching task Szot et al. (2021) (Fig 3a), and 2) the TriFinger reaching task Ahmed et al. (2021) (Fig 3b).
5.1 REWARD TRAINING PHASE: LEARNING REWARDS TO MATCH THE EXPERT
Experimental Setup and Evaluation Metrics In the Fetch reaching task, set up in the Habitat 2.0 simulator Szot et al. (2021), the robot must move its end-effector to a 3D goal location g which changes between episodes. The action space of the agent is the desired velocities for each of the 7 joints on the robot arm. The robot succeeds if the end-effector is within 0.1 m of the target position by the maximum episode length of 20 time steps. During reward learning, the goal g is sampled from a cube with 0.2 m side length in front of the robot, $g \sim \mathcal{U}([0]^3, [0.2]^3)$. We provide 100 demonstrations.
Table 2: Performance in the training distribution of policies trained with the learned rewards.
                                 BC-IRL-PPO         AIRL
Fetch Reach (Success) ↑          1.00 ± 0.00        0.96 ± 0.00
Trifinger Reach (Goal Dist) ↓    0.002 ± 0.0015     0.007 ± 0.0017
Trifinger reach performance is measured as the distance to the demonstrated goal, $(g - g_{\text{demo}})^2$, in meters.
Evaluation and Baselines We evaluate BC-IRL-PPO by how well the learned reward can train new policies from scratch in the same start state and goal distribution as the demonstrations. Given the point mass results in Section 4, we compare BC-IRL-PPO to AIRL, the best-performing baseline for reward learning. More details on baseline choice, policy and reward representation, and hyperparameters are described in Appendix D.
Results and Analysis As Table 2 confirms, our method and baselines are able to imitate the demonstrations when policies are evaluated in the same task setting as the expert. All methods are able to achieve a near 100% success rate and low distance to goal. Methods also learn with similar sample efficiency as shown in the learning curves in Figure 3d. These high-success rates indicate BC-IRL-PPO and AIRL learn rewards that capture the expert behavior and train policies to mimic the expert. When training policies in the same state/goal distribution as the expert, rewards from BC-IRL-PPO follow any constraints followed by the experts, just like the IRL baselines.
5.2 TEST PHASE: EVALUATING REWARD AND POLICY GENERALIZATION
In this section, we evaluate how learned rewards and policies can generalize to new task settings with increased starting state and goal sampling noise. We evaluate the generalization ability of rewards by evaluating how well they can train new policies to reach the goal in new start and goal distributions not seen in the demonstrations. This evaluation captures the reality that it is infeasible to collect demonstrations for every possible start/goal configuration. We thus aim to learn rewards from demonstrations that can generalize beyond the start/goal configurations present in those demonstrations. We quantify reward generalization ability by whether the reward can train a policy to perform the task in the new start/goal configurations. For the Fetch Reach task, we evaluate on three wider test goal sampling distributions $g \sim \mathcal{U}([0]^3, [g_{\max}]^3)$: Easy ($g_{\max} = 0.25$), Medium ($g_{\max} = 0.4$), and Hard ($g_{\max} = 0.55$), all visualized in Figure 3c. Similarly, we evaluate on new state regions, which increase the starting and goal initial state distributions but exclude the regions from training, exposing the reward only to unseen initial states and goals. In Trifinger, we sample start configurations from around the start joint position in the demonstrations, with increasingly wider distributions ($s_0 \sim \mathcal{N}(s_0^{\text{demo}}, \delta)$, with δ = 0.01, 0.03, 0.05). We evaluate reward function performance by how well the reward function can train new policies from scratch. However, now the reward must generalize to inferring rewards in the new start state and goal distributions. We additionally compare to two imitation learning baselines: Generative Adversarial Imitation Learning (GAIL) Ho & Ermon (2016) and Behavior Cloning (BC). We compare different methods of transferring the learned reward and policy to the test setting: 1) Reward: Transfer only the reward from the above training phase and train a newly initialized policy in the test setting.
2) Policy: Transfer only the policy from the above training phase and immediately evaluate the policy without further training in the test setting. This compares transferring learned rewards and transferring learned policies. We use this transfer strategy to compare against direct imitation learning methods. 3) Reward+Policy: Transfer the reward and policy and then fine-tune the policy using the learned reward in the test setting. Results for this setting are in Appendix B.2.
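For concreteness, the test goal and start distributions described above could be sampled as in the following sketch (our own illustration; interpreting δ as the Gaussian scale is an assumption).

```python
import numpy as np

def sample_fetch_goal(setting: str) -> np.ndarray:
    # g ~ U([0]^3, [g_max]^3) with the Easy / Medium / Hard ranges from the text.
    g_max = {"easy": 0.25, "medium": 0.4, "hard": 0.55}[setting]
    return np.random.uniform(low=0.0, high=g_max, size=3)

def sample_trifinger_start(s_demo: np.ndarray, delta: float) -> np.ndarray:
    # s_0 ~ N(s_0^demo, delta) around the demonstrated start joint position.
    return np.random.normal(loc=s_demo, scale=delta)
```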
Results and Analysis The results in Table 3 show BC-IRL-PPO learns rewards that generalize better than IRL baselines to new settings. In the hardest generalization setting, BC-IRL-PPO achieves over twice the success rate of AIRL. AIRL struggles to transfer its learned reward to harder generalization settings, with performance decreasing as the goal sampling distribution becomes larger and has less overlap with the training goal distribution. In the “Hard" start region generalization setting, the performance of AIRL degrades to 34% success rate. On the other hand, BC-IRL-PPO learns a generalizable reward and performs well even in the “Hard" generalization strategy, achieving 76% success. This trend is true both for generalization to new start state distributions and for new start state regions. The results for Trifinger Reach in Table 4 support these findings with rewards learned via BC-IRL-PPO generalizing better to training policies from scratch in all three test distributions. All training curves for training policies from scratch with learned rewards are in Appendix B.1.
Furthermore, the results in Table 3 also demonstrate that transferring rewards “(Reward)" is more effective for generalization than transferring policies “(Policy)". Transferring the reward to train new policies typically outperforms transferring only the policy for all IRL approaches. Additionally, training from scratch with rewards learned via IRL outperforms non-reward learning imitation learning methods that only permit transferring
the policy zero-shot. The policies learned by GAIL and BC generalize worse than training new policies from scratch with the reward learned by BC-IRL-PPO, with BC and GAIL achieving 35% and 37% success rates in the “Hard" generalization setting while our method achieves 76% success. The superior performance of BC-IRL-PPO over BC highlights the important differences between the two methods with our method learning a reward and training the policy with PPO on the learned reward. In Appendix B.2, we also show the “Policy+Reward" transfer setting and demonstrate BC-IRL-PPO also outperforms baselines in this setting. In Appendix B we also analyze performance with the number of demos, different inner and outer loop learning rates, and number of inner loop updates.
6 DISCUSSION AND FUTURE WORK
We propose a new IRL framework for learning generalizable rewards with bi-level gradient-based optimization. By meta-learning rewards, our framework can optimize alternative outer-level objectives instead of the maximum entropy objective commonly used in prior work. We propose BC-IRL-PPO, an instantiation of our new framework, which uses PPO for policy optimization in the inner loop and an action matching objective in the outer loop. We demonstrate that BC-IRL-PPO learns rewards that generalize better than baselines. Potential negative social impacts of this work are that learning reward functions from data could result in less interpretable rewards, leading to more opaque behaviors from agents that optimize the learned reward. Future work will explore alternative instantiations of the BC-IRL framework, such as utilizing sample-efficient off-policy methods like SAC or model-based methods in the inner loop. Model-based methods are especially appealing because a single dynamics model could be shared between tasks, and learning reward functions for new tasks could be achieved purely using the model. Finally, outer loop objectives other than action matching are also possible.
7 ACKNOWLEDGMENTS
The Georgia Tech effort was supported in part by NSF, ONR YIP, and ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
A FURTHER POINT MASS NAVIGATION RESULTS
A.1 QUALITATIVE RESULTS FOR ALL METHODS IN POINT MASS NAVIGATION
Visualizations of the reward functions from all methods for the regular pointmass task are displayed in Figure 4.
A.2 OBSTACLE POINT MASS NAVIGATION
The obstacle point mass navigation task incorporates asymmetric dynamics with an off-centered obstacle. This environment is the same as the point mass navigation task from Section 4, except there is an obstacle blocking the path to the center and the agent only spawns in the top-right hand corner. This task has a trajectory length of T = 50 time steps with 100 demonstrations. Figure 5a visualizes the expert demonstrations where darker points are earlier time steps.
The results in Table 5 are consistent with the non-obstacle point mass task, where BC-IRL generalizes better than a variety of MaxEnt IRL baselines. In the train setting, BC-IRL learns rewards that match the expert behavior of avoiding the obstacle and even achieves better performance than the baselines on this task, with 0.08 distance to the goal versus 0.41 for the best-performing baseline in the train setting, f-IRL. BC-IRL also generalizes better than the baselines, achieving 0.79 distance to goal, compared to the best-performing baseline, MaxEnt, which also has access to oracle information. The reward learned by BC-IRL, visualized in Figure 5b, shows that BC-IRL learns a complex reward to account for the obstacle. Figure 6 visualizes the rewards for all methods.
A.3 COMPARISON TO MANUALLY DEFINED REWARDS
We compare the rewards learned by BC-IRL to two hand-coded rewards. We visualize how well the learned rewards can train policies from scratch in the evaluation distribution in the point navigation with obstacle task. The reward learned by BC-IRL therefore must generalize. On the other hand, the hand-coded rewards do not require any learning. We include a sparse reward for achieving the goal, which does not require domain knowledge when implementing the reward. We also implement a dense reward, defined as the change in Euclidean distance to the goal, $r_t = d_{t-1} - d_t$, where $d_t$ is the distance of the agent to the goal at time $t$. Figure 7a shows policy training curves for the learned and hand-defined rewards. The sparse reward performs poorly and the policy fails to get closer to the goal. On the other hand, the rewards learned by BC-IRL guide the policy closer to the goal. The dense reward, which incorporates more domain knowledge about the task, performs better than the learned reward.
A.4 ANALYZING NUMBER OF INNER LOOP UPDATES
As described in Section 3.3, a hyperparameter in BC-IRL-PPO is the number of inner loop policy optimization steps K, for each reward function update. In our experiments, we selected K = 1. In Figure 7b we examine the training performance of BC-IRL-PPO in the point navigation task with no obstacle for various choices of K. We find that a wide variety of K values perform similarly. We,
therefore, selected K = 1 since it runs the fastest, with no need to track multiple policy updates in the meta optimization.
A.5 BC-IRL WITH MODEL-BASED POLICY OPTIMIZATION
We compare BC-IRL-PPO to a version of BC-IRL that uses model-based RL in the inner loop, inspired by Das et al. (2020). A direct comparison to Das et al. (2020) is not possible because their method assumes access to a pre-trained dynamics model, while in our work, we do not assume access to a ground truth or pre-trained dynamics model. However, we compare to a version of Das et al. (2020) in the point mass navigation task with a ground truth dynamics model. Specifically, we use gradient-based MPC in the inner loop optimization as in Das et al. (2020), but with the BC-IRL outer loop objective. With the BC outer loop objective, this variant also learns generalizable rewards in the point mass navigation task, achieving 0.06 ± 0.03 distance to goal in “Eval (Train)" and 0.07 ± 0.03 in “Eval (Test)". However, in the point mass navigation task with the obstacle, this method fails to learn a reward and struggles to minimize the outer loop objective. We hypothesize that in longer horizon tasks, the MPC inner loop optimization of Das et al. (2020) easily gets stuck in local minima and struggles to differentiate through the entire MPC optimization.
B REACH TASK: FURTHER EXPERIMENT RESULTS
B.1 RL-TRAINING CURVES
In Figure 8 we visualize the training curves for the RL training used in Table 3. Figure 8a shows policy learning progress during the IRL training phase. In each setting, the performance is measured by using the current reward to train a policy and computing the success rate of the policy. Figure 8b to Figure 8d show the policy learning curves at test time, in the generalization settings, where the reward is frozen and must generalize to learn new policies on new goals (“Reward" transfer strategy). These plots show that all methods learn similarly during IRL training (Figure 8a). When transferring the learned rewards to test settings, we see that BC-IRL-PPO performs better at training successful policies as the generalization difficulty increases, with the most difficult setting shown in Figure 8d.
B.2 TRANSFER REWARD+POLICY SETTING
Here, we evaluate the “Policy+Reward" transfer strategy to new environment settings where both the reward and policy are transferred. In the new setting, “Policy+Reward" uses the transferred reward to fine-tune the pre-trained transferred policy with RL. We show results in Table 6 for the “Policy+Reward" transfer strategy alongside the “Reward" transfer strategy from Table 3. We find that “Policy+Reward" performs slightly better than “Reward" in the Hard setting of generalization to new starting state distributions but otherwise performs similarly. Even in the “Policy+Reward" setting, AIRL struggles to learn a good policy in the Medium and Hard settings, achieving 38% and 81% success rate respectively.
B.3 ANALYZING THE NUMBER OF DEMONSTRATIONS
We analyze the effect of the number of demonstrations used for reward learning in Table 7. We find that using fewer demonstrations does not affect the training performance of BC-IRL-PPO and AIRL. We also find our method does just as well with 5 demos as 100 in the +75% noise setting, with any number of demonstrations achieving near-perfect success rates. On the other hand, the performance of AIRL degrades from 93% success rate with 100 demonstrations to 84% in the +75% noise setting. In the +100% noise setting, fewer demonstrations hurt performance for both methods, with our method dropping from 76% success to 69% success and AIRL from 38% success to 42% success.
B.4 BC-IRL HYPERPARAMETER ANALYSIS
BC-IRL-PPO requires a learning rate for the policy optimization and a learning rate for the reward optimization. We compare the performance of our algorithm for various choices of policy and reward learning rates in Table 8. We find that across many different learning rate settings our method achieves high rates of success, but high policy learning rates have a detrimental effect. High reward learning rates have a slight negative impact but are not as severe.
C FURTHER 2D POINT NAVIGATION DETAILS
The start state distributions for the 2D point navigation task are illustrated in Figure 9. The reward is learned using the start distribution in red on 4 equally spaced points from the center. Four demonstrations are also provided in this train start state distribution from each of the four corners. The reward is then transferred and a new policy is trained with the start state distribution in the magenta color. This start state distribution has no overlap with the train distribution and is also equally spaced. The reward must therefore generalize to providing rewards in this new state distribution. The hyperparameters for the methods from the 2D point navigation task in Section 4 are detailed in Table 9 for the no obstacle version and Table 10 for the obstacle version of the task. The reward
function / discriminator for all methods was a neural network with 1 hidden layer and 128 hidden dimension size with tanh-activations between the layers. Adam Kingma & Ba (2014) was used for policy and reward optimization. All RL training used 5M steps of experience for the training and testing setting for the navigation no obstacle task. f-IRL uses the same optimization and neural network hyperparameters for the discriminator and reward function. As in Ni et al. (2020), we clamp the output of the reward function within the range [−10, 10] and found this was beneficial for learning. In the navigation with obstacle task, training used 15M steps of experience and testing used 5M steps of experience. All experiments were run on an Intel(R) Core(TM) i9-9900X CPU @ 3.50GHz.
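For reference, the reward/discriminator network described above could be written as the following sketch; the [−10, 10] clamp is the one applied for f-IRL, and the learning rate and state dimension are placeholders.

import torch
import torch.nn as nn

class PointNavReward(nn.Module):
    # One hidden layer, 128 units, tanh activations, as described above.
    def __init__(self, state_dim=2, hidden=128, clamp_range=10.0):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.clamp_range = clamp_range

    def forward(self, state):
        # Output clamped to [-10, 10], as used for the f-IRL reward above.
        return torch.clamp(self.net(state), -self.clamp_range, self.clamp_range)

reward_fn = PointNavReward()
reward_opt = torch.optim.Adam(reward_fn.parameters(), lr=1e-3)  # placeholder learning rate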
D FURTHER REACH TASK DETAILS
D.1 CHOICE OF BASELINES
The “Exact MaxEntIRL" approach is excluded because it cannot be computed exactly for highdimensional state spaces. GCL is excluded because of its poor performance on the toy task relative to other methods. We also compare to the following imitation learning methods which learn only policies and no transferable reward:
• Behavioral Cloning (BC) Bain & Sammut (1995): Train a policy using supervised learning to match the actions in the expert dataset.
• Generative Adversarial Imitation Learning (GAIL) Ho & Ermon (2016): Trains a discriminator to distinguish expert from agent transitions and then use the discriminator confusion score as the reward. This reward is coupled with the current policy Finn et al. (2016a) (referred to as a “pseudo-reward") and therefore cannot train policies from scratch.
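To illustrate why GAIL's signal is a pseudo-reward rather than a transferable reward, a common form of the discriminator-confusion reward is sketched below; the architecture, dimensions, and the −log(1 − D) form are standard choices rather than details taken from this paper.

import torch
import torch.nn as nn

obs_dim, act_dim = 3, 7   # hypothetical dimensions
disc = nn.Sequential(nn.Linear(obs_dim + act_dim, 128), nn.Tanh(), nn.Linear(128, 1))

def gail_pseudo_reward(obs, act):
    # D(s, a) is trained to output ~1 on expert transitions and ~0 on agent transitions.
    d = torch.sigmoid(disc(torch.cat([obs, act], dim=-1)))
    # The reward is high when the agent fools the discriminator. Because D is trained
    # against the *current* policy's samples, this signal is meaningless for training a
    # new policy from scratch, i.e., it is not a transferable reward function.
    return -torch.log(1.0 - d + 1e-8)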
D.2 POLICY+NETWORK REPRESENTATION
All methods use a neural network to represent the policy and reward with 1 hidden layer, 128 hidden units, and tanh-activation functions between the layers. We use PPO as the policy optimization method for all methods. All methods in all tasks use demonstrations obtained from a policy trained with PPO using a manually engineered reward.
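A minimal sketch of such a policy network is given below; the diagonal-Gaussian action distribution is a common choice for PPO on continuous control, and the input/output dimensions are placeholders.

import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    # One hidden layer, 128 units, tanh activations, as described above.
    def __init__(self, obs_dim=17, act_dim=7, hidden=128):  # placeholder dimensions
        super().__init__()
        self.mean = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs):
        return torch.distributions.Normal(self.mean(obs), self.log_std.exp())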
D.3 HYPERPARAMETERS
The hyperparameters for all methods from the Reaching task are described in Table 11. The Adam optimizer Kingma & Ba (2014) was used for policy and reward optimization. All RL training used 1M steps of experience for the training and testing settings. The “Reward" and “Policy+Reward" transfer strategies trained policies with the same set of hyperparameters.
E TRIFINGER EXPERIMENT DETAILS
E.1 POLICY+NETWORK REPRESENTATION
All methods use a neural network to represent the policy and reward with 1 hidden layer, 128 hidden units, and tanh-activation functions between the layers. We use PPO as the policy optimization method for all methods. All methods in all tasks use demonstrations obtained from a policy trained with PPO using a manually engineered reward.
E.2 HYPERPARAMETERS
The hyperparameters for all methods for the Trifinger reaching task are described in Table 12. The Adam optimizer Kingma & Ba (2014) was used for policy and reward optimization. All RL training used 500k steps of experience for the reward training phase and 100k steps of experience for policy optimization in test settings. | 1. What is the focus and contribution of the paper on inverse reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its novelty and simplicity?
3. What are the weaknesses of the paper, especially regarding the main toy experiment?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Do you have any concerns or questions about the paper's methodology or results? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper introduces a new form of IRL in which the learned reward is not based on the state occupancy, but is instead computed in order to lead a policy-gradient learner to imitate the demonstrations. This is done with a meta-learning approach, in which the inner loop updates the policy (with a policy-gradient objective) and the outer loop updates the reward (with a BC objective). They observe that such a reward, which is not based on occupancy, is better at generalising behaviours and is more robust to changes in the initial distribution.
Strengths And Weaknesses
The method is novel, easy to understand and implement, and the results look impressive.
I however have a concern regarding the main toy experiment as shown in Figures 1 and 2. Even if the reward is only high on the states that are occupied by the demonstration (resulting in an X-shape), the behaviour learned with such a reward should still reach the highest-rewarding states.
For example, a value function associated with such a reward would diffuse values outside of the X cross. In this experiment there is no value function, but they use PPO, which directs policy updates with generalized advantage estimators that behave exactly like a value function (it is an unbiased estimator). Therefore the fact that there is no reward outside the cross should not significantly affect the optimal behaviour.
I would understand that the reward obtained by BC-IRL is much more dense, and so accelerates the learning. But a well-implemented AIRL should not fail on this task (at least with enough learning steps). Does AIRL end up learning the correct behaviour after (much) longer training?
Clarity, Quality, Novelty And Reproducibility
The paper is well written, the method is simple to understand and well described. I have never seen such an approach for IRL before. |
ICLR | Title
BC-IRL: Learning Generalizable Reward Functions from Demonstrations
Abstract
How well do reward functions learned with inverse reinforcement learning (IRL) generalize? We illustrate that state-of-the-art IRL algorithms, which maximize a maximum-entropy objective, learn rewards that overfit to the demonstrations. Such rewards struggle to provide meaningful rewards for states not covered by the demonstrations, a major detriment when using the reward to learn policies in new situations. We introduce BC-IRL, a new inverse reinforcement learning method that learns reward functions that generalize better when compared to maximum-entropy IRL approaches. In contrast to the MaxEnt framework, which learns to maximize rewards around demonstrations, BC-IRL updates reward parameters such that the policy trained with the new reward matches the expert demonstrations better. We show that BC-IRL learns rewards that generalize better on an illustrative simple task and two continuous robotic control tasks, achieving over twice the success rate of baselines in challenging generalization settings.
1 INTRODUCTION
Reinforcement learning has demonstrated success on a broad range of tasks from navigation Wijmans et al. (2019), locomotion Kumar et al. (2021); Iscen et al. (2018), and manipulation Kalashnikov et al. (2018). However, this success depends on specifying an accurate and informative reward signal to guide the agent towards solving the task. For instance, imagine designing a reward function for a robot window cleaning task. The reward should tell the robot how to grasp the cleaning rag, how to use the rag to clean the window, and to wipe hard enough to remove dirt, but not hard enough to break the window. Manually shaping such reward functions is difficult, non-intuitive, and time-consuming. Furthermore, the need for an expert to design a reward function for every new skill limits the ability of agents to autonomously acquire new skills. Inverse reinforcement learning (IRL) (Abbeel & Ng, 2004; Ziebart et al., 2008; Osa et al., 2018) is one way of addressing the challenge of acquiring rewards by learning reward functions from demonstrations and then using the learned rewards to learn policies via reinforcement learning. When compared to direct imitation learning, which learns policies from demonstrations directly, potential benefits of IRL are at least two-fold: first, IRL does not suffer from the compounding error problem that is often observed with policies directly learned from demonstrations (Ross et al., 2011; Barde et al., 2020); and second, a reward function could be a more abstract and parsimonious description of
the observed task that generalizes better to unseen task settings (Ng et al., 2000; Osa et al., 2018). This second potential benefit is appealing as it allows the agent to learn a reward function to train policies not only for the demonstrated task setting (e.g. specific start-goal configurations in a reaching task) but also for unseen settings (e.g. unseen start-goal configurations), autonomously without additional expert supervision.
However, thus far the generalization properties of reward functions learned via IRL are poorly understood. Here, we study the generalization of learned reward functions and find that prior IRL methods fail to learn generalizable rewards and instead overfit to the demonstrations. Figure 1 demonstrates this on a task where a point mass agent must navigate in a 2D space to a goal location at the center. An important reward characteristic for this task is that an agent, located anywhere in the state-space, should receive increasing rewards as it gets closer to the goal. Most recent prior work Fu et al. (2017); Ni et al. (2020); Finn et al. (2016c) developed IRL algorithms that optimize the maximum entropy objective (Ziebart et al., 2008) (Figure 1b), which fails to capture goal distance in the reward. Instead, the MaxEnt objective leads to rewards that separate non-expert from expert behavior by maximizing reward values along the expert demonstration. While useful for imitating the experts, the MaxEnt objective prevents the IRL algorithms from learning to assign meaningful rewards to other parts of the state space, thus limiting generalization of the reward function.
As a remedy to the reward generalization challenge in the maximum entropy IRL framework, we propose a new IRL framework called Behavioral Cloning Inverse Reinforcement Learning (BC-IRL). In contrast to the MaxEnt framework, which learns to maximize rewards around demonstrations, the BC-IRL framework updates reward parameters such that the policy trained with the new reward matches the expert demonstrations better. This is akin to the model-agnostic meta-learning (Finn et al., 2017) and loss learning (Bechtle et al., 2021) frameworks where model or loss function parameters are learned such that the downstream task performs well when utilizing the meta-learned parameters. By using gradient-based bi-level optimization Grefenstette et al. (2019), BC-IRL can optimize the behavior cloning loss to learn the reward, rather than a separation objective like the maximum entropy objective. Importantly, to learn the reward, BC-IRL differentiates through the reinforcement learning policy optimization, which incorporates exploration and requires the reward to provide a meaningful reward throughout the state space to guide the policy to better match the expert. We find BC-IRL learns more generalizable rewards (Figure 1c), and achieves over twice the success rate of baseline IRL methods in challenging generalization settings.
Our contributions are as follows: 1) The general BC-IRL framework for learning more generalizable rewards from demonstrations, and a specific BC-IRL-PPO variant that uses PPO as the RL algorithm. 2) A quantitative and qualitative analysis of reward functions learned with BC-IRL and Maximum-Entropy IRL variants on a simple task for easy analysis. 3) An evaluation of our novel BC-IRL algorithm on two continuous control tasks against state-of-the-art IRL and IL methods. Our method learns rewards that transfer better to novel task settings.
2 BACKGROUND AND RELATED WORK
We begin by reviewing Inverse Reinforcement Learning through the lens of bi-level optimization. We assume access to a rewardless Markov decision process (MDP) defined through the tuple M = (S,A,P, ρ0, γ,H) for state-space S, action space A, transition distribution P(s′|s, a), initial state distribution ρ0, discount factor γ, and episode horizon H. We also have access to a set of expert demonstration trajectories De = {τe_i}_{i=1}^N where each trajectory is a sequence of (state, action) tuples.
IRL learns a parameterized reward function Rψ(τi) which assigns a trajectory a scalar reward. Given the reward, a policy πθ(a|s) is learned which maps from states to a distribution over actions. The goal of IRL is to produce a reward Rψ , such that a policy trained to maximize the sum of (discounted) rewards under this reward function matches the behavior of the expert. This is captured through the following bi-level optimization problem:
minψ LIRL(Rψ; πθ)    (outer obj.) (1a)
s.t. θ ∈ argmaxθ g(Rψ, θ)    (inner obj.) (1b)
where LIRL(Rψ;πθ) denotes the IRL loss and measures the performance of the learned reward Rψ and policy πθ; g(Rψ, θ) is the reinforcement learning objective used to optimize policy parameters θ. Algorithms for this bi-level optimization consist of an outer loop ((1a)) that optimizes the reward and an inner loop ((1b)) that optimizes the policy given the current reward.
Maximum Entropy IRL: Early work on IRL learns rewards by separating non-expert from expert trajectories (Ng et al., 2000; Abbeel & Ng, 2004; Abbeel et al., 2010). A primary challenge of these early IRL algorithms was the ambiguous nature of learning reward functions from demonstrations: many possible policies exist for a given demonstration, and thus many possible rewards exist. The Maximum Entropy (MaxEnt) IRL framework (Ziebart et al., 2008) seeks to address this ambiguity, by learning a reward (and policy) that is as non-committal (uncertain) as possible, while still explaining the demonstrations. More concretely, given reward parameters ψ, MaxEnt IRL optimizes the log probability of the expert trajectories τe from demonstration dataset De through the following loss,
LMaxEnt-IRL(Rψ) = −Eτe∼De [log p(τe|ψ)] = −Eτe∼De [log (exp(Rψ(τe)) / Z(ψ))] = −Eτe∼De [Rψ(τe)] + log Z(ψ).
A key challenge of MaxEnt IRL is estimating the partition function Z(ψ) = ∫ exp(Rψ(τ)) dτ. Ziebart et al. (2008) approximate Z in small discrete state spaces with dynamic programming. MaxEnt from the Bi-Level perspective: However, computing the partition function becomes intractable for high-dimensional and continuous state spaces. Thus algorithms approximate Z using samples from a policy optimized via the current reward. This results in the partition function estimate being a function of the current policy, log Ẑ(ψ;πθ). As a result, MaxEnt approaches end up following the bi-level optimization template by iterating between: 1) updating reward function parameters given current policy samples via the outer objective (1a); and 2) optimizing the policy parameters with the current reward parameters via an inner policy optimization objective and algorithm (1b). For instance, model-based IRL methods such as Wulfmeier et al. (2017); Levine & Koltun (2012); Englert et al. (2017) use model-based RL (or optimal control) methods to optimize a policy (or trajectory), while model-free IRL methods such as Kalakrishnan et al. (2013); Boularias et al. (2011); Finn et al. (2016b;a) learn policies via model-free RL in the inner loop. All of these methods use policy rollouts to approximate either the partition function of the maximum-entropy IRL objective or its gradient with respect to reward parameters in various ways (outer loop). For instance, Finn et al. (2016b) learn a stochastic policy q(τ) and sample from it to estimate Z(ψ) ≈ (1/M) Σ_{τi∼q(τ)} exp(Rψ(τi)) / q(τi) with M samples from q(τ). Fu et al. (2017) with adversarial IRL (AIRL) follow this idea and view the problem as an adversarial training process between policy πθ(a|s) and discriminator D(s) = exp(Rψ(s)) / (exp(Rψ(s)) + πθ(a|s)). Ni et al. (2020) analytically compute the gradient of the f-divergence
between the expert state density and the MaxEnt state distribution, circumventing the need to directly compute the partition function. Meta-Learning and IRL: Like some prior work (Xu et al., 2019; Yu et al., 2019; Wang et al., 2021; Gleave & Habryka, 2018; Seyed Ghasemipour et al., 2019), BC-IRL combines meta-learning and inverse reinforcement learning. However, these works focus on fast adaptation of reward functions to new tasks for MaxEnt IRL through meta-learning. These works require demonstrations of the new task to adapt the reward function. BC-IRL algorithm is a fundamentally new way to learn reward functions and does not require demonstrations for new test settings. Most related to our work is Das et al. (2020), which also uses gradient-based bi-level optimization to match the expert. However, this approach requires a pre-trained dynamics model. Our work generalizes this idea since BC-IRL can optimize general policies, allowing any objective that is a function of the policy and any differentiable RL algorithm. We show our method, without an accurate dynamics model, outperforms Das et al. (2020) and scales to more complex tasks where Das et al. (2020) fails to learn. Generalization in IRL: Some prior works have explored how learned rewards can generalize to training policies in new situations. For instance, Fu et al. (2017) explored how rewards can generalize to training policies under changing dynamics. However, most prior work focuses on improving policy generalization to unseen task settings by addressing challenges introduced by the adversarial training objective of GAIL (Xu & Denil, 2019; Zolna et al., 2020; 2019; Lee et al., 2021; Barde et al., 2020; Jaegle et al., 2021; Dadashi et al., 2020). Finally, in contrast to most related work on generalization, our work focuses on analyzing and improving reward function transfer to new task settings.
3 LEARNING REWARDS VIA BEHAVIORAL CLONING INVERSE REINFORCEMENT LEARNING (BC-IRL)
We now present our algorithm for learning reward functions via behavioral cloning inverse reinforcement learning. We start by contrasting the maximum entropy and imitation loss objectives for
inverse reinforcement learning in Section 3.1. We then introduce a general formulation for BC-IRL in Section 3.2, and present an algorithmic instantiation that optimizes a BC objective to update the reward parameters via gradient-based bi-level optimization with a model-free RL algorithm in the inner loop in Section 3.3.
3.1 OUTER OBJECTIVES: MAX-ENT VS BEHAVIOR CLONING
In this work, we study an alternative IRL objective from the maximum entropy objective. While this maximum entropy IRL objective has led to impressive results, it is unclear how well this objective is suited for learning reward functions that generalize to new task settings, such as new start and goal distributions. Intuitively, assigning a high reward to demonstrated states (without task-specific hand-designed feature engineering) makes sense when you want to learn a reward function that can recover exactly the expert behavior, but it leads to reward landscapes that do not necessarily capture the essence of the task (e.g. to reach a goal, see Figure 1b). Instead of specifying an IRL objective that is directly a function of reward parameters (like maximum entropy), we aim to measure the reward function’s performance through the policy that results from optimizing the reward. With such an objective, we can optimize reward parameters for what we care about: for the resulting policy to match the behavior of the expert. The behavioral cloning (BC) loss measures how well the policy and expert actions match, defined for continuous actions as E(st,at)∼τe [(πθ(st) − at)²], where τe is an expert demonstration trajectory. Policy parameters θ are a result of using the current reward parameters ψ, which we can make explicit by making θ a function of ψ in the objective: LBC-IRL = E(st,at)∼τe [(πθ(ψ)(st) − at)²]. The IRL objective is now formulated in terms of the policy rollout “matching" the expert demonstration through the BC loss. We use the chain rule to decompose the gradient of LBC-IRL with respect to reward parameters ψ. We also expand how the policy parameters θ(ψ) are updated via a REINFORCE update with learning rate α to optimize the current reward Rψ (but any differentiable policy update applies):

∂/∂ψ LBC-IRL = ∂/∂ψ [ E(st,at)∼τe [ (πθ(ψ)(st) − at)² ] ] = E(st,at)∼τe [ 2 (πθ(ψ)(st) − at) ] ∂πθ(ψ)/∂ψ,
where θ(ψ) = θold + α E(st,at)∼πθold [ ( Σ_{k=t+1}^{T} γ^{k−t−1} Rψ(sk) ) ∇ ln πθold(at|st) ].   (2)
Computing the gradient for the reward update in Equation (2) includes samples from π collected in the reinforcement learning (RL) inner loop. This means the reward is trained on diverse states beyond the expert demonstrations through data collected via exploration in RL. As the agent explores during training, BC-IRL must provide a meaningful reward signal throughout the state-space to guide the policy to better match the expert. Note that this is a fundamentally different reward update rule compared to current state-of-the-art methods that maximize a maximum entropy objective. We show in our experiments that this results in twice as high success rates compared to state-of-the-art MaxEnt IRL baselines in challenging generalization settings, demonstrating that BC-IRL learns more generalizable rewards that provide meaningful rewards beyond the expert demonstrations. The BC loss updates only the reward, as opposed to updating the policy as typical BC for imitation learning does Bain & Sammut (1995). BC-IRL is an IRL method that produces a reward, unlike regular BC that learns only a policy. Since BC-IRL uses RL, not BC, to update the policy, it avoids the pitfalls of BC for policy optimization such as compounding errors. Our experiments show that policies trained with rewards from BC-IRL generalize over twice as well to new settings as those trained with BC. In the following section, we show how to optimize this objective via bi-level optimization.
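The update in Equation (2) can be written almost literally in an autodiff framework. The sketch below uses a toy Gaussian policy and a single REINFORCE-style inner step expressed as an explicit function of the reward parameters, then backpropagates the BC loss through that step; the architectures, step sizes, and the γ = 1 reward-to-go are our simplifying assumptions, not the paper's exact settings.

import torch
import torch.nn as nn

policy = nn.Linear(2, 2)                                   # mean of a toy Gaussian policy
reward = nn.Sequential(nn.Linear(2, 128), nn.Tanh(), nn.Linear(128, 1))
reward_opt = torch.optim.Adam(reward.parameters(), lr=1e-3)
alpha, sigma = 0.1, 0.5                                    # placeholder inner step size and action noise

def bc_irl_step(rollout_s, rollout_a, demo_s, demo_a):
    # Inner loop: one REINFORCE step theta(psi) = theta_old + alpha * E[G_t * grad log pi],
    # where the return G_t is computed from the *learned* reward, keeping its graph.
    dist = torch.distributions.Normal(policy(rollout_s), sigma)
    log_probs = dist.log_prob(rollout_a).sum(-1)
    returns = reward(rollout_s).squeeze(-1).flip(0).cumsum(0).flip(0)   # reward-to-go, gamma = 1
    pg_obj = (returns * log_probs).sum()
    grads = torch.autograd.grad(pg_obj, list(policy.parameters()), create_graph=True)
    new_w, new_b = [p + alpha * g for p, g in zip(policy.parameters(), grads)]

    # Outer loop: BC loss of the *updated* policy; the gradient flows back through the
    # REINFORCE step into the reward parameters psi.
    bc_loss = ((demo_s @ new_w.t() + new_b - demo_a) ** 2).mean()
    reward_opt.zero_grad()
    bc_loss.backward()
    reward_opt.step()
    return bc_loss.item()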
3.2 BC-IRL
We formulate the IRL problem as a gradient-based bi-level optimization problem, where the outer objective is optimized by differentiating through the optimization of the inner objective. We first describe how the policy is updated with a fixed reward, then how the reward is updated for the policy to better match the expert. Inner loop (policy optimization): The inner loop optimizes policy parameters θ given current reward function Rψ. The inner loop takes K gradient steps to optimize the policy given the current reward. Since the reward update will differentiate through this policy update, we require the policy update to be differentiable with respect to the reward function parameters. Thus, any reinforcement learning algorithm which is differentiable with respect to the reward function parameters can be plugged in here, which is the case for many policy gradient and model-based methods. However, this does not
include value-based methods such as DDPG Lillicrap et al. (2015) or SAC Haarnoja et al. (2018) that directly optimize value estimates since the reward function is not directly used in the policy update.
Outer loop (reward optimization): The outer loop optimization updates the reward parameters ψ via gradient descent. More concretely: after the inner loop, we compute the gradient of the outer loop objective ∇ψ LBC-IRL with respect to the reward parameters ψ by propagating through the inner loop. Intuitively, the new policy is a function of reward parameters since the old policy was updated to better maximize the reward. The gradient update on ψ tries to adjust reward function parameters such that the policy trained with this reward produces trajectories that match the demonstrations more closely. We use Grefenstette et al. (2019) for this higher-order optimization.

Algorithm 1 BC-IRL (general framework)
1: Initial reward Rψ, policy πθ
2: Policy updater POLICY_OPT(R, π)
3: Expert demonstrations De
4: for each epoch do
5:   Policy update:
6:   θ′ ← POLICY_OPT(Rψ, πθ)
7:   Sample demo batch τe ∼ De
8:   Compute IRL loss:
9:   LBC-IRL = E(st,at)∼τe (πθ′(st) − at)²
10:  Compute gradient of IRL loss with respect to the reward:
11:  ∇ψ LBC-IRL = (∂LBC-IRL/∂θ′) (∂POLICY_OPT(Rψ, πθ)/∂ψ)
12:  ψ ← ψ − ∇ψ LBC-IRL
13: end for
BC-IRL is summarized in Algorithm 1. Line 6 describes the inner loop update, where we update the policy πθ to maximize the current reward Rψ. Lines 7-9 compute the BC loss between the updated policy πθ′ and expert actions sampled from the expert dataset De. The BC loss is then used in the outer loop to perform a gradient step on the reward parameters in lines 10-12, where the gradient computation requires differentiating through the policy update in line 6.
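One plausible way to implement lines 4-12 of Algorithm 1 is with the higher library of Grefenstette et al. (2019), as sketched below. Here POLICY_OPT is abstracted to a single differentiable gradient step on a generic policy objective; collect_rollout, sample_demo_batch, and policy_objective are assumed helper functions, and all dimensions and learning rates are placeholders.

import torch
import torch.nn as nn
import higher

policy = nn.Linear(2, 2)
reward = nn.Sequential(nn.Linear(2, 128), nn.Tanh(), nn.Linear(128, 1))
policy_inner_opt = torch.optim.SGD(policy.parameters(), lr=0.1)
reward_opt = torch.optim.Adam(reward.parameters(), lr=1e-3)

for epoch in range(100):                                        # placeholder number of epochs
    rollout_s, rollout_a = collect_rollout(policy)              # assumed helper
    demo_s, demo_a = sample_demo_batch()                        # assumed helper
    with higher.innerloop_ctx(policy, policy_inner_opt) as (fpolicy, diffopt):
        # Line 6: one differentiable policy step on the current learned reward.
        diffopt.step(-policy_objective(fpolicy, reward, rollout_s, rollout_a))  # assumed helper
        # Lines 7-12: BC loss of the updated policy, backpropagated into the reward.
        bc_loss = ((fpolicy(demo_s) - demo_a) ** 2).mean()
        reward_opt.zero_grad()
        bc_loss.backward()
        reward_opt.step()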
3.3 BC-IRL-PPO
We now instantiate a specific version of the BC-IRL framework that uses proximal policy optimization (PPO) Schulman et al. (2017) to optimize the policy in the inner loop. This specific version, called BC-IRL-PPO, is summarized in Algorithm 2.
Algorithm 2 BC-IRL-PPO
1: Initial reward Rψ, policy πθ, value function Vν
2: Expert demonstrations De
3: for each epoch do
4:   for k = 1 → K do
5:     Run policy πθ in environment for T timesteps
6:     Compute rewards r̂ψt for rollout with current Rψ
7:     Compute advantages Âψ using r̂ψ and Vν
8:     Compute LPPO using Âψ
9:     Update πθ with ∇θ LPPO
10:  end for
11:  Sample demo batch τe ∼ De
12:  Compute LBC-IRL = E(st,at)∼τe (πθ(st) − at)²
13:  Update reward Rψ with ∇ψ LBC-IRL
14: end for

BC-IRL-PPO learns a state-only parameterized reward function Rψ(s), which assigns a state s ∈ S a scalar reward. The state-only reward has been shown to lead to rewards that generalize better Fu et al. (2017). BC-IRL-PPO begins by collecting a batch of rollouts in the environment from the current policy (line 5 of Algorithm 2). For each state s in this batch we evaluate the learned reward function Rψ(s) (line 6). From this sequence of rewards, we compute the advantage estimates Ât for each state (line 7). As is typical in PPO, we also utilize a learned value function Vν(st) to predict the value of the starting and ending state for partial episodes in the rollouts. This learned value function Vν is trained to predict the sum of future discounted rewards for the current reward function Rψ and policy πθ (part of LPPO in line 8). Using the advantages, we then compute the PPO update (line 9 of Algorithm 2) using the standard PPO loss in equation 8 of Schulman et al. (2017). Note the advantages are a function of the reward function parameters used to compute the rewards, so PPO is differentiable with respect to the reward function. Next, in the outer loop update, we update the reward parameters, by sampling a batch of demonstration transitions (line 11), computing the behavior cloning IRL objective LBC-IRL (line 12), and updating the reward parameters ψ via gradient descent on LBC-IRL (line 13). Finally, in this work, we perform one policy optimization step (K = 1) per reward function update. Furthermore, rather than re-train a policy from scratch for every reward function iteration, we initialize each inner loop from the previous πθ. This initialization is important in more complex domains where K would otherwise have to be large to acquire a good policy from scratch.
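The key implementation detail in Algorithm 2 is that the advantages computed on line 7 must stay differentiable with respect to the reward parameters ψ. A minimal GAE-style computation that keeps this graph is sketched below; the values of γ and λ and detaching the value baseline are our assumptions rather than the paper's exact choices.

import torch

def differentiable_gae(reward_fn, value_fn, states, gamma=0.99, lam=0.95):
    # states: (T+1, state_dim) rollout states, including the bootstrap state.
    rewards = reward_fn(states[:-1]).squeeze(-1)        # r_hat_t = R_psi(s_t); graph to psi kept
    values = value_fn(states).squeeze(-1).detach()      # V_nu treated as a fixed baseline here
    deltas = rewards + gamma * values[1:] - values[:-1]
    advs, running = [], torch.zeros(())
    for t in reversed(range(deltas.shape[0])):
        running = deltas[t] + gamma * lam * running
        advs.append(running)
    # The advantages are a function of R_psi, so the PPO loss built from them is too.
    return torch.stack(advs[::-1])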
4 ILLUSTRATION & QUALITATIVE ANALYSIS OF LEARNED REWARDS
We first analyze the rewards learned by different IRL methods in a 2D point mass navigation task. The purpose of this analysis is to test our hypothesis that our method learns more generalizable rewards compared to maximum entropy baselines in simple low-dimensional settings amenable to intuitive visualizations. Specifically, we compare BC-IRL-PPO to the following baselines. Exact MaxEntIRL (MaxEnt) Ziebart et al. (2008): The exact MaxEntIRL method where the partition function is exactly computed by discretizing the state space. Guided Cost Learning (GCL) Finn et al. (2016b): Uses the maximum-entropy objective to update the reward. The partition function is approximated via adaptive sampling. Adversarial IRL (AIRL) Fu et al. (2017): An IRL method that uses a learned discriminator to distinguish expert and agent states. As described in Fu et al. (2017) we also use a shaping network h during reward training, but only visualize and transfer the reward approximator g. f-IRL Ni et al. (2021): Another MaxEntIRL based method, f-IRL computes the analytic gradient of the f-divergence between the agent and expert state distributions. We use the JS divergence version. Our method does not require demonstrations at test time, instead we transfer our learned rewards zero-shot. Thus we forego comparisons to other meta-learning methods, such as Xu et al. (2019), which require test time demonstrations. While a direct comparison with Das et al. (2020) is not possible because their method assumes access to a pre-trained dynamics model, we conduct a separate study comparing their method with an oracle dynamics model against BC-IRL in Appendix A.5. All baselines use PPO Schulman et al. (2017) for policy optimization as commonly done in prior work Orsini et al. (2021). All methods learn a state-dependent reward rψ(s), and a policy π(s), both parametrized as neural networks. Further details are described in Appendix C. The 2D point navigation tasks consist of a point agent policy that outputs a desired change in (x, y) position (velocity) (∆x,∆y) at every time step. The task has a trajectory length of T = 5 time steps with 4 demonstrations. Figure 2a visualizes the expert demonstrations where darker points are earlier time steps. The agent starting state distribution is centered around the starting state of each demonstration. Figure 2b,c visualize the rewards learned by BC-IRL and the AIRL baseline. Lighter regions indicate higher rewards. In Figure 2b, BC-IRL learns a reward that looks like a quadratic bowl centered at the origin, which models the distance to the goal across the entire state space. AIRL, the maximum entropy baseline, visualized in Figure 2c, learns a reward function where high rewards are placed on the demonstrations and low rewards elsewhere. Other baselines are visualized in Appendix Figure 4. To analyze the generalization capabilities of the learned rewards we use them to train policies on a new starting state distribution (visualized in Appendix Figure 9). Concretely, a newly initialized policy is trained from scratch to maximize the learned reward from the testing start state distribution. The policy is trained with 5 million environment steps, which is the same number of steps as for learning the reward. The testing starting state distribution has no overlap with the training start state distribution. Policy optimization at test time is also done with PPO. The Figure 2d,e display trajectories from the trained policies where darker points again correspond to earlier time steps. 
This qualitative evaluation shows that BC-IRL learns a meaningful reward for states not covered by the demonstrations. Thus at test time agent trajectories are guided towards the goal with the terminal states (lightest points) close to the goal. The X-shaped rewards learned by the baselines do not provide meaningful rewards in the testing setting as they assign uniformly low rewards to states not covered by the demonstration. This provides poor reward shaping which prevents the agent from reaching the goal within the 5M training interactions with the environment. This results in agent trajectories that do not end close to the goal by the end of training.
Next, we report quantitative results in Table 1. We evaluate the performance of the policy trained at test time by reporting the distance from the policy’s final trajectory state sT to the goal g: ∥sT − g∥₂². We report the final train performance of the algorithm (“Train"), along with the performance of the policy trained from scratch with the learned reward in the train distribution “Eval (Train)" and testing distribution “Eval (Test)". These results confirm that BC-IRL learns more generalizable rewards than baselines. Specifically, BC-IRL achieves a lower distance on the testing starting state distribution at 0.04, compared to 0.53, 1.6, and 0.36 for AIRL, GCL, and MaxEnt respectively. Surprisingly, BC-IRL even performs better than exact MaxEnt, which uses privileged information about the state space to estimate the partition function. This fits with our hypothesis that our method learns more generalizable rewards than MaxEnt, even when the MaxEnt objective is exactly computed. We repeat this analysis for a version of the task with an obstacle blocking the path to the goal in Appendix A.2 and reach the same findings even when BC-IRL must learn an asymmetric reward function. We also compare learned rewards to manually defined rewards in Appendix A.3. Despite baselines learning rewards that do not generalize beyond the demonstrations, with enough environment interactions, policies trained under these rewards will eventually reach the high-reward regions along the expert demonstrations. Since all demonstrations reach the goal in the point mass task, the X-shaped rewards the baselines learn have high reward at the center. Despite the X-shaped reward providing little reward shaping off the X, with enough environment interactions, the agent eventually discovers the high-reward point at the goal. After training AIRL for 15M steps, 3x the number of steps for reward learning and the experiments in Table 1 and Figure 2, the policy eventually reaches 0.08 ± 0.01 distance to the goal. In the same setting, BC-IRL achieves 0.04 ± 0.01 distance to the goal in under 5M steps. The additional performance gap is due to BC-IRL learning a reward with a maximum reward value closer to the center (0.02 to the center) compared to AIRL (0.04 to the center).
5 EXPERIMENTS
In our experiments, we aim to answer the following questions: (1) Can BC-IRL learn reward functions that can train policies from scratch? (2) Does BC-IRL learn rewards that can generalize to unseen states and goals better than IRL baselines in complex environments? (3) Can learned rewards transfer better than policies learned directly with imitation learning? We show the first in Section 5.1 and the next two in Section 5.2. We evaluate on two continuous control tasks: 1) the Fetch reaching task Szot et al. (2021) (Fig 3a), and 2) the TriFinger reaching task Ahmed et al. (2021) (Fig 3b).
5.1 REWARD TRAINING PHASE: LEARNING REWARDS TO MATCH THE EXPERT
Experimental Setup and Evaluation Metrics In the Fetch reaching task, setup in the Habitat 2.0 simulator Szot et al. (2021), the robot must move its end-effector to a 3D goal location g which changes between episodes. The action space of the agent is the desired velocities for each of the 7 joints on the robot arm. The robot succeeds if the end-effector is within 0.1m of the target position by the 20 time step maximum episode length. During reward learning, the goal g is sampled from a 0.2 meter length unit cube in front of the robot, g ∼ U([0]3, [0.2]3). We provide 100 demonstrations.
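For reference, the training goal sampling and the success criterion described above amount to roughly the following; the function names are ours, not from the released environment code.

import numpy as np

def sample_train_goal(rng, gmax=0.2):
    # Training goals: g ~ U([0, 0, 0], [gmax, gmax, gmax]), in meters.
    return rng.uniform(low=0.0, high=gmax, size=3)

def reach_success(ee_pos, goal, tol=0.1):
    # Success if the end-effector is within 0.1 m of the goal by the 20-step horizon.
    return bool(np.linalg.norm(ee_pos - goal) < tol)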
Table 2: Reward learning results in the training distribution. Goal Dist is the distance to the demonstrated goal, (g − gdemo)², in meters.
                                 BC-IRL-PPO         AIRL
Fetch Reach (Success) ↑          1.00 ± 0.00        0.96 ± 0.00
Trifinger Reach (Goal Dist) ↓    0.002 ± 0.0015     0.007 ± 0.0017
Evaluation and Baselines We evaluate BC-IRL-PPO by how well its learned reward can train new policies from scratch in the same start state and goal distribution as the demonstrations. Given the point mass results in Section 4, we compare BC-IRL-PPO to AIRL, the best performing baseline for reward learning. More details on baseline choice, policy and reward representation, and hyperparameters are described in Appendix D.
Results and Analysis As Table 2 confirms, our method and baselines are able to imitate the demonstrations when policies are evaluated in the same task setting as the expert. All methods are able to achieve a near 100% success rate and low distance to goal. Methods also learn with similar sample efficiency as shown in the learning curves in Figure 3d. These high-success rates indicate BC-IRL-PPO and AIRL learn rewards that capture the expert behavior and train policies to mimic the expert. When training policies in the same state/goal distribution as the expert, rewards from BC-IRL-PPO follow any constraints followed by the experts, just like the IRL baselines.
5.2 TEST PHASE: EVALUATING REWARD AND POLICY GENERALIZATION
In this section, we evaluate how learned rewards and policies can generalize to new task settings with increased starting state and goal sampling noise. We evaluate the generalization ability of rewards by evaluating how well they can train new policies to reach the goal in new start and goal distributions not seen in the demonstrations. This evaluation captures the reality that it is infeasible to collect demonstrations for every possible start/goal configuration. We thus aim to learn rewards from demonstrations that can generalize beyond the start/goal configurations present in those demonstrations. We quantify reward generalization ability by whether the reward can train a policy to perform the task in the new start/goal configurations. For the Fetch Reach task, we evaluate on three wider test goal sampling distributions g ∼ U([0]3, [gmax]3): Easy (gmax = 0.25), Medium (gmax = 0.4), and Hard (gmax = 0.55), all visualized in Figure 3c. Similarly, we evaluate on new state regions, which increase the starting and goal initial state distributions but exclude the regions from training, exposing the reward to only unseen initial states and goals. In Trifinger, we sample start configurations from around the start joint position in the demonstrations, with increasingly wider distributions (s0 ∼ N (sdemo0 , δ), with δ = 0.01, 0.03, 0.05). We evaluate reward function performance by how well the reward function can train new policies from scratch. However, now the reward must generalize to inferring rewards in the new start state and goal distributions. We additionally compare to two imitation learning baselines: Generative Adversarial Imitation Learning (GAIL) Ho & Ermon (2016) and Behavior Cloning (BC). We compare different methods of transferring the learned reward and policy to the test setting: 1) Reward: Transfer only the reward from the above training phase and train a newly initialized policy in the test setting.
2) Policy: Transfer only the policy from the above training phase and immediately evaluate the policy without further training in the test setting. This compares transferring learned rewards and transferring learned policies. We use this transfer strategy to compare against direct imitation learning methods. 3) Reward+Policy: Transfer the reward and policy and then fine-tune the policy using the learned reward in the test setting. Results for this setting are in Appendix B.2.
Results and Analysis The results in Table 3 show BC-IRL-PPO learns rewards that generalize better than IRL baselines to new settings. In the hardest generalization setting, BC-IRL-PPO achieves over twice the success rate of AIRL. AIRL struggles to transfer its learned reward to harder generalization settings, with performance decreasing as the goal sampling distribution becomes larger and has less overlap with the training goal distribution. In the “Hard" start region generalization setting, the performance of AIRL degrades to 34% success rate. On the other hand, BC-IRL-PPO learns a generalizable reward and performs well even in the “Hard" generalization strategy, achieving 76% success. This trend is true both for generalization to new start state distributions and for new start state regions. The results for Trifinger Reach in Table 4 support these findings with rewards learned via BC-IRL-PPO generalizing better to training policies from scratch in all three test distributions. All training curves for training policies from scratch with learned rewards are in Appendix B.1.
Furthermore, the results in Table 3 also demonstrate that transferring rewards “(Reward)" is more effective for generalization than transferring policies “(Policy)". Transferring the reward to train new policies typically outperforms transferring only the policy for all IRL approaches. Additionally, training from scratch with rewards learned via IRL outperforms non-reward learning imitation learning methods that only permit transferring
the policy zero-shot. The policies learned by GAIL and BC generalize worse than training new policies from scratch with the reward learned by BC-IRL-PPO, with BC and GAIL achieving 35% and 37% success rates in the “Hard" generalization setting while our method achieves 76% success. The superior performance of BC-IRL-PPO over BC highlights the important differences between the two methods with our method learning a reward and training the policy with PPO on the learned reward. In Appendix B.2, we also show the “Policy+Reward" transfer setting and demonstrate BC-IRL-PPO also outperforms baselines in this setting. In Appendix B we also analyze performance with the number of demos, different inner and outer loop learning rates, and number of inner loop updates.
6 DISCUSSION AND FUTURE WORK
We propose a new IRL framework for learning generalizable rewards with bi-level gradient-based optimization. By meta-learning rewards, our framework can optimize alternative outer-level objectives instead of the maximum entropy objective commonly used in prior work. We propose BC-IRL-PPO an instantiation of our new framework, which uses PPO for policy optimization in the inner loop and an action matching objective in the outer loop. We demonstrate that BC-IRL-PPO learns rewards that generalize better than baselines. Potential negative social impacts of this work are that learning reward functions from data could result in less interpretable rewards, leading to more opaque behaviors from agents that optimize the learned reward. Future work will explore alternative instantiations of the BC-IRL framework, such as utilizing sample efficient off-policy methods like SAC or model-based methods in the inner loop. Model-based methods are especially appealing because a single dynamics model could be shared between tasks and learning reward functions for new tasks could be achieved purely using the model. Finally, other outer loop objectives rather than action matching are also possible.
7 ACKNOWLEDGMENTS
The Georgia Tech effort was supported in part by NSF, ONR YIP, and ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
A FURTHER POINT MASS NAVIGATION RESULTS
A.1 QUALITATIVE RESULTS FOR ALL METHODS IN POINT MASS NAVIGATION
Visualizations of the reward functions from all methods for the regular pointmass task are displayed in Figure 4.
A.2 OBSTACLE POINT MASS NAVIGATION
The obstacle point mass navigation task incorporates asymmetric dynamics with an off-centered obstacle. This environment is the same as the point mass navigation task from Section 4, except there is an obstacle blocking the path to the center and the agent only spawns in the top-right hand corner. This task has a trajectory length of T = 50 time steps with 100 demonstrations. Figure 5a visualizes the expert demonstrations where darker points are earlier time steps.
The results in Table 5 are consistent with the non-obstacle point mass task where BC-IRL generalizes better than a variety of MaxEnt IRL baselines. In the train setting, BC-IRL learns rewards that match the expert behavior with avoiding the obstacle and even achieves better performance than baselines in this task with 0.08 distance to the goal versus 0.41 to the goal for the best performing baseline in the train setting, f-IRL. BC-IRL generalizes better than baselines achieving 0.79 distance to goal compared to the best performing baseline MaxEnt, which also has access to oracle information. The reward learned by BC-IRL visualized in Figure 5b shows BC-IRL learns a complex reward to account for the obstacle. Figure 6 visualizes the rewards for all methods.
A.3 COMPARISON TO MANUALLY DEFINED REWARDS
We compare the rewards learned by BC-IRL to two hand-coded rewards. We visualize how well the learned rewards can train policies from scratch in the evaluation distribution in the point navigation with obstacle task. The reward learned by BC-IRL therefore must generalize. On the other hand, the hand-coded rewards do not require any learning. We include a sparse reward for achieving the goal, which does not require domain knowledge when implementing the reward. We also implement a dense reward, defined as the change in Euclidean distance to the goal, rt = dt−1 − dt, where dt is the distance of the agent to the goal at time t. Figure 7a shows policy training curves for the learned and hand-defined rewards. The sparse reward performs poorly and the policy fails to get closer to the goal. On the other hand, the rewards learned by BC-IRL guide the policy closer to the goal. The dense reward, which incorporates more domain knowledge about the task, performs better than the learned reward.
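Concretely, the two hand-coded baselines above can be written as follows; the success threshold used by the sparse reward is a hypothetical value, while the dense reward is exactly the change-in-distance form given in the text.

import numpy as np

def sparse_reward(agent_pos, goal, tol=0.1):
    # +1 only when the goal is reached; provides no shaping elsewhere.
    return float(np.linalg.norm(agent_pos - goal) < tol)

def dense_reward(prev_pos, agent_pos, goal):
    # r_t = d_{t-1} - d_t: positive when the agent moves closer to the goal.
    return np.linalg.norm(prev_pos - goal) - np.linalg.norm(agent_pos - goal)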
A.4 ANALYZING NUMBER OF INNER LOOP UPDATES
As described in Section 3.3, a hyperparameter in BC-IRL-PPO is the number of inner loop policy optimization steps K, for each reward function update. In our experiments, we selected K = 1. In Figure 7b we examine the training performance of BC-IRL-PPO in the point navigation task with no obstacle for various choices of K. We find that a wide variety of K values perform similarly. We,
therefore, selected K = 1 since it runs the fastest, with no need to track multiple policy updates in the meta optimization.
A.5 BC-IRL WITH MODEL-BASED POLICY OPTIMIZATION
We compare BC-IRL-PPO to a version of BC-IRL that uses model-based RL in the inner loop inspired by Das et al. (2020). A direct comparison to Das et al. (2020) is not possible because their method assumes access to a pre-trained dynamics model, while in our work, we do not assume access to a ground truth or pre-trained dynamics model. However, we compare to a version of Das et al. (2020) in the point mass navigation task with a ground truth dynamics model. Specifically, we use gradient-based MPC in the inner loop optimization as in Das et al. (2020), but with the BC-IRL outer loop objective. With the BC outer loop objective, it also learns generalizable rewards in the point mass navigation task, achieving 0.06 ± 0.03 distance to goal in “Eval (Train)" and 0.07 ± 0.03 in “Eval (Test)". However, in the point mass navigation task with the obstacle, this method fails to learn a reward and struggles to minimize the outer loop objective. We hypothesize that in longer horizon tasks, the MPC inner loop optimization of Das et al. (2020) easily gets stuck in local minima, and that differentiating through the entire MPC optimization is difficult.
B REACH TASK: FURTHER EXPERIMENT RESULTS
B.1 RL-TRAINING CURVES
In Figure 8 we visualize the training curves for the RL training used in Table 3. Figure 8a shows policy learning progress during the IRL training phase. In each setting, the performance is measured by using the current reward to train a policy and computing the success rate of the policy. Figure 8b to Figure 8d show the policy learning curves at test time, in the generalization settings, where the reward is frozen and must generalize to learn new policies on new goals (“Reward " transfer strategy). These plots show that all methods learn similarly during IRL training (Figure 8a). When transferring the learned rewards to test settings we see that BC-IRL-PPO performs better in training successful policies as the generalization difficulty increases with the most difficult generalization in Figure 8d.
B.2 TRANSFER REWARD+POLICY SETTING
Here, we evaluate the “Policy+Reward" transfer strategy to new environment settings where both the reward and policy are transferred. In the new setting, “Policy+Reward" uses the transferred reward to fine-tune the pre-trained transferred policy with RL. We show results in Table 6 for the “Policy+Reward" transfer strategy alongside the “Reward" transfer strategy from Table 3. We find that “Policy+Reward" performs slightly better than “Reward" in the Hard setting of generalization to new starting state distributions but otherwise performs similarly. Even in the “Policy+Reward" setting, AIRL struggles to learn a good policy in the Medium and Hard settings, achieving 38% and 81% success rate respectively.
B.3 ANALYZING THE NUMBER OF DEMONSTRATIONS
We analyze the effect of the number of demonstrations used for reward learning in Table 7. We find that using fewer demonstrations does not affect the training performance of BC-IRL-PPO and AIRL. We also find our method does just as well with 5 demos as 100 in the +75% noise setting, with any number of demonstrations achieving near-perfect success rates. On the other hand, the performance of AIRL degrades from 93% success rate with 100 demonstrations to 84% in the +75% noise setting. In the +100% noise setting, fewer demonstrations hurt performance for both methods, with our method dropping from 76% success to 69% success and AIRL from 38% success to 42% success.
B.4 BC-IRL HYPERPARAMETER ANALYSIS
BC-IRL-PPO requires a learning rate for the policy optimization and a learning rate for the reward optimization. We compare the performance of our algorithm for various choices of policy and reward learning rates in Table 8. We find that across many different learning rate settings our method achieves high rates of success, but high policy learning rates have a detrimental effect. High reward learning rates have a slight negative impact but are not as severe.
C FURTHER 2D POINT NAVIGATION DETAILS
The start state distributions for the 2D point navigation task are illustrated in Figure 9. The reward is learned using the start distribution in red on 4 equally spaced points from the center. Four demonstrations are also provided in this train start state distribution from each of the four corners. The reward is then transferred and a new policy is trained with the start state distribution in the magenta color. This start state distribution has no overlap with the train distribution and is also equally spaced. The reward must therefore generalize to providing rewards in this new state distribution. The hyperparameters for the methods from the 2D point navigation task in Section 4 are detailed in Table 9 for the no obstacle version and Table 10 for the obstacle version of the task. The reward
function / discriminator for all methods was a neural network with 1 hidden layer and 128 hidden dimension size with tanh-activations between the layers. Adam Kingma & Ba (2014) was used for policy and reward optimization. All RL training used 5M steps of experience for the training and testing setting for the navigation no obstacle task. f-IRL uses the same optimization and neural network hyperparameters for the discriminator and reward function. As in Ni et al. (2020), we clamp the output of the reward function within the range [−10, 10] and found this was beneficial for learning. In the navigation with obstacle task, training used 15M steps of experience and testing used 5M steps of experience. All experiments were run on an Intel(R) Core(TM) i9-9900X CPU @ 3.50GHz.
D FURTHER REACH TASK DETAILS
D.1 CHOICE OF BASELINES
The “Exact MaxEntIRL" approach is excluded because it cannot be computed exactly for highdimensional state spaces. GCL is excluded because of its poor performance on the toy task relative to other methods. We also compare to the following imitation learning methods which learn only policies and no transferable reward:
• Behavioral Cloning (BC) Bain & Sammut (1995): Train a policy using supervised learning to match the actions in the expert dataset.
• Generative Adversarial Imitation Learning (GAIL) Ho & Ermon (2016): Trains a discriminator to distinguish expert from agent transitions and then use the discriminator confusion score as the reward. This reward is coupled with the current policy Finn et al. (2016a) (referred to as a “pseudo-reward") and therefore cannot train policies from scratch.
D.2 POLICY+NETWORK REPRESENTATION
All methods use a neural network to represent the policy and reward with 1 hidden layer, 128 hidden units, and tanh-activation functions between the layers. We use PPO as the policy optimization method for all methods. All methods in all tasks use demonstrations obtained from a policy trained with PPO using a manually engineered reward.
D.3 HYPERPARAMETERS
The hyperparameters for all methods from the Reaching task are described in Table 11. The Adam optimizer Kingma & Ba (2014) was used for policy and reward optimization. All RL training used 1M steps of experience for the training and testing settings. The “Reward" and “Policy+Reward" transfer strategies trained policies with the same set of hyperparameters.
E TRIFINGER EXPERIMENT DETAILS
E.1 POLICY+NETWORK REPRESENTATION
All methods use a neural network to represent the policy and reward with 1 hidden layer, 128 hidden units, and tanh-activation functions between the layers. We use PPO as the policy optimization method for all methods. All methods in all tasks use demonstrations obtained from a policy trained with PPO using a manually engineered reward.
E.2 HYPERPARAMETERS
The hyperparameters for all methods for the Trifinger reaching task are described in Table 12. The Adam optimizer Kingma & Ba (2014) was used for policy and reward optimization. All RL training used 500k steps of experience for the reward training phase and 100k steps of experience for policy optimization in test settings. | 1. What is the focus and contribution of the paper on reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its novel algorithm for learning rewards?
3. What are the weaknesses of the paper regarding its experimental evaluation and comparison with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Providing informative rewards is crucial for effective reinforcement learning. Prior IRL methods overfit to demonstrations and fail to learn generalizable rewards. To combat this issue, this paper proposes BC-IRL, which uses gradient-based bi-level optimization to learn the reward. The authors evaluate on two continuous control tasks against IRL and imitation learning methods.
Strengths And Weaknesses
Strengths:
The paper proposes a novel algorithm for learning rewards with bi-level gradient-based optimization.
Weaknesses:
Unclear in what sense the authors are referring to when they use the term “generalization”. In the experiments, the generalization seems like it is limited to slightly different start and goal distributions. This should be made clearer.
GAIL is a method that trains a policy to match the state-action distribution of the expert data; it should be clarified how the goal here is different, since this method also aims to learn a policy that matches the behavior of the expert.
Experimental evaluation is very limited. Only 2 tasks of a similar nature are evaluated, and the tasks are reaching, which is very simple. Also, the environments have a low-dimensional state space, despite the simplicity of the tasks.
How does this compare to AIRL or GAIL where the learned discriminator uses mixup regularization or spectral norm? These are 2 common techniques (among others) for making the discriminator less brittle. Much more experimentation is needed to conclude how much more generalizable the rewards learned by BC-IRL are.
Clarity, Quality, Novelty And Reproducibility
Clarity and quality need to be greatly improved, see above for more detailed discussion of weaknesses. |
ICLR | Title
Self-Supervised Prime-Dual Networks for Few-Shot Image Classification
Abstract
We construct a prime-dual network structure for few-shot learning which establishes a commutative relationship between the support set and the query set, as well as a new self-supervision constraint for highly effective few-shot learning. Specifically, the prime network performs the forward label prediction of the query set from the support set, while the dual network performs the reverse label prediction of the support set from the query set. This forward and reverse prediction process with commutated support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. This unique constraint can be used to significantly improve the training performance of few-shot learning through coupled prime and dual network training. It can also be used as an objective function for optimization during the testing stage to refine the query label prediction results. Our extensive experimental results demonstrate that the proposed self-supervised commutative learning and optimization outperforms existing state-of-the-art few-shot learning methods by large margins on various benchmark datasets.
1 INTRODUCTION
Few-shot image classification aims to classify images from novel categories (query samples) based on very few labeled samples from each class (support images) (Hong et al., 2020a; Sun et al., 2021). During the training stage, the few-shot learning (FSL) model is given a set of support-query set pairs with class labels. Once successfully trained, the model needs to be tested on unseen classes. The major challenge here is that the number of available support samples N is very small, often N ≤ 5. In the extreme case of N = 1, it is called one-shot learning. In order to achieve this so-called learn-to-learn capability, the FSL model needs to capture the inherent visual or semantic relationship between the support samples and query samples, and more importantly, this learned relationship or prediction should be able to generalize well onto unseen classes (Liu et al., 2020d).
A fundamental challenge in prediction is the following: if we know entity A and are trying to predict entity B, how do we know whether the prediction of B, denoted by B̂, is accurate or not? Is there any way that we can verify the accuracy of the prediction B̂? This is impossible in general, since B has no ground truth available for us to evaluate or verify the prediction accuracy. If we can come up with an indirect approach to effectively evaluate the prediction accuracy, it is expected that the learning and prediction performance can be significantly improved.
In this work, we propose to explore a prime-dual commutative network design for effective prediction, specifically for few-shot image classification. As illustrated in Figure 1, the prime network Φ is the original network that learns the forward prediction from A to B̂ = Φ(A). The dual network Γ performs the reverse prediction from B to  = Γ(B). If we cascade these two networks together which establishes a prediction loop from A to B and then back to A, we have
 = Γ(B̂) = Γ(Φ(A)). (1)
Since A is given, which has the ground-truth value, the difference between A and its prime-dual loop prediction result  forms a self-supervision loss
LS = d(A, Â) = d(A,Γ(Φ(A))), (2)
where d is a distance metric function. This self-supervision loss LS can be used to improve the training performance based on the coupling between the prime and dual networks. Furthermore, it can be used to verify and adjust the prediction result by minimizing the self-supervision loss.
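To make the loop in Eqs. (1)-(2) concrete, the following is a minimal PyTorch-style sketch (not the authors' implementation; the module names prime_net and dual_net, their sizes, and the MSE distance d are illustrative assumptions) of how the self-supervision loss on the known entity A can be computed and backpropagated through both networks.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative prime and dual networks; in the paper both are GNN-based predictors.
prime_net = nn.Linear(16, 16)   # Phi: A -> B_hat
dual_net = nn.Linear(16, 16)    # Gamma: B -> A_hat

def self_supervision_loss(A):
    # Forward prediction B_hat = Phi(A), then reverse prediction A_hat = Gamma(B_hat).
    B_hat = prime_net(A)
    A_hat = dual_net(B_hat)
    # d(A, A_hat) with an MSE distance, as in Eq. (2).
    return F.mse_loss(A_hat, A)

A = torch.randn(8, 16)          # a batch of known entities with ground truth
loss = self_supervision_loss(A)
loss.backward()                 # gradients flow through both Phi and Gamma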
In this work, we propose to study this prime-dual network design with self-supervision for few-shot learning by exploiting the commutative relationship between the support set (entity A) and the query set (entity B). Specifically, the prime network learns to predict the labels of query samples using the support set with ground-truth labels as training samples. Meanwhile, the dual network learns to predict the labels of the support samples using the query set with ground-truth labels as training samples. For example, in 5-way 1-shot learning, the support set consists of 5 images from 5 classes with only one image per class. The query set also has 5 images from 5 classes. When training the prime and dual networks, the support set and the query set are switched for training samples. This forward and reverse prediction process with commutative support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. The prime-dual networks are jointly trained with the help of the self-supervision loss. This loss is also used during the testing stage to adjust and optimize the prediction results. Our extensive experimental results demonstrate that the proposed self-supervised commutative learning and optimization method outperforms existing state-of-the-art few-shot learning methods by a large margin on various benchmark datasets.
2 RELATED WORK AND UNIQUE CONTRIBUTIONS
Few-shot learning (FSL) aims to recognize instances from unseen categories with few labeled samples. There are three major categories of methods that have been developed for FSL. (1) Data Augmentation is the most direct method for few-shot learning, which explores different approaches to synthesize images to address the issue of few training samples. For example, self-training jigsaw augmentation (Chen et al., 2019) is able to synthesize new images by segmenting and reorganizing labeled and unlabeled gallery images. Mangla et al. (2020) apply self-supervision algorithms augmented with manifold mixup (Verma et al., 2019) for few-shot classification tasks. The F2GAN (Hong et al., 2020b) and MatchingGAN methods (Hong et al., 2020a) use generative adversarial networks (GANs) to construct high-quality samples for new image categories. (2) Optimization-based methods aim to learn a good initial network model for the classifier. This learned model can be then quickly adapted to novel classes using a few labeled samples. MAML (Finn et al., 2017) proposes to train a set of initialization models based on second-order gradients and meta-optimization. TAML (Jamal & Qi, 2019) reduces the bias introduced by the MAML algorithm to enforce equity between the tasks. In the Latent Embedding Optimization (LEO) method (Rusu et al., 2018), gradient-based optimization is performed in a low-dimensional latent space instead of the original high-dimensional parameter space. (3) Metric-based methods aim to learn a good metric space so that samples from novel categories can be effectively distinguished and correctly classified. For example, MatchingNet (Vinyals et al., 2016) applies a recurrent network to calculate the cosine similarity between samples. ProtoNet (Snell et al., 2017) compares features between samples in the Euclidean space. RelationNet (Sung et al., 2018) uses a CNN model and (Garcia & Bruna, 2017) uses the graph convolution network (GNN) to learn the metric relationship.
In this work, we also consider cross-domain FSL. For the cross-domain classification task, the model needs to generalize well from the source domain to a new or unseen target domain without accessing samples from the unseen domain during the training stage. Sun et al. (2021) propose a model-agnostic explanation-guided training method that dynamically finds and emphasizes the features which are important for the predictions. This improves the model generalization capability. To characterize the variation of image feature distribution across different domains, the LFT method (Tseng et al., 2020) learns the noise distribution by adding feature-wise transformation layers to the
image encoder. To avoid over-fitting on the source domain and increase the generalization capability to the target domain, the batch spectral regularization (BSR) method (Liu et al., 2020b) attempts to suppress all singular values of the batch feature matrices during pre-training. Another set of methods (Shankar et al., 2018; Volpi et al., 2018) learn to augment the input data with adversarial learning (Yang et al., 2020b) in order to generalize the task from the source domain to the unseen target domain.
In this work, we propose a commutative prime-dual network design for few-shot learning. In the literature, the mutual dependency and reciprocal relationship between multiple modules have been explored to achieve better performance. For example, (Xu et al., 2020) has developed a reciprocal cross-task architecture for image segmentation, which improves the learning efficiency and generation accuracy by exploiting the commonalities and differences across tasks. Sun et al. (2020) design a reciprocal learning network for human trajectory prediction, which consists of forward and backward prediction neural networks. The reciprocal learning enforces consistency between the forward and backward trajectory prediction, which helps each other to improve the learning performance and achieve higher accuracy. Zhu et al. (2017) design CycleGAN, which contains two GANs forming a cycle network that can translate images between two domains to achieve style transfer. Liu et al. (2021) develop a Temporal Reciprocal Learning (TRL) approach to fully explore the discriminative information from the disentangled features. Zhang et al. (2021b) design a support-query mutual guidance architecture for few-shot object detection.
Unique Contributions. Compared to existing work in the literature, the major contributions of this work include: (1) We propose a new prime-dual network design to explore the commutative relationship between support and query sets and establish a unique self-supervision constraint for few-shot learning. (2) We incorporate the self-supervision loss into the coupled prime-dual network training to improve the few-shot learning performance. (3) During the test stage, using the dual network to map the prediction results back to the support set domain and using the self-supervision constraint as an objective function, we develop an optimization-based scheme to verify and optimize the performance of few-shot learning. (4) Our proposed method has significantly advanced the state-of-the-art performance of few-shot image classification.
3 METHOD
In this section, we present our method of self-supervised prime-dual network (SPDN) learning and optimization for few-shot image classification.
3.1 SELF-SUPERVISED COMMUTATIVE LEARNING
Figure 2 provides an overview of our proposed method of self-supervised commutative learning and optimization for few-shot image classification. In a typical setting of K-way N-shot learning, N labeled image samples from each of the K classes form the support set. For example, in 5-way 1-shot learning, K = 5 and N = 1. Given a very small support set S = {Skn | 1 ≤ k ≤ K, 1 ≤ n ≤ N}, the objective of the FSL is to predict the labels of the query images Q = {Qkm | 1 ≤ k ≤ K, 1 ≤ m ≤ M} from the same K classes in M batches. During the training stage, the labels of both support and query samples are available. The prime network ΦS→Q for few-shot classification is trained on these support-query sets, aiming to learn and represent the inherent visual
or semantic relationship between the support and query images. Once successfully learned, we will apply this network to unseen classes. Specifically, in the test stage, given a labeled support set S′ = {S′kn|1 ≤ k ≤ K, 1 ≤ n ≤ N} from these K unseen classes, we need to predict the labels for the query set Q′ = {Q′km|1 ≤ k ≤ K, 1 ≤ m ≤M} also from these unseen classes. Therefore, the fundamental challenge of FSL is to characterize and learn the inherent relationship between the support set S and the query set Q. Once learned, we can then shift or transfer this relationship to S′ and Q′ of unseen classes to infer the labels of Q′. In this work, as discussed in the following section, we propose to establish a graph neural network (GNN) to characterize and learn this relationship.
We recognize that, within the framework of few-shot learning, the support set and the query set are in an equal and symmetric position to each other. More specifically, if we can learn to predict the labels of query set Q from support set S, certainly, we can switch their order, predicting the labels of the support set S from the query set Q using the same network architecture. This observation leads to an interesting commutative prime-dual network design for few-shot learning. As illustrated in Figure 2, we introduce a dual network ΓQ→S, which performs the reverse label prediction of the support set S from the query set Q. Let L(S) and L(Q) be the label vectors of S and Q, respectively. Let L̂(S) and L̂(Q) be the predicted labels. The forward prediction by the prime network can be written as
L̂(Q) = ΦS→Q[L(S)], (3)
while the reverse prediction by the dual network can be written as
L̂(S) = ΓQ→S[L(Q)], (4)
If both networks Φ and Γ are well trained, and if we pass the label prediction output of the prime network as input to the dual network, then, we expect that the predicted labels for the support set should be close to its ground-truth. This leads to the following self-supervision loss
LSS = ||L(S)− L̂(S)||2 = ||L(S)− ΓQ→S[L̂(Q)]||2 = ||L(S)− ΓQ→S[ΦS→Q[L(S)]]||2. (5)
This self-supervision constraint can be established on both the support set and the query set, resulting in coupled prime-dual network training. Figures 3(a) and (b) show the training processes for the prime network and the dual network, respectively. Specifically, from the support set S, the prime network learns to predict the labels of the query set Q. As in existing few-shot learning, we have the loss LPQ = ||L̂(Q)−L(Q)||2 between the predicted query labels and their ground-truth values. Then, using the query samples and their predicted labels as input to the dual network ΓQ→S, we can predict the labels of the support set L̂(S) and compute the self-supervision loss LPS = ||L̂(S) − L(S)||2. These two losses are combined to form the loss function for training the prime network
LP = ||L̂Φ(Q)− L(Q)||2 + α · ||L̂Φ,Γ(S)− L(S)||2. (6)
α is a weighting parameter whose default value is set to be 0.5 in our experiments. Similarly, for the training of the dual network, as shown in Figure 3(b), its loss function is given by
LD = ||L̂Γ(S)− L(S)||2 + α · ||L̂Γ,Φ(Q)− L(Q)||2. (7)
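As a hedged illustration of Eqs. (6)-(7), the sketch below computes the two coupled losses for one episode. Here phi and gamma stand in for the prime and dual networks, the label arguments are one-hot matrices, and the call signatures and the use of squared error are assumptions made only to keep the example compact, not the exact training code.

import torch.nn.functional as F

alpha = 0.5  # weighting parameter, default value used in the paper

def prime_loss(phi, gamma, S_feat, L_S, Q_feat, L_Q):
    # Eq. (6): query-label loss plus the self-supervision loss on the support labels.
    L_Q_hat = phi(S_feat, L_S, Q_feat)        # forward prediction of the query labels
    L_S_hat = gamma(Q_feat, L_Q_hat, S_feat)  # reverse prediction from the predicted query labels
    return F.mse_loss(L_Q_hat, L_Q) + alpha * F.mse_loss(L_S_hat, L_S)

def dual_loss(phi, gamma, S_feat, L_S, Q_feat, L_Q):
    # Eq. (7): the symmetric loss used to train the dual network.
    L_S_hat = gamma(Q_feat, L_Q, S_feat)
    L_Q_hat = phi(S_feat, L_S_hat, Q_feat)
    return F.mse_loss(L_S_hat, L_S) + alpha * F.mse_loss(L_Q_hat, L_Q)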
3.2 GRAPH NEURAL NETWORK FOR FEW-SHOT IMAGE CLASSIFICATION
The proposed prime and dual networks share the same network design, which will be discussed in this section. The only difference between these two networks is that their support and query samples are switched. In the following, we use the prime network as an example to explain its design.
The central task of few-shot learning is to characterize the inherent relationship between the query and support samples, based on which we can infer the labels of the query samples from the support samples (Tseng et al., 2020; Liu et al., 2020b). In this work, we propose to use a graph neural network (GNN) to model and analyze this relationship. In K-way N-shot learning, given K classes, each with N support samples {Skn}, we need to learn the prime network to predict the labels for K query samples {Qk}. This implies that, in each of the M training batches, we have K × (N + 1) support and query samples. As illustrated in Figure 4(a), we use a backbone network, for example, ResNet-10 or ResNet-12, to extract features for each of these support and query samples. We denote their features by S = {s^t_kn} and Q = {q^t_k}, where t represents the update iteration index in the GNN. Initially, t = 0. These support-query sample features form the nodes of the GNN, denoted by {x^t_j | 1 ≤ j ≤ J}, J = K × (N + 1), for simplicity of notation. The edge between two graph nodes represents the correlation ψ(x^t_i, x^t_j) between nodes x^t_i and x^t_j. Note that our GNN has two groups of nodes: support sample nodes and query sample nodes. The support sample nodes have labels, while the labels of the query samples need to be predicted by the prime network. If x^t_i and x^t_j are both support nodes, we have
ψ(x^t_i, x^t_j) = 1 if L(x^t_i) = L(x^t_j), and 0 if L(x^t_i) ≠ L(x^t_j). (8)
Here, L(·) represents the label of the corresponding support sample. Since the labels of the query nodes are unknown, the correlations for edges linked to these query nodes need to be learned by the GNN. Initially, we set them to random values between 0 and 1.
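A small sketch of how the ground-truth edge values in Eq. (8) could be assembled, assuming the support labels are given as an integer vector and that edges touching query nodes are initialised uniformly at random in [0, 1]; the function and variable names are illustrative, not the authors' code.

import numpy as np

def init_edge_matrix(support_labels, num_query, rng=np.random.default_rng(0)):
    n_s = len(support_labels)
    J = n_s + num_query
    # Edges involving query nodes start as random values in [0, 1] and are learned by the GNN.
    psi = rng.uniform(0.0, 1.0, size=(J, J))
    # Eq. (8): support-support edges are 1 if the two nodes share a class label, else 0.
    labels = np.asarray(support_labels)
    same = (labels[:, None] == labels[None, :])
    psi[:n_s, :n_s] = same.astype(float)
    return psi

psi0 = init_edge_matrix(support_labels=[0, 1, 2, 3, 4], num_query=5)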
Each node of the GNN combines features from these neighboring nodes with the corresponding correlation as weights and updates its own feature by learning a multi-layer perceptron (MLP) network Go[·] as follows
x^{t+1}_j = Go[ Σ_{i=1}^{J} x^t_i · ψ(x^t_i, x^t_j) ]. (9)
At each edge, another MLP network Ge[·, ·] is learned to predict the correlation between two graph nodes,
ψ(x^t_i, x^t_j) = Ge[x^t_i, x^t_j], (10)
whose ground-truth values are obtained using the scheme discussed above. The features generated by the prime GNN are then passed to a classification network to predict the query labels. Both the prime and dual GNNs are jointly trained with their final classification networks.
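The node and edge updates in Eqs. (9)-(10) can be sketched as below. The two-layer MLPs Go and Ge, the feature dimension, and the sigmoid-based edge scoring are assumptions made only to keep the example self-contained; the paper does not spell out these details.

import torch
import torch.nn as nn

d = 64
Go = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))       # node-update MLP of Eq. (9)
Ge = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))   # edge-scoring MLP of Eq. (10)

def gnn_step(x, psi):
    # Eq. (9): each node aggregates neighbour features weighted by the edge correlations.
    x_next = Go(psi @ x)
    # Eq. (10): re-estimate every pairwise correlation from the updated node features.
    J = x_next.size(0)
    pairs = torch.cat([x_next.unsqueeze(1).expand(J, J, d),
                       x_next.unsqueeze(0).expand(J, J, d)], dim=-1)
    psi_next = torch.sigmoid(Ge(pairs)).squeeze(-1)
    return x_next, psi_next

x0 = torch.randn(10, d)        # K*(N+1) node features from the backbone
psi0 = torch.rand(10, 10)      # initial edge matrix, e.g. from Eq. (8)
x1, psi1 = gnn_step(x0, psi0)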
3.3 SELF-SUPERVISED OPTIMIZATION OF FEW-SHOT IMAGE CLASSIFICATION
Besides improving the training performance through mutual enforcement, the proposed self-supervised prime-dual network design can also be used in the testing stage to optimize the label prediction of query samples. Specifically, we can use the dual network to refine and optimize the label prediction results obtained by the prime network. As illustrated in Figure 4(b), given a support
set S and a query set Q, the support set has class labels L(S). Let L̂(Q) be the prediction result, the output of the softmax layer of the classification network. In existing approaches of few-shot learning or other network prediction scenarios, we are not able to verify if the prediction is accurate or not since the ground-truth is not available for test samples. However, in this work, with the dual network ΓQ→S being successfully trained, we can use the prediction result L̂(Q) as input to the dual network to predict the class labels of the original support samples
L̂(S) = ΓQ→S[L̂(Q)]. (11)
Note that these support samples DO have ground-truth labels L(S). Define the label prediction error by
El(S) = ||L(S)− L̂(S)||2. (12)
We assume that the correct query label vector L∗(Q) lies within the neighborhood of the prediction result L̂(Q). Let Ω be the set of candidate assignments of query labels which are within the neighborhood of L̂(Q). For example,
Ω = {L̃(Q) : ||L̃(Q)− L̂(Q)||2 ≤ ∆}, (13)
where ∆ is a given threshold for the label vector distance. We then search the candidate query labels L̃(Q) within the neighborhood set Ω to minimize the support label prediction error El(S) in (12). The optimized prediction of the query samples is given by
L∗(Q) = argmin_{L̃(Q)∈Ω} ||L(S)− ΓQ→S[L̃(Q)]||2. (14)
From the experimental results, we will see that this unique self-supervised optimization of the query label prediction is able to significantly improve the few-shot image classification performance.
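As a hedged sketch of the test-time optimization in Eqs. (11)-(14), the loop below scores every candidate query-label assignment in the neighborhood Ω by the support-label error it induces through the dual network and keeps the best one. The callable gamma and the iterable of candidates are illustrative placeholders for the trained dual network and the neighborhood set.

import torch.nn.functional as F

def self_supervised_refine(gamma, Q_feat, S_feat, L_S, L_Q_hat, candidates):
    # candidates: label matrices within distance Delta of the initial prediction L_Q_hat (Eq. 13).
    best_labels, best_err = L_Q_hat, float("inf")
    for L_Q_tilde in candidates:
        L_S_hat = gamma(Q_feat, L_Q_tilde, S_feat)   # Eq. (11): map candidate back to support labels
        err = F.mse_loss(L_S_hat, L_S).item()        # Eq. (12): error against the known support labels
        if err < best_err:                           # Eq. (14): keep the minimiser
            best_labels, best_err = L_Q_tilde, err
    return best_labels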
4 EXPERIMENTAL RESULTS
In this section, we provide experimental results on various benchmark datasets to demonstrate the performance of our proposed SPDN method for few-shot learning.
4.1 IMPLEMENTATION DETAILS
We use ResNet-10 as the backbone of our feature encoder. The input images are resized to 224×224 and the output feature vector size is 1 × 1 × 512. We choose the Adam optimizer with a learning rate of 0.01 and a batch size of 64 for training of 400 epochs. In the episodic meta-training stage, we use the graph neural network (GNN) discussed in the above section to generate the feature embedding for query samples. The prime network ΦS→Q and the dual network ΓQ→S are jointly trained. These two networks are both trained for 400 epochs with 100 episodes per epoch. In each episode, we randomly select K categories (K=5, 5-way) from the training set. Then, we randomly select N samples (N=1 or 5 for 1-shot or 5-shot) from each category to compose support set and query set, respectively. In the test stage, we use the average of 1000 trials as the final result for all the experiments. For each trial, we randomly select K categories from the test set. Similar to the training stage, N (1 or 5) samples are randomly selected as the support set and 15 samples as the query set from each category.
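A minimal sketch of the episodic sampling described above (K = 5 ways, N support shots, and 15 query images per class at test time). The dataset interface images_by_class is a hypothetical dict mapping class id to a list of samples, introduced only for illustration.

import random

def sample_episode(images_by_class, K=5, N=1, Q=15, rng=random.Random(0)):
    classes = rng.sample(sorted(images_by_class.keys()), K)
    support, query = [], []
    for label, c in enumerate(classes):
        imgs = rng.sample(images_by_class[c], N + Q)        # assumes at least N+Q images per class
        support += [(x, label) for x in imgs[:N]]           # N labelled support samples per class
        query += [(x, label) for x in imgs[N:]]             # Q query samples whose labels are predicted
    return support, query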
4.2 DATASETS
Five benchmark datasets are used for performance evaluation and comparison with other methods in the literature, Mini-ImageNet (Ravi & Larochelle, 2016), CUB (Wah et al., 2011), Cars (Krause et al., 2013), Places (Zhou et al., 2017) and Plantae (Van Horn et al., 2018). More details about dataset settings are presented in Appendix A.1.
4.3 RESULTS
To demonstrate the performance of our SPDN method, we conduct a series of experiments under different few-shot classification settings. In the literature, there are two major scenarios for testing FSL methods: (1) intra-domain learning, where the training classes and test classes are from the same object domain, for example, both from the Mini-ImageNet classes, and (2) cross-domain learning, where the FSL is trained on one dataset (e.g., Mini-ImageNet) and the testing is performed on another dataset (e.g., CUB). Certainly, the cross-domain scenario is more challenging.
4.3.1 INTRA-DOMAIN FSL RESULTS.
First, we conduct intra-domain FSL experiments on the Mini-ImageNet. Table 1 summarizes the performance comparison with state-of-the-art FSL methods mainly developed in the past two years. We also list the backbone network used for extracting the features for the input images. We can see that, for the 5-way 1-shot image classification task, our method (with ResNet-10 backbone) outperforms the current best method (with ResNet-12 backbone) from Zhang et al. (2021a) by 5.42%. Another method which uses the same ResNet-10 backbone is the GNN+FT method (Tseng et al., 2020). Our method outperforms this method by 12.23%. For the 5-way 5-shot classification task, our method outperforms the current best by more than 5%, which is quite significant.
Second, we evaluate our method on intra-domain fine-grained image classification tasks on the CUB dataset. In this case, the FSL needs to learn subtle features to distinguish objects from close categories. Table 2 summarizes the performance results on 5-way 1-shot and 5-way 5-shot classification tasks. We can see that, for the one-shot classification task, our method outperforms the current best method, FRN (Wertheimer et al., 2021) by 6.72%. For the 5-shot classification task, our method improves the classification accuracy by 2.80%.
4.3.2 CROSS-DOMAIN FSL RESULTS.
The cross-domain few-shot learning is more challenging. Following existing methods, we train the model on the Mini-ImageNet object domain and test the trained model on other domains, including the CUB, Cars, Places and Plantae datasets. Table 3 summarizes the results for 5-way 1-shot classification (top) and 5-way 5-shot classification (bottom). We can see that our SPDN method has dramatically improved the classification accuracy on these cross-domain FSL tasks. For example, on the Cars dataset, our method outperforms the current best TPN+ATA (Wang & Deng, 2021) by 4.15%. On the Plantae dataset, the performance gain is 5.59%, which is quite significant. For the 5-way 5-shot classification task, the performance gains on these datasets are also very significant,
ranging from 0.37% to 8.68%. This demonstrates that our SPDN method is able to learn the inherent visual relationship between the support and query samples and can generalize very well onto unseen classes in new object domains.
4.4 ABLATION STUDIES
In this section, we conduct ablation studies to further understand the proposed SPDN method and analyze the contributions of major algorithm components.
From an algorithm design perspective, our SPDN method has two major components: self-supervised learning (SSL) of the prime and dual networks, and self-supervised optimization (SSO) of the predicted query labels. We adopt the single GNN-based model (Tseng et al., 2020) as the baseline of our method, and the SSL and SSO algorithm components are added onto this baseline method. To understand the performance of these two algorithm components, in the following experiment, we train the SPDN method using training samples from the Mini-ImageNet. We conduct intra-domain few-shot image classification on the Mini-ImageNet and cross-domain few-shot image classification on the CUB, Cars, Places, and Plantae datasets. Table 4 summarizes the results for 5-way 1-shot and 5-way 5-shot image classification. The second column shows the intra-domain few-shot image classification results on the Mini-ImageNet. The remaining columns show the cross-domain classification results. We can see that the self-supervised prime-dual network training is able to improve the classification accuracy by up to 1.8%. The performance gain achieved by the self-supervised optimization of the predicted query labels is much more significant, ranging from 7% to 10%. This dramatic performance gain was a surprise to us. In the following, we provide additional ablation studies to further understand this SSO algorithm module. Compared to the SSO module, the performance improvement by the first SSL module is relatively small. This is because the major new contribution of the SSL module is the self-supervised loss, which aims to further improve the learning on the baseline GNN. However, it has successfully trained a dual network, which plays a very important role in the second SSO module: it is used to search and optimize the predicted labels of the query samples, resulting in a major performance gain. We discuss the specific optimization results of our self-supervised optimization (SSO) module through an experiment in Appendix A.3.
In the following experiments, we attempt to further understand the behavior and performance of the SSO algorithm module. First, we conduct an experiment to understand the search and optimization process of SSO. Suppose L(Q) is the true label of the query samples. Let
L̃(Q) = L(Q) + λ ·∆L, (15)
be a label vector within the neighborhood of L(Q). Here, ∆L is a pre-generated random vector and λ is a disturbance coefficient to control the amount of variation. With the label vector L̃(Q) and the query samples, we can predict the labels of the support set using the dual network. Then, we can compute the prediction error El(S) as in (12). Figure 5(a) shows the label prediction error El(S) as a function of λ. This experiment was performed on 5-way 1-shot image classification on the CUB dataset. We can see that the minimum error is achieved at λ = 0. This implies that the ground-truth labels of the query samples have the minimum self-supervised label error El(S). This is a very important property of our SSO method. It suggests that, when the predicted query labels are not correct and the ground-truth labels are within their neighborhood, we can use the SSO method to search for these ground-truth labels using the minimum self-supervised support label error criterion.
During our self-supervised optimization of the predicted query labels, we choose a small neighborhood Ω around the predicted query labels L̂(Q) with a maximum distance ∆. This ∆ controls the number of search positions in the label space. If we search more positions or candidate query labels, we can obtain smaller self-supervised label errors of the support samples El(S). Figure 5(b) plots the value of El(S) as a function of the number of search positions. This experiment was performed on 5-way 1-shot image classification on the CUB dataset. We can see that the error drops significantly with the number of searched positions. We recognize that, for each search position, we need to run the dual network once. This does introduce extra computational complexity, but the amount of performance gain is very appealing. In our experiments, we limited the number of search positions to 5, i.e., the nearest 5 label vectors (integer vectors) to the predicted query label.
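To illustrate how a small neighborhood Ω of integer label vectors can be enumerated for a single query sample, the sketch below keeps the one-hot assignments closest to the soft prediction in L2 distance. This is only one plausible reading of the neighborhood search described above, not the authors' exact procedure.

import numpy as np

def nearest_one_hot_candidates(soft_pred, num_classes=5, top=5):
    # soft_pred: softmax output for one query sample, shape (num_classes,).
    one_hots = np.eye(num_classes)
    dists = np.linalg.norm(one_hots - np.asarray(soft_pred)[None, :], axis=1)
    order = np.argsort(dists)[:top]          # indices of the `top` closest integer label vectors
    return [one_hots[i] for i in order]

candidates = nearest_one_hot_candidates([0.4, 0.3, 0.1, 0.1, 0.1])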
5 CONCLUSION
In this work, we have successfully developed a novel prime-dual network structure for few-shot learning which explores the commutative relationship between the support set and the query set. The prime network performs the forward label prediction from the support set to the query set, while the dual network performs the reverse label prediction from the query set to the support set. This forward and reverse prediction process with commuted support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. We have established a self-supervised support error metric and used the learned dual network to optimize the predicted query labels during the testing stage. Our extensive experimental results on both intra-domain and cross-domain few-shot image classification have demonstrated that the proposed self-supervised prime-dual network learning and optimization significantly improve the performance of few-shot learning, especially for cross-domain few-shot learning tasks. We have also conducted detailed ablation studies to provide an in-depth understanding of the significant performance gain achieved by the self-supervised optimization process. The self-supervised prime-dual network design is general and can be naturally incorporated into other prediction and learning methods.
A APPENDIX
In this appendix, we provide more details of experimental settings and additional results to further understand the performance of our proposed method.
A.1 DATASET
In our experiments, the following 5 datasets are used for performance evaluations and comparisons.
(1) Mini-ImageNet consists of 100 categories randomly selected from ImageNet (Deng et al., 2009), and each category has 600 samples of size 84 × 84. The 100 categories are divided into a training set with 64 categories, a validation set with 16 categories, and a testing set with 20 categories. (2) CUB is a fine-grained dataset with 200 bird species mainly living in North America (Wah et al., 2011). We randomly split the dataset into 100, 50, and 50 classes for training, validation, and testing, respectively. (3) Cars contains 16,185 images of 207 fine-grained car types, which consist of 10 BMW models and 197 other car types (Krause et al., 2013). We randomly selected 196 categories, including 98 for training, 49 for validation, and 49 for testing. (4) Places is a dataset of scene images (Zhou et al., 2017), containing 73,000 training images from 365 scene categories, which are divided into 183 categories for training, 91 for validation, and 91 for testing. (5) Plantae is a sub-set of the iNat2017 dataset (Van Horn et al., 2018), which contains 200 types of plants and a total of 47,242 images. We split them into 100 classes for training, 50 for validation, and 50 for testing.
The Mini-ImageNet is the most popular benchmark for few-shot classification. It is usually used as a baseline dataset for model training. The CUB dataset is more frequently used for few-shot fine-grained classification tasks. The Cars, Places and Plantae datasets are used for model testing in cross-domain few-shot classification tasks.
A.2 THE VISUALIZATION OF FEATURES IN SELF-SUPERVISED LEARNING
The proposed SPDN method incorporates the self-supervised constraint into the training process, aiming to improve the quality of the learned features and the generalization capability of few-shot learning. Figure 6 shows the t-SNE visualization of the learned features of 100 samples per class from the Mini-ImageNet dataset in a 5-way 5-shot setting. We can see that, with self-supervised learning, the features of each class are more concentrated into clusters.
A.3 SELF-SUPERVISED OPTIMIZATION (SSO) MODULE
The proposed self-supervised optimization (SSO) module aims to correct the predicted query labels. In the following experiment, we try to understand how many incorrect query label predictions have been successfully corrected by the SSO module. Table 5 shows the results from 5-way 1-shot classification on the CUB dataset. We keep track of 75 randomly selected query samples. If we predict the query labels using only the prime network without the SSO (before SSO), the number of query samples with incorrect labels is 57, and the number of correct ones is 18, which
is very low. After we apply the SSO, the number of query samples with incorrect labels is reduced to 45, and the number of correct ones increases to 30. We can examine this correction process in more detail. The SSO module has corrected the labels of 15 samples, as shown in the third row (Incorrect → Correct Label) of the table. However, it has also mis-corrected the labels of 3 samples, as shown in the last row (Correct → Incorrect Label) of the table. In our experiments, we have observed that the SSO module corrects the labels of many more query samples than it mis-corrects. This implies that the dual network and the self-supervision constraint are working very well for few-shot learning, and it explains the significant performance gain achieved by the proposed self-supervised prime-dual network method.
A.4 EXTENSION TO N-SHOT IMAGE CLASSIFICATION
In the main paper, we have used 5-way 1-shot image classification as an example to present our method of self-supervised prime-dual network (SPDN) learning and optimization for few-shot image classification. This method can be naturally extended to generic K-way N-shot image classification. Figure 7 illustrates an example of the extension to 5-way 5-shot. In this case, each class, in both training and test stages, has 5 support samples and one query sample. In the prime network, we use these 5 support samples to predict the label of the query sample. To ensure that the dual network shares the same network structure as the prime network, for the reverse prediction, we randomly select one sample (denoted by s0) from the support set and switch it with the query sample q0, as sketched below. During the training and inference of the dual network, this updated support set is used to predict the label of s0, which is then compared to its ground-truth label to compute the self-supervised loss. This loss is used for joint prime-dual network training, as well as for the self-supervised optimization of the label prediction for the query sample q0.
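A small sketch of the swap described above for the N-shot dual pass: one support sample s0 per class is exchanged with the query sample q0 so that the dual network sees exactly the same input structure as the prime network. The list layouts and the random choice of which support sample to hold out are illustrative assumptions.

import random

def swap_for_dual(support, query, rng=random.Random(0)):
    # support: list of (image, label) with N samples per class; query: one (image, label) per class.
    support = list(support)
    held_out = []
    for q_img, q_label in query:
        idxs = [i for i, (_, lab) in enumerate(support) if lab == q_label]
        i = rng.choice(idxs)                 # pick one support sample s0 of the same class
        held_out.append(support[i])          # s0 becomes the sample whose label the dual net predicts
        support[i] = (q_img, q_label)        # the query sample q0 takes its place in the support set
    return support, held_out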
A.5 FURTHER UNDERSTANDING OF THE SELF-SUPERVISED OPTIMIZATION OF QUERY LABEL PREDICTION
In our proposed SPDN method, the self-supervised optimization of the query label prediction plays an important role and improves the performance significantly. In this section, we provide more experimental results to demonstrate and further understand the performance of this algorithm module. Figure 8 shows 6 examples of 5-way 1-shot image classification. Initially, the predicted labels for these query samples are incorrect. Then, we perform a self-supervised search for the query labels within the neighborhood of the predicted label. We use these predicted labels as input to the dual network to predict the labels of the support samples. The label prediction error of the support
samples is used as the optimization objective. In Figure 8, under each query sample, we show how the optimization objective (support label error) decreases with the number of searched candidate query labels. These results show that it is sufficient to search 5-8 candidate query label vectors.
It should be noted that the self-supervised optimization of the query label prediction can correct incorrect label predictions, adjusting them into correct ones. Certainly, it can also make mistakes, adjusting correct label predictions into incorrect ones. However, the probability of such mis-correction is much lower. For example, Table 6 shows the percentages of correct and incorrect adjustments made by the optimization module on the Cars dataset. Specifically, the percentage of query samples whose incorrect labels are adjusted into correct ones is 21.6%, while the percentage of incorrect adjustments is 5.7%. This results in a performance improvement of 15.8% in the overall few-shot image classification, from 32.8% to 48.6%, which is quite significant.
A.6 FURTHER DISCUSSION OF THE PROPOSED METHOD
The key idea and motivation behind our dual network design is as follows: one central challenge in network prediction is that we have no way to check if the prediction is accurate or not, since we do not have the ground truth. To address this issue, we develop the prime-dual network structure, where the successfully learned dual network is used as a verification module to check whether the prediction results are good enough. It maps the prediction results back to the currently known data. We establish the self-supervised loss defined on the currently known data and use it as the objective function to perform local search and refinement of the prediction results. This process is unique and contributes significantly to the overall performance. The prime network is the baseline GNN+FT network using support samples to predict query samples. The dual network is another GNN+FT network (in the opposite direction) using query samples to predict support samples. These two networks form a prediction loop, and a self-supervised loss is then derived. We implement this new idea on the GNN+FT few-shot learning method to demonstrate its performance. The proposed idea is generic and can be applied to other methods, even in other prediction and learning problems, which will be studied in our future work. Our proposed idea is new; however, it does introduce additional complexity. According to our estimation, it will add about 40-60% extra complexity on top of the existing baseline, since a majority of the computation, such as feature extraction, does not need to be recomputed during the search process. In our future work, we plan to develop schemes to reduce the complexity of the self-supervised optimization, for example by merging multiple search steps into one execution cycle.
1. What is the focus and contribution of the paper on few-shot learning?
2. What are the strengths of the proposed approach, particularly in terms of the dual network and self-supervised learning framework?
3. What are the weaknesses of the paper, especially regarding the experiment section and notation usage?
4. Do you have any concerns about the separation of self-supervised optimization and self-supervised learning in the ablation study?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper proposes a self-supervised learning framework and a dual network to improve the performance of few-shot learning. The dual network is based on GNN architectures and learns the relationship between the features from the support set and the query set. Experiments show improvements over the baselines on several few-shot learning benchmarks.
Review
Although prediction based on a GNN structure is not novel in few-shot learning, the idea of self-supervision is interesting.
The notation needs improvement. For example, the prediction L in equations (6)-(7) is not consistent with equations (11)-(14), and I would suggest using a different symbol because it can be confused with the loss function.
For Figure 2, what is the difference between the two types of samples with labels? What is the difference between the two types of samples without labels? Although I can understand this from the text, the figure could be improved a bit.
In the ablation study, the authors divide the method into two parts, i.e., self-supervised optimization and self-supervised learning. I do not understand why these two can be separated. Are there any details on how this is done?
To my understanding, the prediction from the GNN is still essentially based on feature similarity. In that case, is there any feature representation visualization for the learned model? I am interested in how the representation changes when self-supervised learning is used.
ICLR | Title
Self-Supervised Prime-Dual Networks for Few-Shot Image Classification
Abstract
We construct a prime-dual network structure for few-shot learning which establishes a commutative relationship between the support set and the query set, as well as a new self-supervision constraint for highly effective few-shot learning. Specifically, the prime network performs the forward label prediction of the query set from the support set, while the dual network performs the reverse label prediction of the support set from the query set. This forward and reserve prediction process with commutated support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. This unique constraint can be used to significantly improve the training performance of few-shot learning through coupled prime and dual network training. It can be also used as an objective function for optimization during the testing stage to refine the query label prediction results. Our extensive experimental results demonstrate that the proposed self-supervised commutative learning and optimization outperforms existing state-of-the-art few-shot learning methods by large margins on various benchmark datasets.
1 INTRODUCTION
Few-shot image classification aims to classify images from novel categories (query samples) based on very few labeled samples from each class (support images) (Hong et al., 2020a; Sun et al., 2021). During the training stage, the few-shot learning (FSL) is given a set of support-query set pairs with class labels. Once successfully trained, the model needs to be tested on unseen classes. The major challenge here is that the number of available support samples N is very small, often N ≤ 5. In an extreme case, N = 1 where it is called one-shot learning. In order to achieve this so-called learnto-learn capability, the FSL needs to capture the inherent visual or semantic relationship between the support samples and query samples, and more importantly, this learned relationship or prediction should be able to generalize well onto unseen classes (Liu et al., 2020d).
A fundamental challenge in prediction is that: if we know entity A and are trying to predict entity B, how do we know if the prediction of B, denoted by Φ(B), is accurate or not? Is there any way that we can verify the accuracy of the prediction Φ(B)? As we know, this is impossible since B has no ground-truth for us to evaluate or verify its prediction accuracy. If we can come up an indirect approach to effectively evaluate the prediction accuracy, it is expected that the learning and prediction performance can be significantly improved.
In this work, we propose to explore a prime-dual commutative network design for effective prediction, specifically for few-shot image classification. As illustrated in Figure 1, the prime network Φ is the original network that learns the forward prediction from A to B̂ = Φ(A). The dual network Γ performs the reverse prediction from B to  = Γ(B). If we cascade these two networks together which establishes a prediction loop from A to B and then back to A, we have
 = Γ(B̂) = Γ(Φ(A)). (1)
Since A is given, which has the ground-truth value, the difference between A and its prime-dual loop prediction result  forms a self-supervision loss
LS = d(A, Â) = d(A,Γ(Φ(A))), (2)
where d is a distance metric function. This self-supervision loss LS can be used to improve the training performance based on the coupling between the prime and dual networks. Furthermore, it can used to verify and adjust the prediction result by minimizing the self-supervision loss.
In this work, we propose to study this prime-dual network design with self-supervision for few-shot learning by exploiting the commutative relationship between the support set (entityA) and the query set (entity B). Specifically, the prime network learns to predict the labels of query samples using the support set with ground-truth labels as training samples. Meanwhile, the dual network learns to predict the labels of the support samples using the query set with ground-truth labels as training samples. For example, in 5-way 1-shot learning, the support set consists of 5 images from 5 classes with only one image per class. The query set also has 5 images from 5 classes. When training the prime and dual networks, the support set and the query set are switched for training samples. This forward and reserve prediction process with commutative support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. The prime-dual networks are jointly trained with the help from the self-supervision loss. This loss is also used during the testing stage to adjust and optimize the prediction results. Our extensive experimental results demonstrate that the proposed self-supervised commutative learning and optimization method outperforms existing state-of-the-art few-shot learning methods by a large margin on various benchmark datasets.
2 RELATED WORK AND UNIQUE CONTRIBUTIONS
Few-shot learning (FSL) aims to recognize instances from unseen categories with few labeled samples. There are three major categories of methods that have been developed for FSL. (1) Data Augmentation is the most direct method for few-shot learning, which explores different approaches to synthesize images to address the issue of few training samples. For example, self-training jigsaw augmentation (Chen et al., 2019) is able to synthesize new images by segmenting and reorganizing labeled and unlabeled gallery images. Mangla et al. (2020) apply self-supervision algorithms augmented with manifold mixup (Verma et al., 2019) for few-shot classification tasks. The F2GAN (Hong et al., 2020b) and MatchingGAN methods (Hong et al., 2020a) use generative adversarial networks (GANs) to construct high-quality samples for new image categories. (2) Optimization-based methods aim to learn a good initial network model for the classifier. This learned model can be then quickly adapted to novel classes using a few labeled samples. MAML (Finn et al., 2017) proposes to train a set of initialization models based on second-order gradients and meta-optimization. TAML (Jamal & Qi, 2019) reduces the bias introduced by the MAML algorithm to enforce equity between the tasks. In the Latent Embedding Optimization (LEO) method (Rusu et al., 2018), gradient-based optimization is performed in a low-dimensional latent space instead of the original high-dimensional parameter space. (3) Metric-based methods aim to learn a good metric space so that samples from novel categories can be effectively distinguished and correctly classified. For example, MatchingNet (Vinyals et al., 2016) applies a recurrent network to calculate the cosine similarity between samples. ProtoNet (Snell et al., 2017) compares features between samples in the Euclidean space. RelationNet (Sung et al., 2018) uses a CNN model and (Garcia & Bruna, 2017) uses the graph convolution network (GNN) to learn the metric relationship.
In this work, we also consider cross-domain FSL. For the cross-domain classification task, the model needs to generalize well from the source domain to a new or unseen target domain without accessing samples from the unseen domain during the training stage. Sun et al. (2021) propose a modelagnostic explanation-guided training method that dynamically finds and emphasizes the features which are important for the predictions. This improves the model generalization capability. To characterize the variation of image feature distribution across different domains, the LFT method (Tseng et al., 2020) learns the noise distribution by adding feature-wise transformation layers to the
image encoder. To avoid over-fitting on the source domain and increase the generalization capability to the target domain, the batch spectral regularization (BSR) method (Liu et al., 2020b) attempts to suppress all singular values of the batch feature matrices during pre-training. Another set of methods (Shankar et al., 2018; Volpi et al., 2018) learn to augment the input data with adversarial learning (Yang et al., 2020b) in order to generalize the task from the source domain to the unseen target domain.
In this work, we propose a commutative prime-dual network design for few-shot learning. In the literature, the mutual dependency and reciprocal relationship between multiple modules have been explored to achieve better performance. For example, (Xu et al., 2020) has developed a reciprocal cross-task architecture for image segmentation, which improves the learning efficiency and generation accuracy by exploiting the commonalities and differences across tasks. Sun et al. (2020) design a reciprocal learning network for human trajectory prediction, which consists of forward and backward prediction neural networks. The reciprocal learning enforces consistency between the forward and backward trajectory prediction, which helps each other to improve the learning performance and achieve higher accuracy. Zhu et al. (2017) design the CycleGAN contains two GANs forming a cycle network that can translate the images of the two domains into each other to achieve style transfer. Liu et al. (2021) develop a Temporal Reciprocal Learning (TRL) approach to fully explore the discriminative information from the disentangled features. Zhang et al. (2021b) design a support-query mutual guidance architecture for few-shot object detection.
Unique Contributions. Compared to existing work in the literature, the major contributions of this work include: (1) We propose a new prime-dual network design to explore the commutative relationship between support and query sets and establish a unique self-supervision constraint for few-shot learning. (2) We incorporate the self-supervision loss into the coupled prime-dual network training to improve the few-shot learning performance. (3) During the test stage, using the dual network to map the prediction results back to the support set domain and using the self-supervision constraint as an objective function, we develop an optimization-based scheme to verify and optimize the performance few-shot learning. (4) Our proposed method has significantly advanced the stateof-the-art performance of few-shot image classification.
3 METHOD
In this section, we present our method of self-supervised prime-dual network (SPDN) learning and optimization for few-shot image classification.
3.1 SELF-SUPERVISED COMMUTATIVE LEARNING
Figure 2 provides an overview of our proposed method of self-supervised commutative learning and optimization for few-shot image classification. In a typical setting of K-way N -shot learning, N labeled image samples from each of the K classes form the support set. For example, in a 5-way 1-shot learning, K = 5 and N = 1. Given a very small support set S = {Skn|1 ≤ k ≤ K, 1 ≤ n ≤ N}, the objective of the FSL is to predict the labels of the query images Q = {Qkm|1 ≤ k ≤ K, 1 ≤ m ≤ M} from the same K classes in M batches During the training stage, the labels of both support and query samples are available. The prime network ΦS→Q for few-shot classification is trained on these support-query sets, aiming to learn and represent the inherent visual
or semantic relationship between the support and query images. Once successfully learned, we will apply this network to unseen classes. Specifically, in the test stage, given a labeled support set S′ = {S′kn|1 ≤ k ≤ K, 1 ≤ n ≤ N} from these K unseen classes, we need to predict the labels for the query set Q′ = {Q′km|1 ≤ k ≤ K, 1 ≤ m ≤M} also from these unseen classes. Therefore, the fundamental challenge of FSL is to characterize and learn the inherent relationship between the support set S and the query set Q. Once learned, we can then shift or transfer this relationship to S′ and Q′ of unseen classes to infer the labels of Q′. In this work, as discussed in the following section, we propose to establish a graph neural network (GNN) to characterize and learn this relationship.
We recognize that, within the framework of few-shot learning, the support set and the query set are in an equal and symmetric position to each other. More specifically, if we can learn to predict the labels of query set Q from support set S, certainly, we can switch their order, predicting the labels of the support set S from the query set Q using the same network architecture. This observation leads to an interesting commutative prime-dual network design for few-shot learning. As illustrated in Figure 2, we introduce a dual network ΓQ→S, which performs the reverse label prediction of the support set S from the query set Q. Let L(S) and L(Q) be the label vectors of S and Q, respectively. Let L̂(S) and L̂(Q) be the predicted labels. The forward prediction by the prime network can be written as
L̂(Q) = ΦS→Q[L(S)], (3)
while the reverse prediction by the dual network can be written as
L̂(S) = ΓQ→S[L(Q)], (4)
If both networks Φ and Γ are well trained, and if we pass the label prediction output of the prime network as input to the dual network, then, we expect that the predicted labels for the support set should be close to its ground-truth. This leads to the following self-supervision loss
LSS = ||L(S)− L̂(S)||2 = ||L(S)− ΓQ→S[L̂(Q)] ||2 (5) = ||L(S)− ΓQ→S[ΦS→Q[L(S)]] ||2.
This self-supervision constraint can be established on both support set and query set, resulting in a coupled prime-dual network training. Figure 3 (a) and (b) shows the training processes for the prime network and the dual network, respectively. Specifically, from the support set S, the prime network learns to predict the labels of the query set Q. As in existing few-shot learning, we have the loss LPQ = ||L̂(Q)−L(Q)||2 between the predicted query labels and their ground-truth values. We then use the query samples and their predicted labels as input to the dual network ΓQ→S, we can predict the labels of the support set L̂(S) and compute the self-supervision loss LPS = ||L̂(S) − L(S)||2. These two losses are combined to form the loss function for training the prime network
LP = ||L̂Φ(Q)− L(Q)||2 + α · ||L̂Φ,Γ(S)− L(S)||2. (6)
α is a weighting parameter whose default value is set to be 0.5 in our experiments. Similarly, for the training of the dual network, as shown in Figure 3(b), its loss function is given by
LD = ||L̂Γ(S)− L(S)||2 + α · ||L̂Γ,Φ(Q)− L(Q)||2. (7)
3.2 GRAPH NEURAL NETWORK FOR FEW-SHOT IMAGE CLASSIFICATION
The proposed prime and dual networks share the same network design, which will be discussed in this section. The only difference between these two networks is that their support and query samples are switched. In the following, we use the prime network as an example to explain its design.
The central task of few-shot learning is to characterize the inherent relationship between the query and support samples, based on which we can infer the labels of the query samples from the support samples (Tseng et al., 2020; Liu et al., 2020b). In this work, we propose to use a graph neural network (GNN) to model and analyze this relationship. InK-wayN -shot learning, givenK classes, each with N support samples {Skn}, we need to learn the prime network to predict the labels for K query samples {Qk}. This implies, in each of the total M training batch, we have K × (N + 1) support samples and query samples. As illustrated in Figure 4(a), we use a backbone network, for example, Resnet-10 or Resnet-12, to extract feature for each of these support and query samples. We denote their features by S = {stkn} and Q = {qtk} where t represents the update iteration index in the GNN. Initially, t = 0. These support-query sample features form the nodes for the GNN, denoted by {xtj |1 ≤ j ≤ J}, J = K × (N + 1), for the simplicity of notations. The edge between two graph nodes represents the correlation ψ(xti,x t j) between nodes x t i and x t j . Note that our GNN has two groups of nodes: support sample nodes and query sample nodes. The support samples nodes have labels while the labels of the query samples need to be predicted by the prime network. If xti and xtj are both support nodes, we have
ψ(xti,x t j) =
{ 1 if L(xti) = L(x t j),
0 if L(xti) 6= L(xtj). (8)
Here, L(·) represents the label of the corresponding support sample. Since the labels for the query nodes are unknown, the correlation for edges linked to these query nodes need to be learned by the GNN. Initially, we set them to be random values between 0 and 1.
Each node of the GNN combines features from these neighboring nodes with the corresponding correlation as weights and updates its own feature by learning a multi-layer perceptron (MLP) network Go[·] as follows
xt+1j = Go
[ J∑
i=1
xtj · ψ(xti,xtj)
] . (9)
At each edge, another MLP network Ge[·, ·] is learned to predict the correlation between two graph nodes,
ψ(xti,x t j) = Ge[xti,xtj ], (10)
whose ground-truth values are obtained using the scheme discussed in the above. The feature generated by the prime GNN is then passed to a classification network to predict the query labels. Both the prime and dual GNNs are jointly trained with their final classification networks.
3.3 SELF-SUPERVISED OPTIMIZATION OF FEW-SHOT IMAGE CLASSIFICATION
Besides improving the training performance through mutual enforcement, the proposed selfsupervised prime-dual network design can be also used in the testing stage to optimize the label prediction of query samples. Specifically, we can use the dual network to refine and optimize the label prediction results obtained by the prime network. As illustrated in Figure 4(b), given a support
set S and a query set Q, the support set has class labels L(S). Let L̂(Q) be the prediction result, the output of the softmax layer of the classification network. In existing approaches of few-shot learning or other network prediction scenarios, we are not able to verify if the prediction is accurate or not since the ground-truth is not available for test samples. However, in this work, with the dual network ΓQ→S being successfully trained, we can use the prediction result L̂(Q) as input to the dual network to predict the class labels of the original support samples
$$\hat{L}(S) = \Gamma_{Q \to S}[\hat{L}(Q)]. \qquad (11)$$
Note that these support samples DO have ground-truth labels L(S). Define the label prediction error by
$$E_l(S) = \| L(S) - \hat{L}(S) \|^2. \qquad (12)$$
We assume that the correct query sample labels L*(Q) lie within a neighborhood of the prediction result L̂(Q). Let Ω be the set of candidate query-label assignments within this neighborhood of L̂(Q). For example,
$$\Omega = \{\tilde{L}(Q) : \|\tilde{L}(Q) - \hat{L}(Q)\|^2 \le \Delta\}, \qquad (13)$$
where ∆ is a given threshold for the label vector distance. We then search the candidate query labels L̃(Q) within the neighborhood set Ω to minimize the support label prediction error El(S) in (12). The optimized prediction of the query samples is given by
$$L^*(Q) = \arg\min_{\tilde{L}(Q) \in \Omega} \| L(S) - \Gamma_{Q \to S}[\tilde{L}(Q)] \|^2. \qquad (14)$$
From the experimental results, we will see that this unique self-supervised optimization of the query label prediction is able to significantly improve the few-shot image classification performance.
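A sketch of how this test-time search could be implemented is shown below. The dual_net interface (query images plus a candidate query-label assignment in, predicted support-label probabilities out), the way candidates are generated from the least-confident queries, and all names are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F


def sso_refine(dual_net, support_imgs, support_labels, query_imgs, q_probs,
               num_classes=5, num_candidates=5):
    """Eqs. (11)-(14): search candidate query labels near the prime prediction and
    keep the assignment with the smallest support-label error E_l(S)."""
    pred = q_probs.argmax(dim=-1)                       # \hat{L}(Q) from the prime network
    runner_up = q_probs.topk(2, dim=-1).indices[:, 1]   # second-best class per query
    confidence = q_probs.max(dim=-1).values

    # Candidate set Omega (Eq. 13): the prediction itself, plus label vectors obtained
    # by flipping each of the least-confident queries to its runner-up class.
    candidates = [pred.clone()]
    for q in confidence.argsort()[:num_candidates - 1]:
        cand = pred.clone()
        cand[q] = runner_up[q]
        candidates.append(cand)

    s_true = F.one_hot(support_labels, num_classes).float()
    best_err, best = float("inf"), pred
    for cand in candidates:
        cand_probs = F.one_hot(cand, num_classes).float()
        s_pred = dual_net(query_imgs, cand_probs, support_imgs)   # \hat{L}(S), Eq. (11)
        err = torch.norm(s_true - s_pred).item()                  # E_l(S), Eq. (12)
        if err < best_err:
            best_err, best = err, cand                            # arg min over Omega, Eq. (14)
    return best
```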
4 EXPERIMENTAL RESULTS
In this section, we provide experimental results on various benchmark datasets to demonstrate the performance of our proposed SPDN method for few-shot learning.
4.1 IMPLEMENTATION DETAILS
We use ResNet-10 as the backbone of our feature encoder. The input images are resized to 224×224 and the output feature vector size is 1 × 1 × 512. We choose the Adam optimizer with a learning rate of 0.01 and a batch size of 64, and train for 400 epochs. In the episodic meta-training stage, we use the graph neural network (GNN) discussed in the previous section to generate the feature embedding for query samples. The prime network ΦS→Q and the dual network ΓQ→S are jointly trained; both are trained for 400 epochs with 100 episodes per epoch. In each episode, we randomly select K categories (K=5, 5-way) from the training set. Then, from each category, we randomly select N samples (N=1 or 5 for 1-shot or 5-shot) for the support set and additional samples for the query set. In the test stage, we use the average of 1000 trials as the final result for all experiments. For each trial, we randomly select K categories from the test set. Similar to the training stage, N (1 or 5) samples per category are randomly selected as the support set and 15 samples per category as the query set.
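A minimal sketch of the episodic sampling described above is given below; dataset_by_class is an assumed mapping from class name to a list of that class's images, and the function name is illustrative.

```python
import random


def sample_episode(dataset_by_class, k_way=5, n_shot=1, n_query=15):
    """Builds one K-way N-shot episode: N support and 15 query images per class."""
    classes = random.sample(list(dataset_by_class), k_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        images = random.sample(dataset_by_class[cls], n_shot + n_query)
        support += [(img, label) for img in images[:n_shot]]
        query += [(img, label) for img in images[n_shot:]]
    return support, query
```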
4.2 DATASETS
Five benchmark datasets are used for performance evaluation and comparison with other methods in the literature: Mini-ImageNet (Ravi & Larochelle, 2016), CUB (Wah et al., 2011), Cars (Krause et al., 2013), Places (Zhou et al., 2017) and Plantae (Van Horn et al., 2018). More details about the dataset settings are presented in Appendix A.1.
4.3 RESULTS
To demonstrate the performance of our SPDN method, we conduct a series of experiments under different few-shot classification settings. In the literature, there are two major scenarios for testing FSL methods: (1) intra-domain learning, where the training classes and test classes are from the same object domain, for example, both from the Mini-ImageNet classes, and (2) cross-domain learning, where the FSL model is trained on one dataset (e.g., Mini-ImageNet) and the testing is performed on another dataset (e.g., CUB). Certainly, the cross-domain scenario is more challenging.
4.3.1 INTRA-DOMAIN FSL RESULTS.
First, we conduct intra-domain FSL experiments on Mini-ImageNet. Table 1 summarizes the performance comparison with state-of-the-art FSL methods, mainly developed in the past two years. We also list the backbone network used for extracting the features of the input images. We can see that, for the 5-way 1-shot image classification task, our method (with a ResNet-10 backbone) outperforms the current best method (with a ResNet-12 backbone) from (Zhang et al., 2021a) by 5.42%. Another method which uses the same ResNet-10 backbone is the GNN+FT method (Tseng et al., 2020). Our method outperforms this method by 12.23%. For the 5-way 5-shot classification task, our method outperforms the current best by more than 5%, which is quite significant.
Second, we evaluate our method on intra-domain fine-grained image classification tasks on the CUB dataset. In this case, the FSL model needs to learn subtle features to distinguish objects from closely related categories. Table 2 summarizes the performance results on the 5-way 1-shot and 5-way 5-shot classification tasks. We can see that, for the one-shot classification task, our method outperforms the current best method, FRN (Wertheimer et al., 2021), by 6.72%. For the 5-shot classification task, our method improves the classification accuracy by 2.80%.
4.3.2 CROSS-DOMAIN FSL RESULTS.
Cross-domain few-shot learning is more challenging. Following existing methods, we train the model on the Mini-ImageNet object domain and test the trained model on other domains, including the CUB, Cars, Places and Plantae datasets. Table 3 summarizes the results for 5-way 1-shot classification (top) and 5-way 5-shot classification (bottom). We can see that our SPDN method dramatically improves the classification accuracy on these cross-domain FSL tasks. For example, on the Cars dataset, our method outperforms the current best TPN+ATA (Wang & Deng, 2021) by 4.15%. On the Plantae dataset, the performance gain is 5.59%, which is quite significant. For the 5-way 5-shot classification task, the performance gains on these datasets are also very significant,
between 0.37% and 8.68%. This demonstrates that our SPDN method is able to learn the inherent visual relationship between the support and query samples and can generalize very well onto unseen classes in new object domains.
4.4 ABLATION STUDIES
In this section, we conduct ablation studies to further understand the proposed SPDN method and analyze the contributions of major algorithm components.
From an algorithm design perspective, our SPDN method has two major components: self-supervised learning (SSL) of the prime and dual networks, and self-supervised optimization (SSO) of the predicted query labels. We adopt the single GNN-based model (Tseng et al., 2020) as the baseline of our method, and the SSL and SSO algorithm components are added onto this baseline. To understand the performance of these two components, in the following experiment, we train the SPDN method using training samples from Mini-ImageNet. We conduct intra-domain few-shot image classification on Mini-ImageNet and cross-domain few-shot image classification on the CUB, Cars, Places, and Plantae datasets. Table 4 summarizes the results for 5-way 1-shot and 5-way 5-shot image classification. The second column shows the intra-domain few-shot image classification results on Mini-ImageNet. The remaining columns show the cross-domain classification results. We can see that the self-supervised prime-dual network training is able to improve the classification accuracy by up to 1.8%. The performance gain achieved by the self-supervised optimization of the predicted query labels is much more significant, ranging from 7% to 10%. This dramatic performance gain is a surprise to us. In the following, we provide additional ablation studies to further understand the SSO algorithm module. Compared to the SSO module, the performance improvement from the first SSL module is relatively small. This is because the major new contribution of the SSL module is the self-supervised loss, which aims to further improve learning on the baseline GNN. However, it also successfully trains a dual network, which plays a very important role in the second SSO module: it is used to search for and optimize the predicted labels of the query samples, resulting in a major performance gain. We discuss the specific optimization results of our self-supervised optimization (SSO) module through an experiment in Appendix A.3.
In the following experiments, we attempt to further understand the behavior and performance of the SSO algorithm module. First, we conduct an experiment to examine the search and optimization process of SSO. Suppose L(Q) denotes the true labels of the query samples. Let
$$\tilde{L}(Q) = L(Q) + \lambda \cdot \Delta L, \qquad (15)$$
be a label vector within the neighborhood of L(Q). Here, ∆L is a pre-generated random vector and λ is a disturbance coefficient that controls the amount of variation. With the label vector L̃(Q) and the query samples, we can predict the labels of the support samples using the dual network. Then, we can compute the prediction error El(S) as in (12). Figure 5(a) shows the label prediction error El(S) as a function of λ. This experiment was performed on 5-way 1-shot image classification on the CUB dataset. We can see that the minimum error is achieved at λ = 0. This implies that the ground-truth labels of the query samples yield the minimum self-supervised label error El(S). This is a very important property of our SSO method. It suggests that, when the predicted query labels are not correct and the ground-truth labels are within their neighborhood, we can use the SSO method to search for these ground-truth labels using the minimum self-supervised support label error criterion.
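A compact sketch of this perturbation experiment, reusing the dual-network interface assumed earlier, could look as follows; the function name, the label encoding, and the choice of a Gaussian disturbance are illustrative assumptions.

```python
import numpy as np
import torch


def support_error_vs_lambda(dual_net, query_imgs, q_true_onehot, support_imgs,
                            s_true_onehot, lambdas=np.linspace(-1.0, 1.0, 21)):
    """Perturbs the ground-truth query labels by lambda * Delta_L (Eq. 15) and
    records the self-supervised support-label error E_l(S) for each lambda."""
    delta = torch.randn_like(q_true_onehot)   # pre-generated random disturbance Delta_L
    errors = []
    for lam in lambdas:
        perturbed = q_true_onehot + float(lam) * delta
        s_pred = dual_net(query_imgs, perturbed, support_imgs)
        errors.append(torch.norm(s_true_onehot - s_pred).item())
    return list(lambdas), errors
```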
During our self-supervised optimization of the predicted query labels, we choose a small neighborhood Ω around the predicted query labels L̂(Q) with a maximum distance ∆. This ∆ controls the number of search positions in the label space. If we search more positions, i.e., more candidate query labels, we can obtain smaller self-supervised label errors El(S) on the support samples. Figure 5(b) plots the value of El(S) as a function of the number of search positions. This experiment was performed on 5-way 1-shot image classification on the CUB dataset. We can see that the error drops significantly with the number of searched positions. We recognize that, for each search position, we need to run the dual network once. This does introduce extra computational complexity, but the amount of performance gain is very appealing. In our experiments, we limited the number of search positions to 5, i.e., the nearest 5 label vectors (integer vectors) to the predicted query label.
5 CONCLUSION
In this work, we have developed a novel prime-dual network structure for few-shot learning which explores the commutative relationship between the support set and the query set. The prime network performs the forward label prediction from the support set to the query set, while the dual network performs the reverse label prediction from the query set to the support set. This forward and reverse prediction process with commutative support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. We have established a self-supervised support error metric and used the learned dual network to optimize the predicted query labels during the testing stage. Our extensive experimental results on both intra-domain and cross-domain few-shot image classification have demonstrated that the proposed self-supervised prime-dual network learning and optimization significantly improve the performance of few-shot learning, especially for cross-domain few-shot learning tasks. We have also conducted detailed ablation studies to provide an in-depth understanding of the significant performance gain achieved by the self-supervised optimization process. The self-supervised prime-dual network design is general and can be naturally incorporated into other prediction and learning methods.
A APPENDIX
In this appendix, we provide more details of experimental settings and additional results to further understand the performance of our proposed method.
A.1 DATASETS
In our experiments, the following 5 datasets are used for performance evaluations and comparisons.
(1) Mini-ImageNet consists of 100 categories randomly selected from ImageNet (Deng et al., 2009); each category has 600 samples of size 84 × 84. The 100 categories are divided into a training set with 64 categories, a validation set with 16 categories, and a testing set with 20 categories. (2) CUB is a fine-grained dataset with 200 bird species mainly living in North America (Wah et al., 2011). We randomly split the dataset into 100, 50, and 50 classes for training, validation, and testing, respectively. (3) Cars contains 16,185 images of 207 fine-grained car types, which consist of 10 BMW models and 197 other car types (Krause et al., 2013). We randomly selected 196 categories for the experiment, including 98 for training, 49 for validation, and 49 for testing. (4) Places is a dataset of scene images (Zhou et al., 2017), containing 73,000 training images from 365 scene categories, which are divided into 183 categories for training, 91 for validation, and 91 for testing. (5) Plantae is a subset of the iNat2017 dataset (Van Horn et al., 2018), which contains 200 types of plants and a total of 47,242 images. We split them into 100 classes for training, 50 for validation, and 50 for testing.
The Mini-ImageNet is the most popular benchmark for few-shot classification. It is usually used as a baseline dataset for model training. The CUB dataset is more frequently used for few-shot fine-grained classification tasks. The Cars, Places and Plantae datasets are used for model testing in cross-domain few-shot classification tasks.
A.2 VISUALIZATION OF FEATURES IN SELF-SUPERVISED LEARNING.
The proposed SPDN method incorporates the self-supervised constraint into the training process, aiming to improve the quality of the learned features and the generalization capability of few-shot learning. Figure 6 shows the t-SNE visualization of the learned features of 100 samples per class from the Mini-ImageNet dataset in a 5-way 5-shot setting. We can see that, with self-supervised learning, the features of each class are more concentrated into clusters.
A.3 SELF-SUPERVISED OPTIMIZATION (SSO) MODULE
The proposed self-supervised optimization (SSO) module aims to correct the predicted query labels. In the following experiment, we examine how many incorrect query-label predictions are successfully corrected by the SSO module. Table 5 shows the results for 5-way 1-shot classification on the CUB dataset. We keep track of 75 randomly selected query samples. If we predict the query labels using only the prime network, without the SSO (before SSO), the number of query samples with incorrect labels is 57 and the number of correct ones is 18, which
is very low. After we apply the SSO, the number of query samples with incorrect labels is reduced to 45, and the number of correct ones increases to 30. We can examine this correction process in more detail. The SSO module has corrected the labels of 15 samples, as shown in the third row (Incorrect → Correct Label) of the table. However, it has also mis-corrected the labels of 3 samples, as shown in the last row (Correct → Incorrect Label) of the table. In our experiments, we have observed that the SSO module corrects the labels of many more query samples than it mis-corrects. This implies that the dual network and the self-supervision constraint are working very well for few-shot learning, and it explains the significant performance gain achieved by the proposed self-supervised prime-dual network method.
A.4 EXTENSION TO N-SHOT IMAGE CLASSIFICATION
In the main paper, we have used 5-way 1-shot image classification as an example to present our method of self-supervised prime-dual network (SPDN) learning and optimization for few-shot image classification. This method can be naturally extended to generic K-way N-shot image classification. Figure 7 illustrates an example of the extension to 5-way 5-shot. In this case, each class, in both the training and test stages, has 5 support samples and one query sample. In the prime network, we use these 5 support samples to predict the label of the query sample. To ensure that the dual network shares the same network structure as the prime network, for the reverse prediction we randomly select one sample (denoted by s0) from the support set and switch it with the query sample q0. During the training and inference of the dual network, this updated support set is used to predict the label of s0, which is then compared to its ground-truth label to compute the self-supervised loss. This loss is used for joint prime-dual network training, as well as for the self-supervised optimization of the label prediction for the query sample q0.
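A small sketch of this swap, assuming each sample is stored as an (image, label) pair and using an illustrative function name, is given below.

```python
import random


def build_dual_episode(support, q0):
    """Swap one randomly chosen support sample s0 with the query q0, so the dual
    network keeps the K-way N-shot input structure while predicting s0's label."""
    support = list(support)              # copy; each entry is an (image, label) pair
    idx = random.randrange(len(support))
    s0 = support[idx]
    support[idx] = q0                    # q0 takes s0's place in the support set
    return support, s0                   # s0 is now the sample to be predicted
```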
A.5 FURTHER UNDERSTANDING OF THE SELF-SUPERVISED OPTIMIZATION OF QUERY LABEL PREDICTION
In our proposed SPDN method, the self-supervised optimization of the query label prediction plays an important role and improves the performance significantly. In this section, we provide more experimental results to demonstrate and further understand the performance of this algorithm module. Figure 8 shows 6 examples of 5-way 1-shot image classification. Initially, the predicted labels for these query samples are incorrect. Then, we perform a self-supervised search of the query labels within the neighborhood of the predicted labels. We use these candidate labels as input to the dual network to predict the labels of the support samples. The label prediction error of the support
samples is used as the optimization objective. In Figure 8, under each query sample, we show how the optimization objective (the support label error) decreases with the number of searched candidate query labels. These results show that it is sufficient to search 5-8 candidate query label vectors.
It should be noted that the self-supervised optimization of the query label prediction can correct incorrect label predictions, adjusting them into correct ones. Certainly, it can also make mistakes and mis-correct the query label prediction, adjusting correct label predictions into incorrect ones. However, the probability of mis-correction is much lower. For example, Table 6 shows the percentages of correct and incorrect adjustments made by the optimization module on the Cars dataset. Specifically, the percentage of correct adjustments, from incorrect query labels to correct ones, is 21.6%. In the meantime, the percentage of incorrect adjustments is 5.7%. This results in a performance improvement of 15.8% in the overall few-shot image classification, from 32.8% to 48.6%, which is quite significant.
A.6 FURTHER DISCUSSION OF THE PROPOSED METHOD
The key idea and motivation behind our dual network design is as follows: one central challenge in network prediction is that we have no way to check whether a prediction is accurate, since we do not have the ground truth. To address this issue, we develop the prime-dual network structure, where the successfully learned dual network is used as a verification module to verify whether the prediction results are good enough. It maps the prediction results back to the currently known data. We establish a self-supervised loss defined on this known data and use it as the objective function to perform a local search and refinement of the prediction results. This process is unique and contributes significantly to the overall performance. The prime network is the baseline GNN+FT network using support samples to predict query samples. The dual network is another GNN+FT network (in the opposite direction) using query samples to predict support samples. These two networks form a prediction loop from which a self-supervised loss is then derived. We implement this new idea on the GNN+FT few-shot learning method to demonstrate its performance. The proposed idea is generic and can be applied to other methods, and even to other prediction and learning problems, which will be studied in our future work. Our proposed idea is new; however, it does introduce additional complexity. According to our estimation, it adds about 40-60% extra complexity on top of the existing baseline, since a majority of the computation, such as feature extraction, does not need to be recomputed during the search process. In our future work, we plan to develop schemes to reduce the complexity of the self-supervised optimization, for example by merging multiple search steps into one execution cycle. | 1. What is the main contribution of the paper in few-shot image classification?
2. What are the strengths of the proposed method, particularly in its simplicity and experimental results?
3. What are the weaknesses of the paper regarding its connection to prior works, specifically CycleGAN?
4. How does the reviewer interpret the ablation study results, and what implications do they have for the significance of SSL in few-shot learning?
5. Do the comparison and optimization methods used by other state-of-the-art approaches affect their performance gains, and how does this relate to the fairness of comparing results? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a prime-dual network for few-shot image classification, where the support set and query set are used in a flipped manner so that the labeled samples in the support set can be compared with themselves to determine a loss for the network update.
Review
STRENGTH The proposed method is easy to follow and understand, and the experimental results are promising when directly compared with state-of-the-art methods.
WEAKNESS
This method reminds me of the cycle consistency loss in the well-known CycleGAN approach, but since CycleGAN is not referenced and discussed in the paper, I was wondering whether I misunderstood that, or is there a reason for it? Although it is a distantly related work in terms of applications, the formulation of the problem and the loss functions seem quite relevant. If that is the case, it would be desirable to discuss the connection to prior work for better justification.
In the ablation study, it appears that simply doing SSL does not help much compared with the baseline approach. Does this mean the discrepancy between training classes and unseen testing classes is still the dominant reason for the generally low accuracy in few-shot learning tasks? If so, does it indicate that the SSL stage is not the major concern in such a setting, and one should focus more on addressing the query and support sets in testing classes?
Following 2, if SSO is more essential for the performance gain, is it a fair comparison with other SOTA results? In other words, did other methods go through an optimization stage on unseen testing classes, or do some of them in fact not rely on updates during testing? |
ICLR | Title
Self-Supervised Prime-Dual Networks for Few-Shot Image Classification
Abstract
We construct a prime-dual network structure for few-shot learning which establishes a commutative relationship between the support set and the query set, as well as a new self-supervision constraint for highly effective few-shot learning. Specifically, the prime network performs the forward label prediction of the query set from the support set, while the dual network performs the reverse label prediction of the support set from the query set. This forward and reserve prediction process with commutated support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. This unique constraint can be used to significantly improve the training performance of few-shot learning through coupled prime and dual network training. It can be also used as an objective function for optimization during the testing stage to refine the query label prediction results. Our extensive experimental results demonstrate that the proposed self-supervised commutative learning and optimization outperforms existing state-of-the-art few-shot learning methods by large margins on various benchmark datasets.
1 INTRODUCTION
Few-shot image classification aims to classify images from novel categories (query samples) based on very few labeled samples from each class (support images) (Hong et al., 2020a; Sun et al., 2021). During the training stage, the few-shot learning (FSL) is given a set of support-query set pairs with class labels. Once successfully trained, the model needs to be tested on unseen classes. The major challenge here is that the number of available support samples N is very small, often N ≤ 5. In an extreme case, N = 1 where it is called one-shot learning. In order to achieve this so-called learnto-learn capability, the FSL needs to capture the inherent visual or semantic relationship between the support samples and query samples, and more importantly, this learned relationship or prediction should be able to generalize well onto unseen classes (Liu et al., 2020d).
A fundamental challenge in prediction is that: if we know entity A and are trying to predict entity B, how do we know if the prediction of B, denoted by Φ(B), is accurate or not? Is there any way that we can verify the accuracy of the prediction Φ(B)? As we know, this is impossible since B has no ground-truth for us to evaluate or verify its prediction accuracy. If we can come up an indirect approach to effectively evaluate the prediction accuracy, it is expected that the learning and prediction performance can be significantly improved.
In this work, we propose to explore a prime-dual commutative network design for effective prediction, specifically for few-shot image classification. As illustrated in Figure 1, the prime network Φ is the original network that learns the forward prediction from A to B̂ = Φ(A). The dual network Γ performs the reverse prediction from B to  = Γ(B). If we cascade these two networks together which establishes a prediction loop from A to B and then back to A, we have
 = Γ(B̂) = Γ(Φ(A)). (1)
Since A is given, which has the ground-truth value, the difference between A and its prime-dual loop prediction result  forms a self-supervision loss
LS = d(A, Â) = d(A,Γ(Φ(A))), (2)
where d is a distance metric function. This self-supervision loss LS can be used to improve the training performance based on the coupling between the prime and dual networks. Furthermore, it can used to verify and adjust the prediction result by minimizing the self-supervision loss.
In this work, we propose to study this prime-dual network design with self-supervision for few-shot learning by exploiting the commutative relationship between the support set (entityA) and the query set (entity B). Specifically, the prime network learns to predict the labels of query samples using the support set with ground-truth labels as training samples. Meanwhile, the dual network learns to predict the labels of the support samples using the query set with ground-truth labels as training samples. For example, in 5-way 1-shot learning, the support set consists of 5 images from 5 classes with only one image per class. The query set also has 5 images from 5 classes. When training the prime and dual networks, the support set and the query set are switched for training samples. This forward and reserve prediction process with commutative support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. The prime-dual networks are jointly trained with the help from the self-supervision loss. This loss is also used during the testing stage to adjust and optimize the prediction results. Our extensive experimental results demonstrate that the proposed self-supervised commutative learning and optimization method outperforms existing state-of-the-art few-shot learning methods by a large margin on various benchmark datasets.
2 RELATED WORK AND UNIQUE CONTRIBUTIONS
Few-shot learning (FSL) aims to recognize instances from unseen categories with few labeled samples. There are three major categories of methods that have been developed for FSL. (1) Data Augmentation is the most direct method for few-shot learning, which explores different approaches to synthesize images to address the issue of few training samples. For example, self-training jigsaw augmentation (Chen et al., 2019) is able to synthesize new images by segmenting and reorganizing labeled and unlabeled gallery images. Mangla et al. (2020) apply self-supervision algorithms augmented with manifold mixup (Verma et al., 2019) for few-shot classification tasks. The F2GAN (Hong et al., 2020b) and MatchingGAN methods (Hong et al., 2020a) use generative adversarial networks (GANs) to construct high-quality samples for new image categories. (2) Optimization-based methods aim to learn a good initial network model for the classifier. This learned model can be then quickly adapted to novel classes using a few labeled samples. MAML (Finn et al., 2017) proposes to train a set of initialization models based on second-order gradients and meta-optimization. TAML (Jamal & Qi, 2019) reduces the bias introduced by the MAML algorithm to enforce equity between the tasks. In the Latent Embedding Optimization (LEO) method (Rusu et al., 2018), gradient-based optimization is performed in a low-dimensional latent space instead of the original high-dimensional parameter space. (3) Metric-based methods aim to learn a good metric space so that samples from novel categories can be effectively distinguished and correctly classified. For example, MatchingNet (Vinyals et al., 2016) applies a recurrent network to calculate the cosine similarity between samples. ProtoNet (Snell et al., 2017) compares features between samples in the Euclidean space. RelationNet (Sung et al., 2018) uses a CNN model and (Garcia & Bruna, 2017) uses the graph convolution network (GNN) to learn the metric relationship.
In this work, we also consider cross-domain FSL. For the cross-domain classification task, the model needs to generalize well from the source domain to a new or unseen target domain without accessing samples from the unseen domain during the training stage. Sun et al. (2021) propose a modelagnostic explanation-guided training method that dynamically finds and emphasizes the features which are important for the predictions. This improves the model generalization capability. To characterize the variation of image feature distribution across different domains, the LFT method (Tseng et al., 2020) learns the noise distribution by adding feature-wise transformation layers to the
image encoder. To avoid over-fitting on the source domain and increase the generalization capability to the target domain, the batch spectral regularization (BSR) method (Liu et al., 2020b) attempts to suppress all singular values of the batch feature matrices during pre-training. Another set of methods (Shankar et al., 2018; Volpi et al., 2018) learn to augment the input data with adversarial learning (Yang et al., 2020b) in order to generalize the task from the source domain to the unseen target domain.
In this work, we propose a commutative prime-dual network design for few-shot learning. In the literature, the mutual dependency and reciprocal relationship between multiple modules have been explored to achieve better performance. For example, (Xu et al., 2020) has developed a reciprocal cross-task architecture for image segmentation, which improves the learning efficiency and generation accuracy by exploiting the commonalities and differences across tasks. Sun et al. (2020) design a reciprocal learning network for human trajectory prediction, which consists of forward and backward prediction neural networks. The reciprocal learning enforces consistency between the forward and backward trajectory prediction, which helps each other to improve the learning performance and achieve higher accuracy. Zhu et al. (2017) design the CycleGAN contains two GANs forming a cycle network that can translate the images of the two domains into each other to achieve style transfer. Liu et al. (2021) develop a Temporal Reciprocal Learning (TRL) approach to fully explore the discriminative information from the disentangled features. Zhang et al. (2021b) design a support-query mutual guidance architecture for few-shot object detection.
Unique Contributions. Compared to existing work in the literature, the major contributions of this work include: (1) We propose a new prime-dual network design to explore the commutative relationship between support and query sets and establish a unique self-supervision constraint for few-shot learning. (2) We incorporate the self-supervision loss into the coupled prime-dual network training to improve the few-shot learning performance. (3) During the test stage, using the dual network to map the prediction results back to the support set domain and using the self-supervision constraint as an objective function, we develop an optimization-based scheme to verify and optimize the performance few-shot learning. (4) Our proposed method has significantly advanced the stateof-the-art performance of few-shot image classification.
3 METHOD
In this section, we present our method of self-supervised prime-dual network (SPDN) learning and optimization for few-shot image classification.
3.1 SELF-SUPERVISED COMMUTATIVE LEARNING
Figure 2 provides an overview of our proposed method of self-supervised commutative learning and optimization for few-shot image classification. In a typical setting of K-way N -shot learning, N labeled image samples from each of the K classes form the support set. For example, in a 5-way 1-shot learning, K = 5 and N = 1. Given a very small support set S = {Skn|1 ≤ k ≤ K, 1 ≤ n ≤ N}, the objective of the FSL is to predict the labels of the query images Q = {Qkm|1 ≤ k ≤ K, 1 ≤ m ≤ M} from the same K classes in M batches During the training stage, the labels of both support and query samples are available. The prime network ΦS→Q for few-shot classification is trained on these support-query sets, aiming to learn and represent the inherent visual
or semantic relationship between the support and query images. Once successfully learned, we will apply this network to unseen classes. Specifically, in the test stage, given a labeled support set S′ = {S′kn|1 ≤ k ≤ K, 1 ≤ n ≤ N} from these K unseen classes, we need to predict the labels for the query set Q′ = {Q′km|1 ≤ k ≤ K, 1 ≤ m ≤M} also from these unseen classes. Therefore, the fundamental challenge of FSL is to characterize and learn the inherent relationship between the support set S and the query set Q. Once learned, we can then shift or transfer this relationship to S′ and Q′ of unseen classes to infer the labels of Q′. In this work, as discussed in the following section, we propose to establish a graph neural network (GNN) to characterize and learn this relationship.
We recognize that, within the framework of few-shot learning, the support set and the query set are in an equal and symmetric position to each other. More specifically, if we can learn to predict the labels of query set Q from support set S, certainly, we can switch their order, predicting the labels of the support set S from the query set Q using the same network architecture. This observation leads to an interesting commutative prime-dual network design for few-shot learning. As illustrated in Figure 2, we introduce a dual network ΓQ→S, which performs the reverse label prediction of the support set S from the query set Q. Let L(S) and L(Q) be the label vectors of S and Q, respectively. Let L̂(S) and L̂(Q) be the predicted labels. The forward prediction by the prime network can be written as
L̂(Q) = ΦS→Q[L(S)], (3)
while the reverse prediction by the dual network can be written as
L̂(S) = ΓQ→S[L(Q)], (4)
If both networks Φ and Γ are well trained, and if we pass the label prediction output of the prime network as input to the dual network, then, we expect that the predicted labels for the support set should be close to its ground-truth. This leads to the following self-supervision loss
LSS = ||L(S)− L̂(S)||2 = ||L(S)− ΓQ→S[L̂(Q)] ||2 (5) = ||L(S)− ΓQ→S[ΦS→Q[L(S)]] ||2.
This self-supervision constraint can be established on both support set and query set, resulting in a coupled prime-dual network training. Figure 3 (a) and (b) shows the training processes for the prime network and the dual network, respectively. Specifically, from the support set S, the prime network learns to predict the labels of the query set Q. As in existing few-shot learning, we have the loss LPQ = ||L̂(Q)−L(Q)||2 between the predicted query labels and their ground-truth values. We then use the query samples and their predicted labels as input to the dual network ΓQ→S, we can predict the labels of the support set L̂(S) and compute the self-supervision loss LPS = ||L̂(S) − L(S)||2. These two losses are combined to form the loss function for training the prime network
LP = ||L̂Φ(Q)− L(Q)||2 + α · ||L̂Φ,Γ(S)− L(S)||2. (6)
α is a weighting parameter whose default value is set to be 0.5 in our experiments. Similarly, for the training of the dual network, as shown in Figure 3(b), its loss function is given by
LD = ||L̂Γ(S)− L(S)||2 + α · ||L̂Γ,Φ(Q)− L(Q)||2. (7)
3.2 GRAPH NEURAL NETWORK FOR FEW-SHOT IMAGE CLASSIFICATION
The proposed prime and dual networks share the same network design, which will be discussed in this section. The only difference between these two networks is that their support and query samples are switched. In the following, we use the prime network as an example to explain its design.
The central task of few-shot learning is to characterize the inherent relationship between the query and support samples, based on which we can infer the labels of the query samples from the support samples (Tseng et al., 2020; Liu et al., 2020b). In this work, we propose to use a graph neural network (GNN) to model and analyze this relationship. InK-wayN -shot learning, givenK classes, each with N support samples {Skn}, we need to learn the prime network to predict the labels for K query samples {Qk}. This implies, in each of the total M training batch, we have K × (N + 1) support samples and query samples. As illustrated in Figure 4(a), we use a backbone network, for example, Resnet-10 or Resnet-12, to extract feature for each of these support and query samples. We denote their features by S = {stkn} and Q = {qtk} where t represents the update iteration index in the GNN. Initially, t = 0. These support-query sample features form the nodes for the GNN, denoted by {xtj |1 ≤ j ≤ J}, J = K × (N + 1), for the simplicity of notations. The edge between two graph nodes represents the correlation ψ(xti,x t j) between nodes x t i and x t j . Note that our GNN has two groups of nodes: support sample nodes and query sample nodes. The support samples nodes have labels while the labels of the query samples need to be predicted by the prime network. If xti and xtj are both support nodes, we have
ψ(xti,x t j) =
{ 1 if L(xti) = L(x t j),
0 if L(xti) 6= L(xtj). (8)
Here, L(·) represents the label of the corresponding support sample. Since the labels for the query nodes are unknown, the correlation for edges linked to these query nodes need to be learned by the GNN. Initially, we set them to be random values between 0 and 1.
Each node of the GNN combines features from these neighboring nodes with the corresponding correlation as weights and updates its own feature by learning a multi-layer perceptron (MLP) network Go[·] as follows
xt+1j = Go
[ J∑
i=1
xtj · ψ(xti,xtj)
] . (9)
At each edge, another MLP network Ge[·, ·] is learned to predict the correlation between two graph nodes,
ψ(xti,x t j) = Ge[xti,xtj ], (10)
whose ground-truth values are obtained using the scheme discussed in the above. The feature generated by the prime GNN is then passed to a classification network to predict the query labels. Both the prime and dual GNNs are jointly trained with their final classification networks.
3.3 SELF-SUPERVISED OPTIMIZATION OF FEW-SHOT IMAGE CLASSIFICATION
Besides improving the training performance through mutual enforcement, the proposed selfsupervised prime-dual network design can be also used in the testing stage to optimize the label prediction of query samples. Specifically, we can use the dual network to refine and optimize the label prediction results obtained by the prime network. As illustrated in Figure 4(b), given a support
set S and a query set Q, the support set has class labels L(S). Let L̂(Q) be the prediction result, the output of the softmax layer of the classification network. In existing approaches of few-shot learning or other network prediction scenarios, we are not able to verify if the prediction is accurate or not since the ground-truth is not available for test samples. However, in this work, with the dual network ΓQ→S being successfully trained, we can use the prediction result L̂(Q) as input to the dual network to predict the class labels of the original support samples
L̂(S) = ΓQ→S[L̂(Q)]. (11)
Note that these support samples DO have ground-truth labels L(S). Define the label prediction error by
El(S) = ||L(S)− L̂(S)||2. (12) We assume that the correct query sample labels L∗(Q) is within the neighborhood of the prediction result L̂(Q). Let Ω be the set of candidate assignments of query labels which are within the neighborhood of L̂(Q). For example,
Ω = {L̃(Q) : ||L̃(Q)− L̂(Q)||2 ≤ ∆}, (13)
where ∆ is a given threshold for the label vector distance. We then search the candidate query labels L̃(Q) within the neighborhood set Ω to minimize the support label prediction error El(S) in (12). The optimized prediction of the query samples is given by
L∗(Q) = arg min L̃(Q)∈Ω || L(S)− ΓQ→S[L̃(Q)] ||2. (14)
From the experimental results, we will see that this unique self-supervised optimization of the query label prediction is able to significantly improve the few-shot image classification performance.
4 EXPERIMENTAL RESULTS
In this section, we provide experimental results on various benchmark datasets to demonstrate the performance of our proposed SPDN method for few-shot learning.
4.1 IMPLEMENTATION DETAILS
We use ResNet-10 as the backbone of our feature encoder. The input images are resized to 224×224 and the output feature vector size is 1 × 1 × 512. We choose the Adam optimizer with a learning rate of 0.01 and a batch size of 64 for training of 400 epochs. In the episodic meta-training stage, we use the graph neural network (GNN) discussed in the above section to generate the feature embedding for query samples. The prime network ΦS→Q and the dual network ΓQ→S are jointly trained. These two networks are both trained for 400 epochs with 100 episodes per epoch. In each episode, we randomly select K categories (K=5, 5-way) from the training set. Then, we randomly select N samples (N=1 or 5 for 1-shot or 5-shot) from each category to compose support set and query set, respectively. In the test stage, we use the average of 1000 trials as the final result for all the experiments. For each trial, we randomly select K categories from the test set. Similar to the training stage, N (1 or 5) samples are randomly selected as the support set and 15 samples as the query set from each category.
4.2 DATASETS
Five benchmark datasets are used for performance evaluation and comparison with other methods in the literature, Mini-ImageNet (Ravi & Larochelle, 2016), CUB (Wah et al., 2011), Cars (Krause et al., 2013), Places (Zhou et al., 2017) and Plantae (Van Horn et al., 2018). More details about dataset settings are presented in Appendix A.1.
4.3 RESULTS
To demonstrate the performance of our SPDN method, we conduct a series of experiments under different few-shot classification settings. In the literature, there are two major scenarios for testing the FSL methods: (1) intra-domain learning where the training classes and test classes are from the same object domain, for example, both from the Mini-ImageNet classess, and (2) cross-domain learning where the FSL is trained on one dataset (e.g., Mini-ImageNet) and the testing is performed on another dataset (e.g., CUB). Certainly, the cross-domain scenario is more challenging.
4.3.1 INTRA-DOMAIN FSL RESULTS.
First, we conduct intra-domain FSL experiments on the Mini-ImageNet. Table 1 summarizes the performance comparison with state-of-the-art FSL methods mainly developed in the past two years. We also list the backbone network used for extracting the features for the input images. We can see that, for the 5-way 1-shot image classification task, our method (with ResNet-10 backbone) outperform the current best method (with ResNet-12 backbone) from (Zhang et al., 2021a) by 5.42%. Another method which uses the same ResNet-10 backbone is the GNN+FT method (Tseng et al., 2020). Our method outperforms this method by 12.23%. For the 5-way 5-shot classification task, our method outperforms the current best by more than 5%, which is quite significant.
Second, we evaluate our method on intra-domain fine-grained image classification tasks on the CUB dataset. In this case, the FSL needs to learn subtle features to distinguish objects from close categories. Table 2 summarizes the performance results on 5-way 1-shot and 5-way 5-shot classification tasks. We can see that, for the one-shot classification task, our method outperforms the current best method, FRN (Wertheimer et al., 2021) by 6.72%. For the 5-shot classification task, our method improves the classification accuracy by 2.80%.
4.3.2 CROSS-DOMAIN FSL RESULTS.
The cross-domain few-shot learning is more challenging. Following existing methods, we train the model on the Mini-ImageNet object domain and test the trained model on other domains, including the CUB, Cars, Places and Plantae datasets. Table 3 summarizes the results for 5-way 1-shot classification (top) and 5-way 5-shot classification (bottom). We can see that our SPDN method has dramatically improved the classification accuracy on these cross-domain FSL tasks. For example, on the Cars dataset, our method outperforms the current best TPN+ATA (Wang & Deng, 2021) by 4.15%. On the Plantae dataset, the performance gain is 5.59%, which is quite significant. For the 5-way 5-shot classification task, the performance gains on these datasets are also very significant,
between 0.37-8.68%. This demonstrates that our SPDN method is able to learn the inherent visual relationship between the support and query samples and can generalize very well onto unseen classes in new object domains.
4.4 ABLATION STUDIES
In this section, we conduct ablation studies to further understand the proposed SPDN method and analyze the contributions of major algorithm components.
From algorithm design perspective, our SPDN method has two major components: self-supervised learning (SSL) of the prime and dual networks, and the self-supervised optimization (SSO) of the predicted query labels. We adopt the single GNN-based model (Tseng et al., 2020) as the baseline of our method and the SSL and SSO algorithm components are added onto this baseline method. To understand the performance of these two algorithm components, in the following experiment, we train the SPDN method using training samples from the Mini-ImageNet. We conduct intra-domain few-shot image classification on the Mini-ImageNet and cross-domain few-shot image classification on the CUB, Cars, Places, and Plantae datasets. Table 4 summarizes the results for 5-way 1-shot and 5-way 5-shot image classification. The second column shows the intra-domain few-shot image classification results on the Mini-ImageNet. The rest columns show the results for the cross-domain classification results. We can see that the self-supervised prime-dual network training is able to improve the classification accuracy by up to 1.8%. The performance gain achieved by the selfsupervised optimization of the predicted query labels is much more significant, ranging from 7-10%. This dramatic performance gain is a surprise to us. In the following, we will provide additional ablation studies to further understand this SSO algorithm module. Compared to the SSO module, the performance improvement by the first SSL module is relatively small. This is because the major new contribution of the SSL module is the self-supervised loss which aims to further improve the learning on the baseline GNN. However, it has successfully trained a dual network, which plays a very important role in the second SSO module. It is used to search and optimize the predicted labels of the query samples, resulting in major performance gain. We discuss the specific optimization results of our self-supervised optimization (SSO) module through an experiment in Appendix A.3.
In the following experiments, we attempt to further understand the behavior and performance of the SSO algorithm module. First, we conduct an experiment to understand the search and optimization process of SSO. Suppose L(Q) is the true label of the query samples. Let
L̃(Q) = L(Q) + λ ·∆L, (15)
be a label vector within the neighborhood of L(Q). Here, ∆L is a pre-generated random vector and λ is a disturbance coefficient to control the amount of variation. With the label vector L̃(Q) and the query samples, we can predict the labels of the support vector using the dual network. Then, we can compute the prediction error El(S) as in (12). Figure 5(a) shows the label prediction error El(S) as a function of λ. This experiment was performed on 5-way 1-shot image classification on the CUB dataset. We can see that the minimum error is achieved at λ = 0. This implies the groundtruth labels of the query samples have the minimum self-supervised label error El(S). This is a very important property of our SSO method. It suggests that, when the predicted query labels are not correct, and the ground-truth labels are within its neighborhood, we can use the SSO method to search for these ground-truth labels using the minimum self-supervised support label error criteria.
During our self-supervised optimization of the predicted query labels, we choose a small neighborhood Ω within the neighborhood of the predicted query labels L̂(Q) with a maximum distance ∆. This ∆controls the number of search positions in the label space. If we search more positions or candidate query labels, we can obtain smaller self-supervised label errors of the support samples El(S). Figure 5(b) plots the value of El(S) as a function of the number of search positions. This experiment was performed on 5-way 1-shot image classification on the CUB dataset. We can see that the error drops significantly with the number of searched positions. We recognize that, for each search position, we need to run the dual network once. This does introduce extra computational complexity. But, the amount of performance gain is very appealing. In our experiments, we limited the number of search positions to the 5, i.e., the nearest 5 label vectors (integer vectors) to the predicted query label.
5 CONCLUSION
In this work, we have successfully developed a novel prime-dual network structure for few-shot learning which explores the commutative relationship between the support set and the query set. The prime network performs the forward label prediction from the support set to the query set, while the dual network performs the reverse label prediction from the query set to the support set. This forward and reserve prediction process with commutative support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. We have established a self-supervised support error metric and used the learned dual network to optimize the predicted query labels during the testing stage. Our extensive experimental results on both intra-domain and cross-domain few-shot image classificaiton have demonstrated that the proposed self-supervised prime-dual network learning and optimization have significantly improved the performance of few-shot learning, especially for cross-domain few-shot learning tasks. We have also conducted detailed ablation studies to provide in-depth understanding of the significant performance gain achieved by the self-supervised optimization process. The self-supervised primedual network design is general and can be naturally incorporated into other prediction and learning methods.
A APPENDIX
In this appendix, we provide more details of experimental settings and additional results to further understand the performance of our proposed method.
A.1 DATASET
In our experiments, the following 5 datasets are used for performance evaluations and comparisons.
(1) Mini-ImageNet has randomly selected 100 categories from the ImageNet (Deng et al., 2009) and each category has 600 samples of size 84 × 84. The 100 categories are divided into a training set with 64 categories, a validation set with 16 categories, and a testing set with 20 categories. (2) CUB is a fine-grained dataset with 200 bird species mainly living in North America (Wah et al., 2011). We randomly split the dataset into 100, 50, 50 classes for training, validation and testing, respectively. (3) Cars contains 16,185 images of 207 fine-grained car types, which consist of 10 BMW models and 197 other car types (Krause et al., 2013). We randomly selected 196 categories include 98 training, 49 validation and 49 testing for the experiment. (4) Places is a dataset of scene images (Zhou et al., 2017), containing 73,000 training images from 365 scene categories, which are divided into 183 categories for training, 91 for validation and 91 for testing. (5) Plantae is a sub-set of the iNat2017 dataset (Van Horn et al., 2018), which contains 200 types of plants and a total of 47,242 images. We split them into 100 classes for training, 50 for validation, and 50 for testing.
The Mini-ImageNet is the most popular benchmark for few-shot classification. It is usually used as a baseline dataset for model training. The CUB dataset is more frequently used for few-shot fine-grained classification tasks. The Cars, Places and Plantae datasets are used for model testing in cross-domain few-shot classification tasks.
A.2 THE VISUALIZATION OF FEATURE IN SELF-SUPERVISED LEARNING.
The proposed SPDN method incorporates the self-supervised constraint into the training process, aiming to improve the quality of learned features and the generalization capability of the few-shot learning. Figure. 6 shows the tSNE visualization of the learned features of 100 samples from the mini-ImageNet dataset for each class in a 5-way 5-shot setting. We can see that, with the selfsupervised learning, the features of each class are more concentrated into clusters.
A.3 SELF-SUPERVISED OPTIMIZATION (SSO) MODULES
The proposed self-supervised optimization (SSO) modules aim to correct the predicted query labels. In the following experiment, we are trying to understand how many incorrect label prediction of the query labels have been successfully corrected by the SSO module. Table 5 shows the results from the 5-way 1-shot on the CUB dataset. We keep track of 75 randomly selected query samples. If we predict the query labels only using the prime network without using the SSO (before SSO), the number of query samples with incorrect labels is 57, and the number of correct ones is 18, which
is very low. After we apply the SSO, the number of query samples with incorrect labels is reduced to 45, the number of correct ones increases to 30. We can examine this correction process in more detail. The SSO module has corrected the labels for 15 samples, as shown in the third row (Incorrect → Correct Label) of the table. However, it has also mis-corrected the labels for 3 samples, as shown in the last row (Correct → Incorrect Label) of the table. In our experiments, we have observed that the SSO module is able to correct the labels for much more query samples than those miscorrected one. This implies that the dual network and the self-supervision constraint are working very well for few-shot learning. This explains the significant performance achieved by the proposed self-supervised prime-dual network method.
A.4 EXTENSION TO N -SHOT IMAGE CLASSIFICATION
In the main paper, we have used the 5-way 1-shot image classification as an example to present our method of self-supervised prime-dual network (SPDN) and optimization for few-shot image classification. This method can be naturally extended to genericK-wayN -shot image classification. Figure 7 illustrates an example of extension to 5-way 5-shot. In this case, each class, in both training and test stages, has 5 support samples and one query sample. In the prime network, we use these 5 support samples to predict the label of the query sample. To ensure that the dual network shares the same network structure as the prime network, for the reverse prediction, we randomly select one sample (denoted by s0) from the support set and switch it with the query sample q0. During the training and inference of the dual network, this updated support set is used to predict the label of s0, which is then compared to its ground-truth label to compute the self-supervised loss. This loss is used for joint prime-dual network training, as well as the self-supervised optimization of the label prediction for the query sample q0.
A.5 FURTHER UNDERSTANDING OF THE SELF-SUPERVISED OPTIMIZATION OF QUERY LABEL PREDICTION
In our proposed SPDN method, the self-supervised optimization of the query label prediction plays an important role and improves the performance significantly. In this section, we provide more experimental results to demonstrate and further understand the behavior of this algorithm module. Figure 8 shows 6 examples of 5-way 1-shot image classification. Initially, the predicted labels for these query samples are incorrect. We then perform a self-supervised search of the query labels within the neighborhood of the predicted label. We use these candidate query labels as input to the dual network to predict the labels of the support samples. The label prediction error of the support samples is used as the optimization objective. In Figure 8, under each query sample, we show how the optimization objective (support label error) decreases with the number of searched candidate query labels. These results show that it is sufficient to search 5-8 candidate query label vectors.
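For concreteness, one plausible way to enumerate the candidate query labels searched in this experiment is sketched below; the interface is an assumption, and only the few nearest one-hot assignments to the softmax output are kept, in line with the 5-8 candidates reported above.

```python
import numpy as np

def nearest_candidate_labels(pred_probs, num_candidates=8):
    """Enumerate candidate one-hot query labels closest to the softmax output.

    Candidates are the one-hot vectors of the `num_candidates` highest-scoring
    classes, ordered by their L2 distance to the predicted probability vector.
    """
    num_classes = pred_probs.shape[0]
    top_classes = np.argsort(-pred_probs)[:num_candidates]   # most likely classes first
    candidates = []
    for c in top_classes:
        one_hot = np.zeros(num_classes)
        one_hot[c] = 1.0
        candidates.append((np.linalg.norm(one_hot - pred_probs), one_hot))
    candidates.sort(key=lambda item: item[0])                 # nearest candidates first
    return [label for _, label in candidates]
```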
It should be noted that the self-supervised optimization of query label prediction can correct incorrect label predictions, adjusting them into correct ones. Certainly, it can also make mistakes and mis-correct the query label prediction, adjusting correct label predictions into incorrect ones. However, the probability of such mis-correction is much lower. For example, Table 6 shows the percentages of correct and incorrect adjustments made by the optimization module on the Cars dataset. Specifically, the percentage of correct adjustments from incorrect query labels to correct ones is 21.6%, while the percentage of incorrect adjustments is 5.7%. This results in a performance improvement of 15.8% in the overall few-shot image classification, from 32.8% to 48.6%, which is quite significant.
A.6 FURTHER DISCUSSION OF THE PROPOSED METHOD
The key idea and motivation behind our dual network design is as follows: one central challenge in network prediction is that we have no way to check whether the prediction is accurate, since we do not have the ground truth. To address this issue, we develop the prime-dual network structure, where the successfully learned dual network serves as a verification module to check whether the prediction results are good enough. It maps the prediction results back to the currently known data. We establish the self-supervised loss defined on this known data and use it as the objective function to perform a local search and refinement of the prediction results. This process is unique and contributes significantly to the overall performance. The prime network is the baseline GNN+FT network using support samples to predict query samples. The dual network is another GNN+FT network (in the opposite direction) using query samples to predict support samples. These two networks form a prediction loop, from which a self-supervised loss is derived. We implement this new idea on the GNN+FT few-shot learning method to demonstrate its performance. The proposed idea is generic and can be applied to other methods, even to other prediction and learning problems, which will be studied in our future work. Our proposed idea is new. However, it does introduce additional complexity. According to our estimation, it adds about 40-60% extra complexity on top of the existing baseline, since a majority of the computation, such as feature extraction, does not need to be recomputed during the search process. In our future work, we plan to develop schemes to reduce the complexity of the self-supervised optimization, for example by merging multiple search steps into one execution cycle. | 1. What is the focus and contribution of the paper on few-shot learning?
2. What are the strengths of the proposed approach, particularly in its ability to characterize the relationship between the support set and the query set?
3. Do you have any concerns or questions regarding the proposed method, such as its reliance on a bipartite graph or the sharing of network designs between the prime and dual networks?
4. What are the limitations of the paper, especially in terms of its cross-domain results and the need for further discussion?
5. Are there any suggestions or ideas for future work related to this research on few-shot learning? | Summary Of The Paper
Review | Summary Of The Paper
The authors design a prime-dual architecture for few-shot learning to characterize the inherent relationship between the support set and the query set and introduce a self-supervision constraint for performance improvement. In particular, it proposes to correct query sample labels between the prime network and dual network and design an optimized prediction for the query samples with a selection mechanism. Extensive experiments are conducted on datasets with the ablation analysis, and the method achieves favorable results over existing algorithms.
Review
Strengths: 1. The paper tackles one of the important issues of meta/few-shot learning: ground-truth is not available for test samples. The issue is important and very practical in my opinion. 2. The proposed method utilizes the trained dual network to predict the class labels of support samples, which is kind of the concept of the autoencoder technique, simple and promising. The idea of constructing and learning the relationship between support set and query set is reasonable and interesting. 3. In this paper, the authors provide comprehensive experiments, including the comparison of the state-of-the-art methods in public benchmarks, the ablation analysis, and how the parameters affect the performance such as the searched position number, etc. For example, Table 6 shows the number of the proposed SSO that can help mis-correct the query labels. The analysis of ablation and SSO modules provide evidence and make it convincing.
Weaknesses: 1. In Section 3.2, the bipartite graph was mentioned, but I didn't see the authors use the property of the bipartite graph, since edges are constructed within the support set and also within the query set (Figure 4-a). This may not utilize the property of a bipartite graph. 2. As the paper mentioned, the prime and dual networks share the same network design, but I'd like to know whether the authors have tried different network designs for the prime and dual networks. What would be the pros and cons of using the same/different designs? 3. In Figure 7, the 6 examples are shown for the 5-way 1-shot image classification experiment; I'd like to know why the position is shown up to 20. Will the maximum be calculated by K(N + 1) as mentioned in the paper? 4. For the cross-domain results, more discussion may be needed. For example, why does the proposed method have less gain on Cars compared to 1-shot and 5-shot, and why does only CUB show an increased performance gain in the 5-way 5-shot setting?
ICLR | Title
Self-Supervised Prime-Dual Networks for Few-Shot Image Classification
Abstract
We construct a prime-dual network structure for few-shot learning which establishes a commutative relationship between the support set and the query set, as well as a new self-supervision constraint for highly effective few-shot learning. Specifically, the prime network performs the forward label prediction of the query set from the support set, while the dual network performs the reverse label prediction of the support set from the query set. This forward and reverse prediction process with swapped support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. This unique constraint can be used to significantly improve the training performance of few-shot learning through coupled prime and dual network training. It can also be used as an objective function for optimization during the testing stage to refine the query label prediction results. Our extensive experimental results demonstrate that the proposed self-supervised commutative learning and optimization outperforms existing state-of-the-art few-shot learning methods by large margins on various benchmark datasets.
1 INTRODUCTION
Few-shot image classification aims to classify images from novel categories (query samples) based on very few labeled samples from each class (support images) (Hong et al., 2020a; Sun et al., 2021). During the training stage, the few-shot learning (FSL) model is given a set of support-query set pairs with class labels. Once successfully trained, the model needs to be tested on unseen classes. The major challenge here is that the number of available support samples N is very small, often N ≤ 5. In the extreme case of N = 1, this is called one-shot learning. In order to achieve this so-called learn-to-learn capability, the FSL model needs to capture the inherent visual or semantic relationship between the support samples and query samples, and more importantly, this learned relationship or prediction should generalize well onto unseen classes (Liu et al., 2020d).
A fundamental challenge in prediction is the following: if we know entity A and are trying to predict entity B, how do we know whether the prediction of B, denoted by B̂, is accurate? Is there any way that we can verify the accuracy of the prediction B̂? This is impossible in general, since B has no ground truth for us to evaluate or verify its prediction accuracy. If we can come up with an indirect approach to effectively evaluate the prediction accuracy, it is expected that the learning and prediction performance can be significantly improved.
In this work, we propose to explore a prime-dual commutative network design for effective prediction, specifically for few-shot image classification. As illustrated in Figure 1, the prime network Φ is the original network that learns the forward prediction from A to B̂ = Φ(A). The dual network Γ performs the reverse prediction from B to  = Γ(B). If we cascade these two networks together which establishes a prediction loop from A to B and then back to A, we have
 = Γ(B̂) = Γ(Φ(A)). (1)
Since A is given, which has the ground-truth value, the difference between A and its prime-dual loop prediction result  forms a self-supervision loss
LS = d(A, Â) = d(A,Γ(Φ(A))), (2)
where d is a distance metric function. This self-supervision loss LS can be used to improve the training performance based on the coupling between the prime and dual networks. Furthermore, it can be used to verify and adjust the prediction result by minimizing the self-supervision loss.
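A minimal sketch of this loop loss, assuming the prime and dual networks are simple callables and using a mean-squared-error distance for d, is given below; it is illustrative only.

```python
import torch

def self_supervision_loss(A, prime_net, dual_net,
                          distance=torch.nn.functional.mse_loss):
    """Sketch of the loop loss L_S = d(A, Gamma(Phi(A))) from Eqs. (1)-(2)."""
    B_hat = prime_net(A)        # forward prediction, B_hat = Phi(A)
    A_hat = dual_net(B_hat)     # reverse prediction, A_hat = Gamma(B_hat)
    return distance(A_hat, A)   # d(A, A_hat), compared against the known A
```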
In this work, we propose to study this prime-dual network design with self-supervision for few-shot learning by exploiting the commutative relationship between the support set (entity A) and the query set (entity B). Specifically, the prime network learns to predict the labels of query samples using the support set with ground-truth labels as training samples. Meanwhile, the dual network learns to predict the labels of the support samples using the query set with ground-truth labels as training samples. For example, in 5-way 1-shot learning, the support set consists of 5 images from 5 classes with only one image per class. The query set also has 5 images from 5 classes. When training the prime and dual networks, the support set and the query set are switched as training samples. This forward and reverse prediction process with commutative support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. The prime-dual networks are jointly trained with the help of the self-supervision loss. This loss is also used during the testing stage to adjust and optimize the prediction results. Our extensive experimental results demonstrate that the proposed self-supervised commutative learning and optimization method outperforms existing state-of-the-art few-shot learning methods by a large margin on various benchmark datasets.
2 RELATED WORK AND UNIQUE CONTRIBUTIONS
Few-shot learning (FSL) aims to recognize instances from unseen categories with few labeled samples. There are three major categories of methods that have been developed for FSL. (1) Data Augmentation is the most direct method for few-shot learning, which explores different approaches to synthesize images to address the issue of few training samples. For example, self-training jigsaw augmentation (Chen et al., 2019) is able to synthesize new images by segmenting and reorganizing labeled and unlabeled gallery images. Mangla et al. (2020) apply self-supervision algorithms augmented with manifold mixup (Verma et al., 2019) for few-shot classification tasks. The F2GAN (Hong et al., 2020b) and MatchingGAN methods (Hong et al., 2020a) use generative adversarial networks (GANs) to construct high-quality samples for new image categories. (2) Optimization-based methods aim to learn a good initial network model for the classifier. This learned model can be then quickly adapted to novel classes using a few labeled samples. MAML (Finn et al., 2017) proposes to train a set of initialization models based on second-order gradients and meta-optimization. TAML (Jamal & Qi, 2019) reduces the bias introduced by the MAML algorithm to enforce equity between the tasks. In the Latent Embedding Optimization (LEO) method (Rusu et al., 2018), gradient-based optimization is performed in a low-dimensional latent space instead of the original high-dimensional parameter space. (3) Metric-based methods aim to learn a good metric space so that samples from novel categories can be effectively distinguished and correctly classified. For example, MatchingNet (Vinyals et al., 2016) applies a recurrent network to calculate the cosine similarity between samples. ProtoNet (Snell et al., 2017) compares features between samples in the Euclidean space. RelationNet (Sung et al., 2018) uses a CNN model and (Garcia & Bruna, 2017) uses the graph convolution network (GNN) to learn the metric relationship.
In this work, we also consider cross-domain FSL. For the cross-domain classification task, the model needs to generalize well from the source domain to a new or unseen target domain without accessing samples from the unseen domain during the training stage. Sun et al. (2021) propose a modelagnostic explanation-guided training method that dynamically finds and emphasizes the features which are important for the predictions. This improves the model generalization capability. To characterize the variation of image feature distribution across different domains, the LFT method (Tseng et al., 2020) learns the noise distribution by adding feature-wise transformation layers to the
image encoder. To avoid over-fitting on the source domain and increase the generalization capability to the target domain, the batch spectral regularization (BSR) method (Liu et al., 2020b) attempts to suppress all singular values of the batch feature matrices during pre-training. Another set of methods (Shankar et al., 2018; Volpi et al., 2018) learn to augment the input data with adversarial learning (Yang et al., 2020b) in order to generalize the task from the source domain to the unseen target domain.
In this work, we propose a commutative prime-dual network design for few-shot learning. In the literature, the mutual dependency and reciprocal relationship between multiple modules have been explored to achieve better performance. For example, (Xu et al., 2020) has developed a reciprocal cross-task architecture for image segmentation, which improves the learning efficiency and generation accuracy by exploiting the commonalities and differences across tasks. Sun et al. (2020) design a reciprocal learning network for human trajectory prediction, which consists of forward and backward prediction neural networks. The reciprocal learning enforces consistency between the forward and backward trajectory prediction, which helps each other to improve the learning performance and achieve higher accuracy. Zhu et al. (2017) design the CycleGAN contains two GANs forming a cycle network that can translate the images of the two domains into each other to achieve style transfer. Liu et al. (2021) develop a Temporal Reciprocal Learning (TRL) approach to fully explore the discriminative information from the disentangled features. Zhang et al. (2021b) design a support-query mutual guidance architecture for few-shot object detection.
Unique Contributions. Compared to existing work in the literature, the major contributions of this work include: (1) We propose a new prime-dual network design to explore the commutative relationship between support and query sets and establish a unique self-supervision constraint for few-shot learning. (2) We incorporate the self-supervision loss into the coupled prime-dual network training to improve the few-shot learning performance. (3) During the test stage, using the dual network to map the prediction results back to the support set domain and using the self-supervision constraint as an objective function, we develop an optimization-based scheme to verify and optimize the performance of few-shot learning. (4) Our proposed method has significantly advanced the state-of-the-art performance of few-shot image classification.
3 METHOD
In this section, we present our method of self-supervised prime-dual network (SPDN) learning and optimization for few-shot image classification.
3.1 SELF-SUPERVISED COMMUTATIVE LEARNING
Figure 2 provides an overview of our proposed method of self-supervised commutative learning and optimization for few-shot image classification. In a typical setting of K-way N-shot learning, N labeled image samples from each of the K classes form the support set. For example, in 5-way 1-shot learning, K = 5 and N = 1. Given a very small support set S = {Skn | 1 ≤ k ≤ K, 1 ≤ n ≤ N}, the objective of the FSL is to predict the labels of the query images Q = {Qkm | 1 ≤ k ≤ K, 1 ≤ m ≤ M} from the same K classes in M batches. During the training stage, the labels of both support and query samples are available. The prime network ΦS→Q for few-shot classification is trained on these support-query sets, aiming to learn and represent the inherent visual
or semantic relationship between the support and query images. Once successfully learned, we will apply this network to unseen classes. Specifically, in the test stage, given a labeled support set S′ = {S′kn|1 ≤ k ≤ K, 1 ≤ n ≤ N} from these K unseen classes, we need to predict the labels for the query set Q′ = {Q′km|1 ≤ k ≤ K, 1 ≤ m ≤M} also from these unseen classes. Therefore, the fundamental challenge of FSL is to characterize and learn the inherent relationship between the support set S and the query set Q. Once learned, we can then shift or transfer this relationship to S′ and Q′ of unseen classes to infer the labels of Q′. In this work, as discussed in the following section, we propose to establish a graph neural network (GNN) to characterize and learn this relationship.
We recognize that, within the framework of few-shot learning, the support set and the query set are in an equal and symmetric position to each other. More specifically, if we can learn to predict the labels of query set Q from support set S, certainly, we can switch their order, predicting the labels of the support set S from the query set Q using the same network architecture. This observation leads to an interesting commutative prime-dual network design for few-shot learning. As illustrated in Figure 2, we introduce a dual network ΓQ→S, which performs the reverse label prediction of the support set S from the query set Q. Let L(S) and L(Q) be the label vectors of S and Q, respectively. Let L̂(S) and L̂(Q) be the predicted labels. The forward prediction by the prime network can be written as
L̂(Q) = ΦS→Q[L(S)], (3)
while the reverse prediction by the dual network can be written as
L̂(S) = ΓQ→S[L(Q)], (4)
If both networks Φ and Γ are well trained, and if we pass the label prediction output of the prime network as input to the dual network, then we expect the predicted labels for the support set to be close to their ground truth. This leads to the following self-supervision loss
LSS = ||L(S) − L̂(S)||2 = ||L(S) − ΓQ→S[L̂(Q)]||2 = ||L(S) − ΓQ→S[ΦS→Q[L(S)]]||2. (5)
This self-supervision constraint can be established on both the support set and the query set, resulting in a coupled prime-dual network training. Figures 3 (a) and (b) show the training processes for the prime network and the dual network, respectively. Specifically, from the support set S, the prime network learns to predict the labels of the query set Q. As in existing few-shot learning, we have the loss LPQ = ||L̂(Q) − L(Q)||2 between the predicted query labels and their ground-truth values. We then use the query samples and their predicted labels as input to the dual network ΓQ→S to predict the labels of the support set L̂(S) and compute the self-supervision loss LPS = ||L̂(S) − L(S)||2. These two losses are combined to form the loss function for training the prime network
LP = ||L̂Φ(Q)− L(Q)||2 + α · ||L̂Φ,Γ(S)− L(S)||2. (6)
α is a weighting parameter whose default value is set to be 0.5 in our experiments. Similarly, for the training of the dual network, as shown in Figure 3(b), its loss function is given by
LD = ||L̂Γ(S)− L(S)||2 + α · ||L̂Γ,Φ(Q)− L(Q)||2. (7)
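The sketch below illustrates how the coupled losses in Eqs. (6) and (7) can be assembled, assuming one-hot label vectors and illustrative network interfaces; it is not the exact training code.

```python
import torch
import torch.nn.functional as F

def prime_dual_losses(support_x, support_y, query_x, query_y,
                      prime_net, dual_net, alpha=0.5):
    """Sketch of the coupled losses in Eqs. (6)-(7), with alpha = 0.5 as in the paper.

    `prime_net(support_x, support_y, query_x)` is assumed to return predicted query
    labels and `dual_net(query_x, query_y, support_x)` predicted support labels;
    both signatures are illustrative assumptions.
    """
    # Prime network: forward prediction plus its self-supervision term (Eq. 6).
    q_pred = prime_net(support_x, support_y, query_x)
    s_back = dual_net(query_x, q_pred, support_x)
    loss_prime = F.mse_loss(q_pred, query_y) + alpha * F.mse_loss(s_back, support_y)

    # Dual network: reverse prediction plus its self-supervision term (Eq. 7).
    s_pred = dual_net(query_x, query_y, support_x)
    q_back = prime_net(support_x, s_pred, query_x)
    loss_dual = F.mse_loss(s_pred, support_y) + alpha * F.mse_loss(q_back, query_y)

    return loss_prime, loss_dual
```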
3.2 GRAPH NEURAL NETWORK FOR FEW-SHOT IMAGE CLASSIFICATION
The proposed prime and dual networks share the same network design, which will be discussed in this section. The only difference between these two networks is that their support and query samples are switched. In the following, we use the prime network as an example to explain its design.
The central task of few-shot learning is to characterize the inherent relationship between the query and support samples, based on which we can infer the labels of the query samples from the support samples (Tseng et al., 2020; Liu et al., 2020b). In this work, we propose to use a graph neural network (GNN) to model and analyze this relationship. In K-way N-shot learning, given K classes, each with N support samples {Skn}, we need to learn the prime network to predict the labels for K query samples {Qk}. This implies that, in each of the M training batches, we have K × (N + 1) support and query samples. As illustrated in Figure 4(a), we use a backbone network, for example ResNet-10 or ResNet-12, to extract features for each of these support and query samples. We denote their features by S = {s_kn^t} and Q = {q_k^t}, where t represents the update iteration index in the GNN. Initially, t = 0. These support-query sample features form the nodes of the GNN, denoted by {x_j^t | 1 ≤ j ≤ J}, J = K × (N + 1), for simplicity of notation. The edge between two graph nodes represents the correlation ψ(x_i^t, x_j^t) between nodes x_i^t and x_j^t. Note that our GNN has two groups of nodes: support sample nodes and query sample nodes. The support sample nodes have labels, while the labels of the query samples need to be predicted by the prime network. If x_i^t and x_j^t are both support nodes, we have
ψ(x_i^t, x_j^t) = 1 if L(x_i^t) = L(x_j^t), and 0 if L(x_i^t) ≠ L(x_j^t). (8)
Here, L(·) represents the label of the corresponding support sample. Since the labels for the query nodes are unknown, the correlation for edges linked to these query nodes need to be learned by the GNN. Initially, we set them to be random values between 0 and 1.
Each node of the GNN combines features from these neighboring nodes with the corresponding correlation as weights and updates its own feature by learning a multi-layer perceptron (MLP) network Go[·] as follows
x_j^{t+1} = Go[ Σ_{i=1}^{J} x_i^t · ψ(x_i^t, x_j^t) ]. (9)
At each edge, another MLP network Ge[·, ·] is learned to predict the correlation between two graph nodes,
ψ(x_i^t, x_j^t) = Ge[x_i^t, x_j^t], (10)
whose ground-truth values are obtained using the scheme discussed above. The feature generated by the prime GNN is then passed to a classification network to predict the query labels. Both the prime and dual GNNs are jointly trained with their final classification networks.
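For clarity, a minimal sketch of one node/edge update round of this GNN is given below; the MLP sizes and the batching are illustrative assumptions, and during training the predicted correlations for support-support edges would be supervised with the 0/1 targets of Eq. (8).

```python
import torch
import torch.nn as nn

class GNNLayer(nn.Module):
    """Sketch of one update round following Eqs. (8)-(10).

    `node_mlp` plays the role of Go and `edge_mlp` the role of Ge; the exact
    layer sizes are assumptions, not the paper's configuration.
    """
    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        self.node_mlp = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
                                      nn.Linear(hidden_dim, feat_dim))
        self.edge_mlp = nn.Sequential(nn.Linear(2 * feat_dim, hidden_dim), nn.ReLU(),
                                      nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, x):
        # x: (J, feat_dim) node features for the K*(N+1) support and query samples.
        J = x.size(0)
        pairs = torch.cat([x.unsqueeze(1).expand(J, J, -1),
                           x.unsqueeze(0).expand(J, J, -1)], dim=-1)
        psi = self.edge_mlp(pairs).squeeze(-1)   # Eq. (10): predicted correlations
        x_new = self.node_mlp(psi @ x)           # Eq. (9): correlation-weighted aggregation
        return x_new, psi
```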
3.3 SELF-SUPERVISED OPTIMIZATION OF FEW-SHOT IMAGE CLASSIFICATION
Besides improving the training performance through mutual enforcement, the proposed selfsupervised prime-dual network design can be also used in the testing stage to optimize the label prediction of query samples. Specifically, we can use the dual network to refine and optimize the label prediction results obtained by the prime network. As illustrated in Figure 4(b), given a support
set S and a query set Q, the support set has class labels L(S). Let L̂(Q) be the prediction result, the output of the softmax layer of the classification network. In existing approaches of few-shot learning or other network prediction scenarios, we are not able to verify if the prediction is accurate or not since the ground-truth is not available for test samples. However, in this work, with the dual network ΓQ→S being successfully trained, we can use the prediction result L̂(Q) as input to the dual network to predict the class labels of the original support samples
L̂(S) = ΓQ→S[L̂(Q)]. (11)
Note that these support samples DO have ground-truth labels L(S). Define the label prediction error by
El(S) = ||L(S) − L̂(S)||2. (12)
We assume that the correct query label vector L∗(Q) lies within the neighborhood of the prediction result L̂(Q). Let Ω be the set of candidate assignments of query labels which are within the neighborhood of L̂(Q). For example,
Ω = {L̃(Q) : ||L̃(Q)− L̂(Q)||2 ≤ ∆}, (13)
where ∆ is a given threshold for the label vector distance. We then search the candidate query labels L̃(Q) within the neighborhood set Ω to minimize the support label prediction error El(S) in (12). The optimized prediction of the query samples is given by
L∗(Q) = arg min_{L̃(Q)∈Ω} ||L(S) − ΓQ→S[L̃(Q)]||2. (14)
From the experimental results, we will see that this unique self-supervised optimization of the query label prediction is able to significantly improve the few-shot image classification performance.
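A minimal sketch of this test-time refinement, assuming a small candidate set Ω and an illustrative dual-network interface, is given below.

```python
import torch

def refine_query_labels(support_x, support_y, query_x, query_pred,
                        dual_net, candidates):
    """Sketch of the refinement in Eq. (14).

    `candidates` is the neighborhood Omega of label vectors around `query_pred`
    (for example, the few nearest one-hot assignments); the candidate that lets
    the dual network best reproduce the known support labels is returned.
    """
    best_labels, best_err = query_pred, float("inf")
    for cand in candidates:
        s_hat = dual_net(query_x, cand, support_x)        # Eq. (11): predict support labels
        err = torch.sum((support_y - s_hat) ** 2).item()  # Eq. (12): support label error
        if err < best_err:
            best_err, best_labels = err, cand
    return best_labels
```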
4 EXPERIMENTAL RESULTS
In this section, we provide experimental results on various benchmark datasets to demonstrate the performance of our proposed SPDN method for few-shot learning.
4.1 IMPLEMENTATION DETAILS
We use ResNet-10 as the backbone of our feature encoder. The input images are resized to 224×224 and the output feature vector size is 1 × 1 × 512. We choose the Adam optimizer with a learning rate of 0.01 and a batch size of 64 for training of 400 epochs. In the episodic meta-training stage, we use the graph neural network (GNN) discussed in the above section to generate the feature embedding for query samples. The prime network ΦS→Q and the dual network ΓQ→S are jointly trained. These two networks are both trained for 400 epochs with 100 episodes per epoch. In each episode, we randomly select K categories (K=5, 5-way) from the training set. Then, we randomly select N samples (N=1 or 5 for 1-shot or 5-shot) from each category to compose support set and query set, respectively. In the test stage, we use the average of 1000 trials as the final result for all the experiments. For each trial, we randomly select K categories from the test set. Similar to the training stage, N (1 or 5) samples are randomly selected as the support set and 15 samples as the query set from each category.
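For reference, a simple episode sampler consistent with the protocol above is sketched below; the data structure is an illustrative assumption.

```python
import random

def sample_episode(class_to_images, k_way=5, n_shot=1, n_query=15):
    """Sketch of episodic sampling for K-way N-shot training/testing.

    `class_to_images` maps each class name to a list of its image paths.
    Returns (image, label-index) pairs for the support and query sets.
    """
    classes = random.sample(list(class_to_images.keys()), k_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        images = random.sample(class_to_images[cls], n_shot + n_query)
        support += [(img, label) for img in images[:n_shot]]
        query += [(img, label) for img in images[n_shot:]]
    return support, query
```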
4.2 DATASETS
Five benchmark datasets are used for performance evaluation and comparison with other methods in the literature, Mini-ImageNet (Ravi & Larochelle, 2016), CUB (Wah et al., 2011), Cars (Krause et al., 2013), Places (Zhou et al., 2017) and Plantae (Van Horn et al., 2018). More details about dataset settings are presented in Appendix A.1.
4.3 RESULTS
To demonstrate the performance of our SPDN method, we conduct a series of experiments under different few-shot classification settings. In the literature, there are two major scenarios for testing FSL methods: (1) intra-domain learning, where the training classes and test classes are from the same object domain, for example, both from the Mini-ImageNet classes, and (2) cross-domain learning, where the FSL model is trained on one dataset (e.g., Mini-ImageNet) and tested on another dataset (e.g., CUB). Certainly, the cross-domain scenario is more challenging.
4.3.1 INTRA-DOMAIN FSL RESULTS.
First, we conduct intra-domain FSL experiments on the Mini-ImageNet. Table 1 summarizes the performance comparison with state-of-the-art FSL methods mainly developed in the past two years. We also list the backbone network used for extracting the features of the input images. We can see that, for the 5-way 1-shot image classification task, our method (with a ResNet-10 backbone) outperforms the current best method (with a ResNet-12 backbone) from (Zhang et al., 2021a) by 5.42%. Another method which uses the same ResNet-10 backbone is the GNN+FT method (Tseng et al., 2020). Our method outperforms this method by 12.23%. For the 5-way 5-shot classification task, our method outperforms the current best by more than 5%, which is quite significant.
Second, we evaluate our method on intra-domain fine-grained image classification tasks on the CUB dataset. In this case, the FSL needs to learn subtle features to distinguish objects from close categories. Table 2 summarizes the performance results on 5-way 1-shot and 5-way 5-shot classification tasks. We can see that, for the one-shot classification task, our method outperforms the current best method, FRN (Wertheimer et al., 2021) by 6.72%. For the 5-shot classification task, our method improves the classification accuracy by 2.80%.
4.3.2 CROSS-DOMAIN FSL RESULTS.
The cross-domain few-shot learning is more challenging. Following existing methods, we train the model on the Mini-ImageNet object domain and test the trained model on other domains, including the CUB, Cars, Places and Plantae datasets. Table 3 summarizes the results for 5-way 1-shot classification (top) and 5-way 5-shot classification (bottom). We can see that our SPDN method has dramatically improved the classification accuracy on these cross-domain FSL tasks. For example, on the Cars dataset, our method outperforms the current best TPN+ATA (Wang & Deng, 2021) by 4.15%. On the Plantae dataset, the performance gain is 5.59%, which is quite significant. For the 5-way 5-shot classification task, the performance gains on these datasets are also very significant,
between 0.37-8.68%. This demonstrates that our SPDN method is able to learn the inherent visual relationship between the support and query samples and can generalize very well onto unseen classes in new object domains.
4.4 ABLATION STUDIES
In this section, we conduct ablation studies to further understand the proposed SPDN method and analyze the contributions of major algorithm components.
From the algorithm design perspective, our SPDN method has two major components: self-supervised learning (SSL) of the prime and dual networks, and self-supervised optimization (SSO) of the predicted query labels. We adopt the single GNN-based model (Tseng et al., 2020) as the baseline of our method, and the SSL and SSO algorithm components are added onto this baseline method. To understand the performance of these two algorithm components, in the following experiment, we train the SPDN method using training samples from the Mini-ImageNet. We conduct intra-domain few-shot image classification on the Mini-ImageNet and cross-domain few-shot image classification on the CUB, Cars, Places, and Plantae datasets. Table 4 summarizes the results for 5-way 1-shot and 5-way 5-shot image classification. The second column shows the intra-domain few-shot image classification results on the Mini-ImageNet. The remaining columns show the results for cross-domain classification. We can see that the self-supervised prime-dual network training is able to improve the classification accuracy by up to 1.8%. The performance gain achieved by the self-supervised optimization of the predicted query labels is much more significant, ranging from 7-10%. This dramatic performance gain was a surprise to us. In the following, we provide additional ablation studies to further understand this SSO algorithm module. Compared to the SSO module, the performance improvement from the first SSL module is relatively small. This is because the major new contribution of the SSL module is the self-supervised loss, which aims to further improve learning on the baseline GNN. However, it also successfully trains the dual network, which plays a very important role in the second SSO module: it is used to search for and optimize the predicted labels of the query samples, resulting in a major performance gain. We discuss the specific optimization results of our self-supervised optimization (SSO) module through an experiment in Appendix A.3.
In the following experiments, we attempt to further understand the behavior and performance of the SSO algorithm module. First, we conduct an experiment to understand the search and optimization process of SSO. Suppose L(Q) is the true label of the query samples. Let
L̃(Q) = L(Q) + λ ·∆L, (15)
be a label vector within the neighborhood of L(Q). Here, ∆L is a pre-generated random vector and λ is a disturbance coefficient that controls the amount of variation. With the label vector L̃(Q) and the query samples, we can predict the labels of the support samples using the dual network. Then, we can compute the prediction error El(S) as in (12). Figure 5(a) shows the label prediction error El(S) as a function of λ. This experiment was performed on 5-way 1-shot image classification on the CUB dataset. We can see that the minimum error is achieved at λ = 0. This implies that the ground-truth labels of the query samples have the minimum self-supervised label error El(S). This is a very important property of our SSO method. It suggests that, when the predicted query labels are not correct and the ground-truth labels are within their neighborhood, we can use the SSO method to search for these ground-truth labels using the minimum self-supervised support label error criterion.
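The sketch below reproduces the spirit of this disturbance experiment, assuming an illustrative dual-network interface; it sweeps λ and records the resulting support label error.

```python
import numpy as np

def disturbance_sweep(query_labels, support_labels, query_x, support_x, dual_net,
                      lambdas=np.linspace(-1.0, 1.0, 21), seed=0):
    """Sketch of the Eq. (15) experiment: perturb the true query labels by
    lambda * Delta_L and record the support label error E_l(S) for each lambda.

    `dual_net(query_x, labels, support_x)` returning predicted support labels
    is an assumed interface; Delta_L is a fixed random direction as in the text.
    """
    rng = np.random.default_rng(seed)
    delta = rng.standard_normal(query_labels.shape)   # pre-generated random vector
    errors = []
    for lam in lambdas:
        perturbed = query_labels + lam * delta        # Eq. (15)
        s_hat = dual_net(query_x, perturbed, support_x)
        errors.append(float(np.sum((support_labels - s_hat) ** 2)))
    return list(lambdas), errors
```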
During our self-supervised optimization of the predicted query labels, we choose a small neighborhood Ω around the predicted query labels L̂(Q) with a maximum distance ∆. This ∆ controls the number of search positions in the label space. If we search more positions, or candidate query labels, we can obtain smaller self-supervised label errors on the support samples El(S). Figure 5(b) plots the value of El(S) as a function of the number of search positions. This experiment was performed on 5-way 1-shot image classification on the CUB dataset. We can see that the error drops significantly with the number of searched positions. We recognize that, for each search position, we need to run the dual network once. This does introduce extra computational complexity, but the amount of performance gain is very appealing. In our experiments, we limited the number of search positions to 5, i.e., the nearest 5 label vectors (integer vectors) to the predicted query label.
5 CONCLUSION
In this work, we have successfully developed a novel prime-dual network structure for few-shot learning which explores the commutative relationship between the support set and the query set. The prime network performs the forward label prediction from the support set to the query set, while the dual network performs the reverse label prediction from the query set to the support set. This forward and reverse prediction process with commutative support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. We have established a self-supervised support error metric and used the learned dual network to optimize the predicted query labels during the testing stage. Our extensive experimental results on both intra-domain and cross-domain few-shot image classification have demonstrated that the proposed self-supervised prime-dual network learning and optimization significantly improve the performance of few-shot learning, especially for cross-domain few-shot learning tasks. We have also conducted detailed ablation studies to provide an in-depth understanding of the significant performance gain achieved by the self-supervised optimization process. The self-supervised prime-dual network design is general and can be naturally incorporated into other prediction and learning methods.
1. What is the focus and contribution of the paper on multi-class classification?
2. What are the strengths of the proposed approach, particularly in terms of its ability to handle few examples?
3. What are the weaknesses of the paper, especially regarding the training process and the potential similarity between the prime and dual networks?
4. Do you have any concerns about the use of an auxiliary learning module and its impact on the accuracy gain of the model?
5. Why did the authors choose to use 224x224 images instead of the standard 84x84 images in the miniImageNet dataset? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a prime-dual network to extract discriminative relationships between support and query examples of multiple classes with few examples. While the prime network predicts the labels of a query out of the support set, the dual network reverses the process and infers the category labels of the support set using query examples. For training, a weighted self-supervised loss combines the dual and prime network objectives, and the method is evaluated on intra-domain and cross-domain cases.
Review
Strengths:
The paper is well written and the classification improvements seem to be significant.
Weaknesses:
While the training stage is a stochastic process and the dual network is the reverse of the prime network in mapping support to query, I was wondering how the two networks would avoid becoming copies of each other by the end of training?
What happens if we attach the auxiliary learning module used here on top of ResNet and ablate, for example, a prototypical network? How can we be sure that the accuracy gain of the model is not obtained by the extra learning module?
I was wondering why 224x224 images are used while miniImageNet dataset contains 84x84 images? |
ICLR | Title
Self-Supervised Prime-Dual Networks for Few-Shot Image Classification
Abstract
We construct a prime-dual network structure for few-shot learning which establishes a commutative relationship between the support set and the query set, as well as a new self-supervision constraint for highly effective few-shot learning. Specifically, the prime network performs the forward label prediction of the query set from the support set, while the dual network performs the reverse label prediction of the support set from the query set. This forward and reverse prediction process with swapped support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. This unique constraint can be used to significantly improve the training performance of few-shot learning through coupled prime and dual network training. It can also be used as an objective function for optimization during the testing stage to refine the query label prediction results. Our extensive experimental results demonstrate that the proposed self-supervised commutative learning and optimization outperforms existing state-of-the-art few-shot learning methods by large margins on various benchmark datasets.
1 INTRODUCTION
Few-shot image classification aims to classify images from novel categories (query samples) based on very few labeled samples from each class (support images) (Hong et al., 2020a; Sun et al., 2021). During the training stage, the few-shot learning (FSL) model is given a set of support-query set pairs with class labels. Once successfully trained, the model needs to be tested on unseen classes. The major challenge here is that the number of available support samples N is very small, often N ≤ 5. In the extreme case of N = 1, this is called one-shot learning. In order to achieve this so-called learn-to-learn capability, the FSL model needs to capture the inherent visual or semantic relationship between the support samples and query samples, and more importantly, this learned relationship or prediction should generalize well onto unseen classes (Liu et al., 2020d).
A fundamental challenge in prediction is the following: if we know entity A and are trying to predict entity B, how do we know whether the prediction of B, denoted by B̂, is accurate? Is there any way that we can verify the accuracy of the prediction B̂? This is impossible in general, since B has no ground truth for us to evaluate or verify its prediction accuracy. If we can come up with an indirect approach to effectively evaluate the prediction accuracy, it is expected that the learning and prediction performance can be significantly improved.
In this work, we propose to explore a prime-dual commutative network design for effective prediction, specifically for few-shot image classification. As illustrated in Figure 1, the prime network Φ is the original network that learns the forward prediction from A to B̂ = Φ(A). The dual network Γ performs the reverse prediction from B to  = Γ(B). If we cascade these two networks together which establishes a prediction loop from A to B and then back to A, we have
 = Γ(B̂) = Γ(Φ(A)). (1)
Since A is given, which has the ground-truth value, the difference between A and its prime-dual loop prediction result  forms a self-supervision loss
LS = d(A, Â) = d(A,Γ(Φ(A))), (2)
where d is a distance metric function. This self-supervision loss LS can be used to improve the training performance based on the coupling between the prime and dual networks. Furthermore, it can be used to verify and adjust the prediction result by minimizing the self-supervision loss.
In this work, we propose to study this prime-dual network design with self-supervision for few-shot learning by exploiting the commutative relationship between the support set (entity A) and the query set (entity B). Specifically, the prime network learns to predict the labels of query samples using the support set with ground-truth labels as training samples. Meanwhile, the dual network learns to predict the labels of the support samples using the query set with ground-truth labels as training samples. For example, in 5-way 1-shot learning, the support set consists of 5 images from 5 classes with only one image per class. The query set also has 5 images from 5 classes. When training the prime and dual networks, the support set and the query set are switched as training samples. This forward and reverse prediction process with commutative support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. The prime-dual networks are jointly trained with the help of the self-supervision loss. This loss is also used during the testing stage to adjust and optimize the prediction results. Our extensive experimental results demonstrate that the proposed self-supervised commutative learning and optimization method outperforms existing state-of-the-art few-shot learning methods by a large margin on various benchmark datasets.
2 RELATED WORK AND UNIQUE CONTRIBUTIONS
Few-shot learning (FSL) aims to recognize instances from unseen categories with few labeled samples. There are three major categories of methods that have been developed for FSL. (1) Data Augmentation is the most direct method for few-shot learning, which explores different approaches to synthesize images to address the issue of few training samples. For example, self-training jigsaw augmentation (Chen et al., 2019) is able to synthesize new images by segmenting and reorganizing labeled and unlabeled gallery images. Mangla et al. (2020) apply self-supervision algorithms augmented with manifold mixup (Verma et al., 2019) for few-shot classification tasks. The F2GAN (Hong et al., 2020b) and MatchingGAN methods (Hong et al., 2020a) use generative adversarial networks (GANs) to construct high-quality samples for new image categories. (2) Optimization-based methods aim to learn a good initial network model for the classifier. This learned model can be then quickly adapted to novel classes using a few labeled samples. MAML (Finn et al., 2017) proposes to train a set of initialization models based on second-order gradients and meta-optimization. TAML (Jamal & Qi, 2019) reduces the bias introduced by the MAML algorithm to enforce equity between the tasks. In the Latent Embedding Optimization (LEO) method (Rusu et al., 2018), gradient-based optimization is performed in a low-dimensional latent space instead of the original high-dimensional parameter space. (3) Metric-based methods aim to learn a good metric space so that samples from novel categories can be effectively distinguished and correctly classified. For example, MatchingNet (Vinyals et al., 2016) applies a recurrent network to calculate the cosine similarity between samples. ProtoNet (Snell et al., 2017) compares features between samples in the Euclidean space. RelationNet (Sung et al., 2018) uses a CNN model and (Garcia & Bruna, 2017) uses the graph convolution network (GNN) to learn the metric relationship.
In this work, we also consider cross-domain FSL. For the cross-domain classification task, the model needs to generalize well from the source domain to a new or unseen target domain without accessing samples from the unseen domain during the training stage. Sun et al. (2021) propose a modelagnostic explanation-guided training method that dynamically finds and emphasizes the features which are important for the predictions. This improves the model generalization capability. To characterize the variation of image feature distribution across different domains, the LFT method (Tseng et al., 2020) learns the noise distribution by adding feature-wise transformation layers to the
image encoder. To avoid over-fitting on the source domain and increase the generalization capability to the target domain, the batch spectral regularization (BSR) method (Liu et al., 2020b) attempts to suppress all singular values of the batch feature matrices during pre-training. Another set of methods (Shankar et al., 2018; Volpi et al., 2018) learn to augment the input data with adversarial learning (Yang et al., 2020b) in order to generalize the task from the source domain to the unseen target domain.
In this work, we propose a commutative prime-dual network design for few-shot learning. In the literature, the mutual dependency and reciprocal relationship between multiple modules have been explored to achieve better performance. For example, (Xu et al., 2020) has developed a reciprocal cross-task architecture for image segmentation, which improves the learning efficiency and generation accuracy by exploiting the commonalities and differences across tasks. Sun et al. (2020) design a reciprocal learning network for human trajectory prediction, which consists of forward and backward prediction neural networks. The reciprocal learning enforces consistency between the forward and backward trajectory prediction, which helps each other to improve the learning performance and achieve higher accuracy. Zhu et al. (2017) design the CycleGAN contains two GANs forming a cycle network that can translate the images of the two domains into each other to achieve style transfer. Liu et al. (2021) develop a Temporal Reciprocal Learning (TRL) approach to fully explore the discriminative information from the disentangled features. Zhang et al. (2021b) design a support-query mutual guidance architecture for few-shot object detection.
Unique Contributions. Compared to existing work in the literature, the major contributions of this work include: (1) We propose a new prime-dual network design to explore the commutative relationship between support and query sets and establish a unique self-supervision constraint for few-shot learning. (2) We incorporate the self-supervision loss into the coupled prime-dual network training to improve the few-shot learning performance. (3) During the test stage, using the dual network to map the prediction results back to the support set domain and using the self-supervision constraint as an objective function, we develop an optimization-based scheme to verify and optimize the performance of few-shot learning. (4) Our proposed method has significantly advanced the state-of-the-art performance of few-shot image classification.
3 METHOD
In this section, we present our method of self-supervised prime-dual network (SPDN) learning and optimization for few-shot image classification.
3.1 SELF-SUPERVISED COMMUTATIVE LEARNING
Figure 2 provides an overview of our proposed method of self-supervised commutative learning and optimization for few-shot image classification. In a typical setting of K-way N-shot learning, N labeled image samples from each of the K classes form the support set. For example, in 5-way 1-shot learning, K = 5 and N = 1. Given a very small support set S = {S_{kn} | 1 ≤ k ≤ K, 1 ≤ n ≤ N}, the objective of FSL is to predict the labels of the query images Q = {Q_{km} | 1 ≤ k ≤ K, 1 ≤ m ≤ M} from the same K classes in M batches. During the training stage, the labels of both support and query samples are available. The prime network ΦS→Q for few-shot classification is trained on these support-query sets, aiming to learn and represent the inherent visual
or semantic relationship between the support and query images. Once successfully learned, we will apply this network to unseen classes. Specifically, in the test stage, given a labeled support set S′ = {S′kn|1 ≤ k ≤ K, 1 ≤ n ≤ N} from these K unseen classes, we need to predict the labels for the query set Q′ = {Q′km|1 ≤ k ≤ K, 1 ≤ m ≤M} also from these unseen classes. Therefore, the fundamental challenge of FSL is to characterize and learn the inherent relationship between the support set S and the query set Q. Once learned, we can then shift or transfer this relationship to S′ and Q′ of unseen classes to infer the labels of Q′. In this work, as discussed in the following section, we propose to establish a graph neural network (GNN) to characterize and learn this relationship.
We recognize that, within the framework of few-shot learning, the support set and the query set are in an equal and symmetric position to each other. More specifically, if we can learn to predict the labels of query set Q from support set S, certainly, we can switch their order, predicting the labels of the support set S from the query set Q using the same network architecture. This observation leads to an interesting commutative prime-dual network design for few-shot learning. As illustrated in Figure 2, we introduce a dual network ΓQ→S, which performs the reverse label prediction of the support set S from the query set Q. Let L(S) and L(Q) be the label vectors of S and Q, respectively. Let L̂(S) and L̂(Q) be the predicted labels. The forward prediction by the prime network can be written as
$\hat{L}(Q) = \Phi_{S \to Q}[L(S)],$   (3)
while the reverse prediction by the dual network can be written as
$\hat{L}(S) = \Gamma_{Q \to S}[L(Q)],$   (4)
If both networks Φ and Γ are well trained, and if we pass the label prediction output of the prime network as input to the dual network, then we expect that the predicted labels for the support set should be close to their ground-truth values. This leads to the following self-supervision loss
$L_{SS} = \|L(S) - \hat{L}(S)\|^2 = \|L(S) - \Gamma_{Q \to S}[\hat{L}(Q)]\|^2 = \|L(S) - \Gamma_{Q \to S}[\Phi_{S \to Q}[L(S)]]\|^2.$   (5)
This self-supervision constraint can be established on both the support set and the query set, resulting in coupled prime-dual network training. Figure 3 (a) and (b) show the training processes for the prime network and the dual network, respectively. Specifically, from the support set S, the prime network learns to predict the labels of the query set Q. As in existing few-shot learning, we have the loss $L_{PQ} = \|\hat{L}(Q) - L(Q)\|^2$ between the predicted query labels and their ground-truth values. Then, using the query samples and their predicted labels as input to the dual network $\Gamma_{Q \to S}$, we can predict the labels of the support set $\hat{L}(S)$ and compute the self-supervision loss $L_{PS} = \|\hat{L}(S) - L(S)\|^2$. These two losses are combined to form the loss function for training the prime network
$L_P = \|\hat{L}_{\Phi}(Q) - L(Q)\|^2 + \alpha \cdot \|\hat{L}_{\Phi,\Gamma}(S) - L(S)\|^2.$   (6)
α is a weighting parameter whose default value is set to be 0.5 in our experiments. Similarly, for the training of the dual network, as shown in Figure 3(b), its loss function is given by
$L_D = \|\hat{L}_{\Gamma}(S) - L(S)\|^2 + \alpha \cdot \|\hat{L}_{\Gamma,\Phi}(Q) - L(Q)\|^2.$   (7)
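To make the coupled objective concrete, the following is a minimal PyTorch sketch of Eqs. (6) and (7). The modules `prime_net` and `dual_net`, their interfaces, the tensor shapes, and the use of a mean-squared error in place of the plain squared L2 norm are illustrative assumptions, not the authors' exact implementation.

```python
# A minimal sketch of the coupled prime/dual objectives in Eqs. (6)-(7).
# Both hypothetical modules map (images, labels of one set, images of the other set)
# to label scores for the other set.
import torch
import torch.nn.functional as F

alpha = 0.5  # weighting parameter from Eqs. (6)/(7)

def prime_loss(prime_net, dual_net, support_x, support_y, query_x, query_y):
    query_pred = prime_net(support_x, support_y, query_x)        # forward prediction, Eq. (3)
    loss_pq = F.mse_loss(query_pred, query_y)                    # query label loss
    support_pred = dual_net(query_x, query_pred, support_x)      # reverse prediction, Eq. (4)
    loss_ps = F.mse_loss(support_pred, support_y)                # self-supervision term, Eq. (5)
    return loss_pq + alpha * loss_ps                             # Eq. (6)

def dual_loss(prime_net, dual_net, support_x, support_y, query_x, query_y):
    support_pred = dual_net(query_x, query_y, support_x)
    loss_ds = F.mse_loss(support_pred, support_y)
    query_pred = prime_net(support_x, support_pred, query_x)
    loss_dq = F.mse_loss(query_pred, query_y)
    return loss_ds + alpha * loss_dq                             # Eq. (7)
```

In practice, the two losses would be minimized jointly or in alternating steps, matching the coupled training illustrated in Figure 3.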
3.2 GRAPH NEURAL NETWORK FOR FEW-SHOT IMAGE CLASSIFICATION
The proposed prime and dual networks share the same network design, which will be discussed in this section. The only difference between these two networks is that their support and query samples are switched. In the following, we use the prime network as an example to explain its design.
The central task of few-shot learning is to characterize the inherent relationship between the query and support samples, based on which we can infer the labels of the query samples from the support samples (Tseng et al., 2020; Liu et al., 2020b). In this work, we propose to use a graph neural network (GNN) to model and analyze this relationship. In K-way N-shot learning, given K classes, each with N support samples $\{S_{kn}\}$, we need to learn the prime network to predict the labels of K query samples $\{Q_k\}$. This implies that, in each of the M training batches, we have $K \times (N + 1)$ support and query samples. As illustrated in Figure 4(a), we use a backbone network, for example ResNet-10 or ResNet-12, to extract a feature for each of these support and query samples. We denote their features by $S = \{s^t_{kn}\}$ and $Q = \{q^t_k\}$, where t represents the update iteration index in the GNN; initially, t = 0. These support-query sample features form the nodes of the GNN, denoted by $\{x^t_j \mid 1 \le j \le J\}$, $J = K \times (N + 1)$, for simplicity of notation. The edge between two graph nodes represents the correlation $\psi(x^t_i, x^t_j)$ between nodes $x^t_i$ and $x^t_j$. Note that our GNN has two groups of nodes: support sample nodes and query sample nodes. The support sample nodes have labels, while the labels of the query samples need to be predicted by the prime network. If $x^t_i$ and $x^t_j$ are both support nodes, we have
$\psi(x^t_i, x^t_j) = \begin{cases} 1 & \text{if } L(x^t_i) = L(x^t_j), \\ 0 & \text{if } L(x^t_i) \neq L(x^t_j). \end{cases}$   (8)
Here, L(·) represents the label of the corresponding support sample. Since the labels of the query nodes are unknown, the correlations on edges linked to these query nodes need to be learned by the GNN. Initially, we set them to random values between 0 and 1.
Each node of the GNN combines features from its neighboring nodes, with the corresponding correlations as weights, and updates its own feature by learning a multi-layer perceptron (MLP) network $G_o[\cdot]$ as follows
$x^{t+1}_j = G_o\!\left[\, \sum_{i=1}^{J} x^t_i \cdot \psi(x^t_i, x^t_j) \right].$   (9)
At each edge, another MLP network $G_e[\cdot, \cdot]$ is learned to predict the correlation between two graph nodes,
$\psi(x^t_i, x^t_j) = G_e[x^t_i, x^t_j],$   (10)
whose ground-truth values are obtained using the scheme discussed above. The feature generated by the prime GNN is then passed to a classification network to predict the query labels. Both the prime and dual GNNs are jointly trained with their final classification networks.
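As a concrete illustration of Eqs. (8)-(10), the sketch below implements one GNN update in PyTorch, with a node-update MLP standing in for $G_o$ and an edge MLP for $G_e$. The layer sizes, the sigmoid on the edge scores, and the helper for the ground-truth support adjacency are assumptions made for illustration rather than the authors' exact architecture.

```python
# A minimal sketch of one GNN layer: node features are aggregated with edge
# correlations as weights (Eq. 9), and correlations are re-estimated from pairs
# of updated node features (Eq. 10).
import torch
import torch.nn as nn

class GNNLayer(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.node_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))    # G_o
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))  # G_e

    def forward(self, x, psi):
        # x:   (J, dim) node features for the K*(N+1) support/query samples
        # psi: (J, J) edge correlations; support-support entries follow Eq. (8)
        agg = psi @ x                                    # correlation-weighted aggregation
        x_new = self.node_mlp(agg)                       # Eq. (9)
        J = x_new.size(0)
        pairs = torch.cat([x_new.unsqueeze(1).expand(J, J, -1),
                           x_new.unsqueeze(0).expand(J, J, -1)], dim=-1)
        psi_new = torch.sigmoid(self.edge_mlp(pairs)).squeeze(-1)   # Eq. (10)
        return x_new, psi_new

def support_adjacency(labels):
    # Ground-truth correlations between labeled support nodes (Eq. 8): 1 if same class, else 0.
    return (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
```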
3.3 SELF-SUPERVISED OPTIMIZATION OF FEW-SHOT IMAGE CLASSIFICATION
Besides improving the training performance through mutual enforcement, the proposed self-supervised prime-dual network design can also be used in the testing stage to optimize the label prediction of query samples. Specifically, we can use the dual network to refine and optimize the label prediction results obtained by the prime network. As illustrated in Figure 4(b), given a support
set S and a query set Q, the support set has class labels L(S). Let L̂(Q) be the prediction result, the output of the softmax layer of the classification network. In existing approaches of few-shot learning or other network prediction scenarios, we are not able to verify if the prediction is accurate or not since the ground-truth is not available for test samples. However, in this work, with the dual network ΓQ→S being successfully trained, we can use the prediction result L̂(Q) as input to the dual network to predict the class labels of the original support samples
$\hat{L}(S) = \Gamma_{Q \to S}[\hat{L}(Q)].$   (11)
Note that these support samples DO have ground-truth labels L(S). Define the label prediction error by
$E_l(S) = \|L(S) - \hat{L}(S)\|^2.$   (12)
We assume that the correct query label vector $L^*(Q)$ lies within the neighborhood of the prediction result $\hat{L}(Q)$. Let Ω be the set of candidate assignments of query labels within the neighborhood of $\hat{L}(Q)$. For example,
$\Omega = \{\tilde{L}(Q) : \|\tilde{L}(Q) - \hat{L}(Q)\|^2 \le \Delta\},$   (13)
where ∆ is a given threshold for the label vector distance. We then search the candidate query labels L̃(Q) within the neighborhood set Ω to minimize the support label prediction error El(S) in (12). The optimized prediction of the query samples is given by
$L^*(Q) = \arg\min_{\tilde{L}(Q) \in \Omega} \| L(S) - \Gamma_{Q \to S}[\tilde{L}(Q)] \|^2.$   (14)
From the experimental results, we will see that this unique self-supervised optimization of the query label prediction is able to significantly improve the few-shot image classification performance.
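The following is a minimal sketch of the test-time search in Eq. (14). The hypothetical `dual_net` module, the representation of the neighborhood Ω by the top-scoring hard label assignments around the prime network's soft prediction, and the default candidate count are simplifying assumptions made for illustration.

```python
# A minimal sketch of self-supervised optimization (SSO): candidate query-label
# assignments near the predicted labels are scored by how well the dual network
# reconstructs the known support labels (Eq. 12), and the best candidate is kept.
import torch

def refine_query_labels(dual_net, support_x, support_y, query_x, query_pred, num_candidates=5):
    topk = torch.topk(query_pred, k=num_candidates, dim=-1).indices   # (num_query, k)
    best_labels, best_err = None, float("inf")
    for c in range(num_candidates):
        candidate = torch.zeros_like(query_pred)
        candidate.scatter_(-1, topk[:, c:c + 1], 1.0)                 # a hard assignment in Omega
        support_pred = dual_net(query_x, candidate, support_x)        # Eq. (11)
        err = torch.sum((support_y - support_pred) ** 2).item()       # E_l(S), Eq. (12)
        if err < best_err:
            best_err, best_labels = err, candidate
    return best_labels                                                 # L*(Q), Eq. (14)
```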
4 EXPERIMENTAL RESULTS
In this section, we provide experimental results on various benchmark datasets to demonstrate the performance of our proposed SPDN method for few-shot learning.
4.1 IMPLEMENTATION DETAILS
We use ResNet-10 as the backbone of our feature encoder. The input images are resized to 224×224 and the output feature vector size is 1 × 1 × 512. We choose the Adam optimizer with a learning rate of 0.01 and a batch size of 64, and train for 400 epochs. In the episodic meta-training stage, we use the graph neural network (GNN) discussed in the previous section to generate the feature embedding for query samples. The prime network $\Phi_{S \to Q}$ and the dual network $\Gamma_{Q \to S}$ are jointly trained; both networks are trained for 400 epochs with 100 episodes per epoch. In each episode, we randomly select K categories (K=5, 5-way) from the training set. Then, we randomly select N samples (N=1 or 5 for 1-shot or 5-shot) from each category to compose the support set and the query set, respectively. In the test stage, we use the average of 1000 trials as the final result for all the experiments. For each trial, we randomly select K categories from the test set. Similar to the training stage, N (1 or 5) samples are randomly selected as the support set and 15 samples as the query set from each category.
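For completeness, the sketch below shows one way the episodic sampling just described could be implemented; the `images_by_class` structure and the 15-image query split are assumptions for illustration.

```python
# A minimal sketch of K-way N-shot episode sampling with a separate query split.
import random

def sample_episode(images_by_class, K=5, N=1, num_query=15):
    classes = random.sample(list(images_by_class.keys()), K)
    support, query = [], []
    for label, cls in enumerate(classes):
        picks = random.sample(images_by_class[cls], N + num_query)
        support += [(img, label) for img in picks[:N]]
        query += [(img, label) for img in picks[N:]]
    return support, query
```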
4.2 DATASETS
Five benchmark datasets are used for performance evaluation and comparison with other methods in the literature, Mini-ImageNet (Ravi & Larochelle, 2016), CUB (Wah et al., 2011), Cars (Krause et al., 2013), Places (Zhou et al., 2017) and Plantae (Van Horn et al., 2018). More details about dataset settings are presented in Appendix A.1.
4.3 RESULTS
To demonstrate the performance of our SPDN method, we conduct a series of experiments under different few-shot classification settings. In the literature, there are two major scenarios for testing FSL methods: (1) intra-domain learning, where the training classes and test classes are from the same object domain, for example, both from the Mini-ImageNet classes, and (2) cross-domain learning, where the FSL is trained on one dataset (e.g., Mini-ImageNet) and the testing is performed on another dataset (e.g., CUB). Certainly, the cross-domain scenario is more challenging.
4.3.1 INTRA-DOMAIN FSL RESULTS.
First, we conduct intra-domain FSL experiments on the Mini-ImageNet. Table 1 summarizes the performance comparison with state-of-the-art FSL methods mainly developed in the past two years. We also list the backbone network used for extracting the features of the input images. We can see that, for the 5-way 1-shot image classification task, our method (with ResNet-10 backbone) outperforms the current best method (with ResNet-12 backbone) from (Zhang et al., 2021a) by 5.42%. Another method which uses the same ResNet-10 backbone is the GNN+FT method (Tseng et al., 2020). Our method outperforms this method by 12.23%. For the 5-way 5-shot classification task, our method outperforms the current best by more than 5%, which is quite significant.
Second, we evaluate our method on intra-domain fine-grained image classification tasks on the CUB dataset. In this case, the FSL needs to learn subtle features to distinguish objects from close categories. Table 2 summarizes the performance results on 5-way 1-shot and 5-way 5-shot classification tasks. We can see that, for the one-shot classification task, our method outperforms the current best method, FRN (Wertheimer et al., 2021) by 6.72%. For the 5-shot classification task, our method improves the classification accuracy by 2.80%.
4.3.2 CROSS-DOMAIN FSL RESULTS.
The cross-domain few-shot learning is more challenging. Following existing methods, we train the model on the Mini-ImageNet object domain and test the trained model on other domains, including the CUB, Cars, Places and Plantae datasets. Table 3 summarizes the results for 5-way 1-shot classification (top) and 5-way 5-shot classification (bottom). We can see that our SPDN method has dramatically improved the classification accuracy on these cross-domain FSL tasks. For example, on the Cars dataset, our method outperforms the current best TPN+ATA (Wang & Deng, 2021) by 4.15%. On the Plantae dataset, the performance gain is 5.59%, which is quite significant. For the 5-way 5-shot classification task, the performance gains on these datasets are also very significant,
between 0.37-8.68%. This demonstrates that our SPDN method is able to learn the inherent visual relationship between the support and query samples and can generalize very well onto unseen classes in new object domains.
4.4 ABLATION STUDIES
In this section, we conduct ablation studies to further understand the proposed SPDN method and analyze the contributions of major algorithm components.
From an algorithm design perspective, our SPDN method has two major components: self-supervised learning (SSL) of the prime and dual networks, and the self-supervised optimization (SSO) of the predicted query labels. We adopt the single GNN-based model (Tseng et al., 2020) as the baseline of our method, and the SSL and SSO algorithm components are added onto this baseline method. To understand the performance of these two algorithm components, in the following experiment, we train the SPDN method using training samples from the Mini-ImageNet. We conduct intra-domain few-shot image classification on the Mini-ImageNet and cross-domain few-shot image classification on the CUB, Cars, Places, and Plantae datasets. Table 4 summarizes the results for 5-way 1-shot and 5-way 5-shot image classification. The second column shows the intra-domain few-shot image classification results on the Mini-ImageNet. The remaining columns show the cross-domain classification results. We can see that the self-supervised prime-dual network training is able to improve the classification accuracy by up to 1.8%. The performance gain achieved by the self-supervised optimization of the predicted query labels is much more significant, ranging from 7-10%. This dramatic performance gain is a surprise to us. In the following, we will provide additional ablation studies to further understand this SSO algorithm module. Compared to the SSO module, the performance improvement by the first SSL module is relatively small. This is because the major new contribution of the SSL module is the self-supervised loss, which aims to further improve the learning on the baseline GNN. However, it has successfully trained a dual network, which plays a very important role in the second SSO module. It is used to search and optimize the predicted labels of the query samples, resulting in a major performance gain. We discuss the specific optimization results of our self-supervised optimization (SSO) module through an experiment in Appendix A.3.
In the following experiments, we attempt to further understand the behavior and performance of the SSO algorithm module. First, we conduct an experiment to understand the search and optimization process of SSO. Suppose L(Q) is the true label of the query samples. Let
$\tilde{L}(Q) = L(Q) + \lambda \cdot \Delta L,$   (15)
be a label vector within the neighborhood of L(Q). Here, ∆L is a pre-generated random vector and λ is a disturbance coefficient to control the amount of variation. With the label vector L̃(Q) and the query samples, we can predict the labels of the support samples using the dual network. Then, we can compute the prediction error El(S) as in (12). Figure 5(a) shows the label prediction error El(S) as a function of λ. This experiment was performed on 5-way 1-shot image classification on the CUB dataset. We can see that the minimum error is achieved at λ = 0. This implies the ground-truth labels of the query samples have the minimum self-supervised label error El(S). This is a very important property of our SSO method. It suggests that, when the predicted query labels are not correct and the ground-truth labels are within their neighborhood, we can use the SSO method to search for these ground-truth labels using the minimum self-supervised support label error criterion.
During our self-supervised optimization of the predicted query labels, we choose a small neighborhood Ω around the predicted query labels L̂(Q) with a maximum distance ∆. This ∆ controls the number of search positions in the label space. If we search more positions or candidate query labels, we can obtain smaller self-supervised label errors of the support samples El(S). Figure 5(b) plots the value of El(S) as a function of the number of search positions. This experiment was performed on 5-way 1-shot image classification on the CUB dataset. We can see that the error drops significantly with the number of searched positions. We recognize that, for each search position, we need to run the dual network once. This does introduce extra computational complexity, but the amount of performance gain is very appealing. In our experiments, we limited the number of search positions to 5, i.e., the nearest 5 label vectors (integer vectors) to the predicted query label.
5 CONCLUSION
In this work, we have successfully developed a novel prime-dual network structure for few-shot learning which explores the commutative relationship between the support set and the query set. The prime network performs the forward label prediction from the support set to the query set, while the dual network performs the reverse label prediction from the query set to the support set. This forward and reverse prediction process with commutative support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. We have established a self-supervised support error metric and used the learned dual network to optimize the predicted query labels during the testing stage. Our extensive experimental results on both intra-domain and cross-domain few-shot image classification have demonstrated that the proposed self-supervised prime-dual network learning and optimization have significantly improved the performance of few-shot learning, especially for cross-domain few-shot learning tasks. We have also conducted detailed ablation studies to provide in-depth understanding of the significant performance gain achieved by the self-supervised optimization process. The self-supervised prime-dual network design is general and can be naturally incorporated into other prediction and learning methods.
A APPENDIX
In this appendix, we provide more details of experimental settings and additional results to further understand the performance of our proposed method.
A.1 DATASET
In our experiments, the following 5 datasets are used for performance evaluations and comparisons.
(1) Mini-ImageNet consists of 100 categories randomly selected from ImageNet (Deng et al., 2009), and each category has 600 samples of size 84 × 84. The 100 categories are divided into a training set with 64 categories, a validation set with 16 categories, and a testing set with 20 categories. (2) CUB is a fine-grained dataset with 200 bird species mainly living in North America (Wah et al., 2011). We randomly split the dataset into 100, 50, and 50 classes for training, validation, and testing, respectively. (3) Cars contains 16,185 images of 207 fine-grained car types, which consist of 10 BMW models and 197 other car types (Krause et al., 2013). We randomly selected 196 categories, including 98 for training, 49 for validation, and 49 for testing. (4) Places is a dataset of scene images (Zhou et al., 2017), containing 73,000 training images from 365 scene categories, which are divided into 183 categories for training, 91 for validation, and 91 for testing. (5) Plantae is a subset of the iNat2017 dataset (Van Horn et al., 2018), which contains 200 types of plants and a total of 47,242 images. We split them into 100 classes for training, 50 for validation, and 50 for testing.
The Mini-ImageNet is the most popular benchmark for few-shot classification. It is usually used as a baseline dataset for model training. The CUB dataset is more frequently used for few-shot fine-grained classification tasks. The Cars, Places and Plantae datasets are used for model testing in cross-domain few-shot classification tasks.
A.2 THE VISUALIZATION OF FEATURES IN SELF-SUPERVISED LEARNING
The proposed SPDN method incorporates the self-supervised constraint into the training process, aiming to improve the quality of learned features and the generalization capability of few-shot learning. Figure 6 shows the t-SNE visualization of the learned features of 100 samples per class from the Mini-ImageNet dataset in a 5-way 5-shot setting. We can see that, with self-supervised learning, the features of each class are more concentrated into clusters.
A.3 SELF-SUPERVISED OPTIMIZATION (SSO) MODULES
The proposed self-supervised optimization (SSO) module aims to correct the predicted query labels. In the following experiment, we try to understand how many incorrect query label predictions are successfully corrected by the SSO module. Table 5 shows the results for 5-way 1-shot classification on the CUB dataset. We keep track of 75 randomly selected query samples. If we predict the query labels using only the prime network without the SSO (before SSO), the number of query samples with incorrect labels is 57, and the number of correct ones is 18, which
is very low. After we apply the SSO, the number of query samples with incorrect labels is reduced to 45, and the number of correct ones increases to 30. We can examine this correction process in more detail. The SSO module has corrected the labels of 15 samples, as shown in the third row (Incorrect → Correct Label) of the table. However, it has also mis-corrected the labels of 3 samples, as shown in the last row (Correct → Incorrect Label) of the table. In our experiments, we have observed that the SSO module corrects the labels of many more query samples than it mis-corrects. This implies that the dual network and the self-supervision constraint are working very well for few-shot learning. This explains the significant performance gain achieved by the proposed self-supervised prime-dual network method.
A.4 EXTENSION TO N -SHOT IMAGE CLASSIFICATION
In the main paper, we have used 5-way 1-shot image classification as an example to present our method of self-supervised prime-dual network (SPDN) learning and optimization for few-shot image classification. This method can be naturally extended to generic K-way N-shot image classification. Figure 7 illustrates an example of the extension to 5-way 5-shot. In this case, each class, in both training and test stages, has 5 support samples and one query sample. In the prime network, we use these 5 support samples to predict the label of the query sample. To ensure that the dual network shares the same network structure as the prime network, for the reverse prediction, we randomly select one sample (denoted by s0) from the support set and switch it with the query sample q0. During the training and inference of the dual network, this updated support set is used to predict the label of s0, which is then compared to its ground-truth label to compute the self-supervised loss. This loss is used for joint prime-dual network training, as well as the self-supervised optimization of the label prediction for the query sample q0.
A.5 FURTHER UNDERSTANDING OF THE SELF-SUPERVISED OPTIMIZATION OF QUERY LABEL PREDICTION
In our proposed SPDN method, the self-supervised optimization of the query label prediction plays an important role and improves the performance significantly. In this section, we provide more experimental results to demonstrate and further understand the performance of this algorithm module. Figure 8 shows 6 examples of 5-way 1-shot image classification. Initially, the predicted labels for these query samples are incorrect. Then, we perform a self-supervised search of the query labels within the neighborhood of the predicted labels. We use these candidate labels as input to the dual network to predict the labels of the support samples. The label prediction error of the support
samples is used as the optimization objective. In Figure 8, under each query sample, we show the decrease of the optimization objective (support label error) with the number of searched candidate query labels. These results show that it is sufficient to search 5-8 candidate query label vectors.
It should be noted that the self-supervised optimization of query label prediction can correct incorrect label predictions, adjusting them into correct ones. Certainly, it can also make mistakes and mis-correct the query label prediction, adjusting correct label predictions into incorrect ones. However, the probability of mis-correction is much lower. For example, Table 6 shows the percentages of correct and incorrect adjustments made by the optimization module on the Cars dataset. Specifically, the percentage of correct adjustments from incorrect query labels into correct ones is 21.6%, while the percentage of incorrect adjustments is 5.7%. This results in a performance improvement of 15.8% in the overall few-shot image classification, from 32.8% to 48.6%, which is quite significant.
A.6 FURTHER DISCUSSION OF THE PROPOSED METHOD
The key idea and motivation behind our dual network design is as follows: one central challenge in network prediction is that we have no way to check whether the prediction is accurate, since we do not have the ground truth. To address this issue, we develop the prime-dual network structure, where the successfully learned dual network is used as a verification module to check whether the prediction results are good enough. It maps the prediction results back to the currently known data. We establish the self-supervised loss defined on this known data and use it as the objective function to perform a local search and refinement of the prediction results. This process is unique and contributes significantly to the overall performance. The prime network is the baseline GNN+FT network using support samples to predict query samples. The dual network is another GNN+FT network (in the opposite direction) using query samples to predict support samples. These two networks form a prediction loop, and a self-supervised loss is then derived. We implement this new idea on the GNN+FT few-shot learning method to demonstrate its performance. The proposed idea is generic and can be applied to other methods, even in other prediction and learning problems, which will be studied in our future work. Our proposed idea is new. However, it does introduce additional complexity. According to our estimation, it will add about 40-60% extra complexity on top of the existing baseline, since a majority of the computation, such as feature extraction, does not need to be recomputed during the search process. In our future work, we plan to develop schemes to reduce the complexity of the self-supervised optimization, for example by merging multiple search steps into one execution cycle. | 1. What is the main contribution of the paper regarding few-shot learning and GNNs?
2. What are the strengths of the proposed approach, particularly in its performance gains?
3. Do you have any concerns or questions about the motivation and design of the dual network?
4. How does the reviewer assess the relation between this work and other graph-based methods, such as EGNN, DPGN, and Mutual CRF-GNN?
5. Why does the reviewer think the method is more effective for cross-domain few-shot classification?
6. Is there any concern about the computation overhead of the dual network? | Summary Of The Paper
Review | Summary Of The Paper
[1] This paper follows the line of metric-based few-shot learning methods with GNNs. The novelty of this paper is to design dual GNN graphs to capture the consistency of label prediction from the support set to the query set and the reverse task (from the query set to the support set).
[2] The proposed method achieves considerable performance gains compared with other state-of-the-art methods.
Review
Pros:
[1] This paper extends the usage of Graph Neural Networks in few-shot image classification tasks by designing dual graphs and regularizing the consistency of predictions from the support set to the query set and from the query set to the support set.
[2] The experiments are comprehensive. The author conducted experiments on intra-domain classification and inter-domain classification tasks.
======================
Cons:
[1] About motivation. The novelty of this paper is to design a dual network to make bi-directional predictions. However, the motivation is not clearly stated in the paper. In the testing stage, the authors only use the support samples to make label predictions in the query set, so why can the consistency loss improve the performance in few-shot learning? I suggest the authors visualize the feature distributions of samples in the support set and the query set to explain how the consistency loss influences the feature distributions.
[2] About the model design. Do the prime and dual networks share the same parameters, or are they trained individually? At test time, how do the authors use the dual network?
[3] About the relation between this work and EGNN[A], DPGN[B], and Mutual CRF-GNN[C]. These methods follow the line of graph-based methods and also focus on the relation between the query samples and support samples. The author should carefully compare their method with these papers. (i) These papers also design losses to supervise the prediction of support samples after message passing and also use complex graph network structures to capture the relations in edges, distributions, and label predictions. Does this paper share similar ideas? I would like to note that during message passing in a GNN, information also flows from the query set to the support set. (ii) This paper should add the semi-supervised learning settings of EGNN and DPGN to show its effectiveness.
A. Edge-labeling Graph Neural Network for Few-shot Learning, CVPR19
B. Distribution propagation graph network for few-shot learning, CVPR20
C. Mutual CRF-GNN for few-shot learning, CVPR21
[4] It seems that this work is more effective for cross-domain few-shot classification than intra-domain few-shot classification. The author should explain why their methods are effective as I fail to find any specific design for cross-domain classification.
[5] As the author recognized, the dual network will be forwarded for each neighborhood search, resulting in much extra computation overhead. I think it is a bit unfair to compare their methods with other methods in Table 1. |
ICLR | Title
Local Binary Pattern Networks for Character Recognition
Abstract
Memory and computation efficient deep learning architectures are crucial to the continued proliferation of machine learning capabilities to new platforms and systems, especially, mobile sensing devices with ultra-small resource footprints. In this paper, we demonstrate such an advance for the well-studied character recognition problem. We use a strategy different from the existing literature by proposing local binary pattern networks or LBPNet that can learn and perform bit-wise operations in an end-to-end fashion. Binarization of operations in convolutional neural networks has shown promising results in reducing the model size and computing efficiency. Characters consist of some particularly structured strokes that are suitable for binary operations. LBPNet uses local binary comparisons and random projection in place of conventional convolution (or approximation of convolution) operations, providing important means to improve memory and speed efficiency that is particularly suited for small footprint devices and hardware accelerators. These operations can be implemented efficiently on different platforms including direct hardware implementation. LBPNet demonstrates its particular advantage on the character classification task where the content is composed of strokes. We applied LBPNet to benchmark datasets like MNIST, SVHN, DHCD, ICDAR, and Chars74K and observed encouraging results. INTRODUCTION Convolutional Neural Networks (CNN) (LeCun et al., 1989a) have had a notable impact on many applications. Modern CNN architectures such as AlexNet (Krizhevsky et al., 2012), VGG (Simonyan & Zisserman, 2015), GoogLetNet (Szegedy et al., 2015), and ResNet (He et al., 2016) have greatly advanced the use of deep learning techniques (Hinton et al., 2006) into a wide range of computer vision applications (Girshick et al., 2014; Long et al., 2015). As deep learning models mature and take on increasingly complex pattern recognition tasks, these demand tremendous computational resources with correspondingly higher performance machines and accelerators that continue to be fielded by system designers. It also limits their use to applications that can afford the energy and/or cost of such systems. By contrast, the universe of embedded devices especially when used as intelligent edge-devices in the emerging distributed systems presents a higher range of potential applications from augmented reality systems to smart city systems. Optical character recognition (OCR) particularly in the wild, shown in Fig. 1, has become an essential task for computer vision applications such as autonomous driving and mixed reality. There existed CNN-based methods (Yin et al., 2013) and other probabilistic learning methods (Yao et al., 2014a;b) handling the OCR tasks. However, the CNN-based models are computation demanding, and the probabilistic learning methods required more patches, e.g., empirical rule, clustering, error correction, or boosting to improve accuracy. Various methods have been proposed to perform network pruning (LeCun et al., 1989b; Guo et al., 2016), compression (Han et al., 2015; Iandola et al., 2016), or sparsification(Liu et al., 2015). Impressive results have been achieved lately by using binarization of selected operations in CNNs (Courbariaux et al., 2015; Hubara et al., 2016; Rastegari et al., 2016). 
At the core, these efforts seek to approximate the internal computations from floating point to binary while keeping the underlying convolution operation exact or approximate, but the nature of character images has not been fully utilized yet.
N/A
Memory and computation efficient deep learning architectures are crucial to the continued proliferation of machine learning capabilities to new platforms and systems, especially, mobile sensing devices with ultra-small resource footprints. In this paper, we demonstrate such an advance for the well-studied character recognition problem. We use a strategy different from the existing literature by proposing local binary pattern networks or LBPNet that can learn and perform bit-wise operations in an end-to-end fashion. Binarization of operations in convolutional neural networks has shown promising results in reducing the model size and computing efficiency. Characters consist of some particularly structured strokes that are suitable for binary operations. LBPNet uses local binary comparisons and random projection in place of conventional convolution (or approximation of convolution) operations, providing important means to improve memory and speed efficiency that is particularly suited for small footprint devices and hardware accelerators. These operations can be implemented efficiently on different platforms including direct hardware implementation. LBPNet demonstrates its particular advantage on the character classification task where the content is composed of strokes. We applied LBPNet to benchmark datasets like MNIST, SVHN, DHCD, ICDAR, and Chars74K and observed encouraging results.
INTRODUCTION
Convolutional Neural Networks (CNN) (LeCun et al., 1989a) have had a notable impact on many applications. Modern CNN architectures such as AlexNet (Krizhevsky et al., 2012), VGG (Simonyan & Zisserman, 2015), GoogLetNet (Szegedy et al., 2015), and ResNet (He et al., 2016) have greatly advanced the use of deep learning techniques (Hinton et al., 2006) into a wide range of computer vision applications (Girshick et al., 2014; Long et al., 2015). As deep learning models mature and take on increasingly complex pattern recognition tasks, these demand tremendous computational resources with correspondingly higher performance machines and accelerators that continue to be fielded by system designers. It also limits their use to applications that can afford the energy and/or cost of such systems. By contrast, the universe of embedded devices especially when used as intelligent edge-devices in the emerging distributed systems presents a higher range of potential applications from augmented reality systems to smart city systems.
Optical character recognition (OCR) particularly in the wild, shown in Fig. 1, has become an essential task for computer vision applications such as autonomous driving and mixed reality. There existed CNN-based methods (Yin et al., 2013) and other probabilistic learning methods (Yao et al., 2014a;b) handling the OCR tasks. However, the CNN-based models are computation demanding, and the probabilistic learning methods required more patches, e.g., empirical rule, clustering, error correction, or boosting to improve accuracy.
Various methods have been proposed to perform network pruning (LeCun et al., 1989b; Guo et al., 2016), compression (Han et al., 2015; Iandola et al., 2016), or sparsification(Liu et al., 2015). Impressive results have been achieved lately by using binarization of selected operations in CNNs (Courbariaux et al., 2015; Hubara et al., 2016; Rastegari et al., 2016). At the core, these efforts seek to approximate the internal computations from floating point to binary while keeping the underlying convolution operation exact or approximate, but the nature of character images has not been fully utilized yet.
We propose LBPNet as a light-weighted and compact deep-learning approach that can leverage the nature of character images since LBPNet is sensitive to discriminative outlines and strokes. Precisely, we focus on the task of character classification by exploring an alternative using nonconvolutional operations that can be executed in an architectural and hardware-friendly manner, trained in an end-to-end fashion from scratch (distinct to the previous attempts of binarizing the CNN operations). We note that this work has roots in research before the current generation of deep learning methods. Namely, the adoption of local binary patterns (LBP) (Ojala et al., 1996), which uses a number of predefined sampling points that are mostly on the perimeter of a circle, to compare with the pixel value at the center. The combination of multiple logic outputs (“1” if the value on a sampling point is greater than that on the center point and “0” otherwise) gives rise to a surprisingly rich representation (Wang et al., 2009) about the underlying image patterns and has shown to be complementary to the SIFT-kind features (Lowe, 2004). However, LBP has been under-explored in the deep learning research community where the feature learning part in the existing deep learning models (Krizhevsky et al., 2012; He et al., 2016) primarily refers to the CNN features in a hierarchy. We found LBP operations particularly suitable in recognizing characters that consist of structured strokes. Despite recent attempts such as (Juefei-Xu et al., 2017), the logic operation (comparison) in LBP has not been used in the existing CNN frameworks due to the intrinsic difference between the convolution and comparison operations.
Several features make LBPNet distinct from previous attempts. All the binary logic operations in LPBNet are directly learned, which is in a stark distinction to previous attempts that try to either binarize CNN operations (Hubara et al., 2016; Rastegari et al., 2016) or to approximate LBP with convolution operations (Juefei-Xu et al., 2017). Further, the LBP kernels in previous works are fixed upon initialized because the lack of a suitable mechanism to train the sampling patterns. Instead, we derive a differentiable function to learn the binary pattern and adopt random projection for the fusion operations. Fig. 2 illustrates the overview of LBPNet. The resulting LBPNet is very suitable for the character recognition tasks because the comparison operation can capture and comprehend the sharp outlines and distinct strokes among character images.Experiments show that thus configured LBPNet achieves the state-of-the-art results on benchmark datasets while accomplishing a significant improvement in the parameter size reduction gain (hundreds) and speedup (thousand times faster). That means LBPNet efficiently utilizes every storage bit and computation unit through the learning of image representations.
RELATED WORKS
Related works regarding model reduction of CNN fall along four primary dimensions.
Character recognition. Besides CNN-based methods for character recognition like BNN (Hubara et al., 2016), random forests (Yao et al., 2014a;b) have also been prevalent. However, the random forest methods usually require one or more techniques such as feature extraction, clustering, or error correction codes to improve the recognition accuracy. Our method, instead, provides a compact end-to-end and computation-efficient solution to character recognition.
Binarization for CNN. Binarizing CNNs to reduce the model size has been an active research direction (Courbariaux et al., 2015; Hubara et al., 2016; Rastegari et al., 2016). By binarizing both weights and activations, the model size is reduced, and a logic operation can replace the multiplication. Non-binary operations like batch normalization with scaling and shifting are still in floating point (Hubara et al., 2016). XNOR-Net (Rastegari et al., 2016) introduces an extra scaling layer to compensate for the loss from binarization and achieves state-of-the-art accuracy on ImageNet. Both BNN and XNOR-Net can be considered discretizations of real-valued CNNs, while the core of the two works is still spatial convolution.
CNN approximation for LBP operation. Recent work on local binary convolutional neural networks (LBCNN) in (Juefei-Xu et al., 2017) takes an opposite direction to BNN (Hubara et al., 2016). LBCNN utilizes subtraction between pixel values together with a ReLU layer to simulate the LBP operations. During training, the sparse binarized difference filters are fixed; only the successive 1-by-1 convolution, serving as a channel fusion mechanism, and the parameters in the batch normalization layers are learned. However, the feature maps of LBCNN are still floating-point numbers, resulting in significantly increased model complexity as shown in Table 2. By contrast, LBPNet learns binary patterns and logic operations from scratch, resulting in orders of magnitude reduction in memory size and an increase in testing speed over LBCNN.
Active or deformable convolution. Among the notable line of recent work that learns local patterns are active convolution (Jeon & Kim, 2017) and deformable convolution (Dai et al., 2017), where data-dependent convolution kernels are learned. Both of these are quite different from LBPNet since they do not seek to improve network efficiency. Our binary patterns learn the positions of the sampling points in an end-to-end fashion using logic operations (without the need for addition operations). By contrast, the directly relevant earlier work (Dai et al., 2017) essentially learns data-dependent convolutions.
LOCAL BINARY PATTERN NETWORK
Fig. 2 shows an overview of the LBPNet architecture. The forward propagation is composed of two steps: LBP operation and channel fusion. We introduce the patterns in LBPNets and the two steps in the following sub-sections and then describe the engineered network structures for LBPNets.
PATTERNS IN LBPNETS
In LBPNet, multiple patterns defining the positions of sampling points generate multiple output channels. Patterns are randomly initialized with a uniform distribution of locations centered on a predefined square window, and then subsequently learned in an end-to-end supervised learning fashion. Fig. 3 (a) shows a traditional local binary pattern, which is a fixed
pattern without much variety; there are eight sampling points denoted by green circles surrounding a pivot point, shown as a meshed star at the center of the pattern. Fig. 3(b)-(d) show a learnable pattern with eight sampling points in green and a pivot point as a star at the center. Our learnable patterns are initialized using a normal distribution of positions within a given area. The different sizes of the green circles indicate the bit position of the comparison outcome in the output bit array. We allocate the comparison outcome of the largest green circle to the most significant bit of the output pixel, the second largest to the second most significant bit, and so on. The red arrows represent the driving forces that can push the sampling points to better positions to minimize the classification error. The model size of an LBPNet is tiny compared with a CNN because the learnable parameters in LBPNets are the sparse and discrete sampling patterns.
LBP OPERATION
First, LBPNet samples pixels from incoming images and compares the sampled pixel value with the center sampled point, the pivot. If the sampled pixel value is larger than that of the center one, the output is a bit “1”; otherwise, the output is set to “0.” Next, we allocate the output bits to a binary digit array in the output pixel based on a predefined ordering. The number of sampling points defines the number of bits of an output pixel on a feature map. Then we slide the local binary pattern to the
next location and perform the aforementioned steps until a feature map is generated. In most cases, the incoming image has multiple channels; hence we perform the LBP operation on every input channel.
Fig. 4 shows a snapshot of the LBP operations. Given two input channels, ch.a and ch.b, we perform the LBP operation on each channel with different kernel patterns. The two 4-bit response binary numbers of the intermediate output are shown at the bottom. For clarity, we use green dashed arrows to mark where the pixels are sampled and list the comparison equations under the resulting bits. A problem emerges: we need a channel fusion mechanism to avoid an exponential growth in the number of channels.
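The following NumPy sketch illustrates the LBP operation described above on a single input channel: each sampling point in a pattern is compared against the pivot, and the comparison bits are packed into an output code. The offset layout, bit ordering, and border handling are illustrative assumptions rather than the paper's exact implementation.

```python
# A minimal sketch of the per-channel LBP operation: compare each sampling point
# with the pivot and pack the bits (first offset -> most significant bit).
import numpy as np

def lbp_feature_map(img, offsets):
    # img:     (H, W) single input channel
    # offsets: list of (dy, dx) sampling-point positions relative to the pivot
    H, W = img.shape
    pad = max(max(abs(dy), abs(dx)) for dy, dx in offsets)
    padded = np.pad(img, pad, mode="edge")
    pivot = padded[pad:pad + H, pad:pad + W]
    out = np.zeros((H, W), dtype=np.int32)
    for dy, dx in offsets:
        sampled = padded[pad + dy:pad + dy + H, pad + dx:pad + dx + W]
        out = (out << 1) | (sampled > pivot).astype(np.int32)   # "1" if sample > pivot
    return out

# Example: a 4-point pattern inside a 5-by-5 window produces 4-bit output codes.
codes = lbp_feature_map(np.random.rand(28, 28), [(-2, 0), (0, 2), (2, 1), (-1, -2)])
```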
CHANNEL FUSION WITH RANDOM PROJECTION
We use random projection (Bingham & Mannila, 2001) as a dimension-reducing and distance-preserving process to select output bits among intermediate channels for the concerned output channel, as shown in Fig. 5. The random projection is implemented with a predefined mapping table for each output channel, i.e., we fix the projection map upon initialization. All output pixels on the same output channel share the same mapping. Random projection
not only solves the channel fusion with a bit-wise operation but also simplifies the computation, because we do not have to compare all sampling points with the pivots. For example, in Fig. 5, the two pink arrows from intermediate ch.a and the two yellow arrows from intermediate ch.b bring the four bits for the composition of an output pixel. Only the MSB and LSB on ch.a and the middle two bits on ch.b need to be computed. If the output pixel is n-bit, then n comparisons are needed for each output pixel, which is independent of the number of input channels. More input channels simply bring more combinations of representations to a random projection table.
Throughout the forward propagation, there are no multiplication or addition operations; only comparisons and memory accesses are used. Therefore, the design of LBPNets is efficient from both the software and hardware perspectives.
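The sketch below illustrates the random-projection fusion described above: each output bit is read from one fixed (channel, sampling point) pair chosen at initialization. The layout of the mapping table is an assumption made for illustration.

```python
# A minimal sketch of channel fusion by random projection: one fixed mapping table
# per output channel selects which comparison bit fills each output bit position.
import numpy as np

def fuse_channels(bit_planes, mapping):
    # bit_planes: (C, P, H, W) comparison bits, one plane per input channel and sampling point
    # mapping:    list of (channel, point) pairs, one per output bit, fixed at initialization
    _, _, H, W = bit_planes.shape
    out = np.zeros((H, W), dtype=np.int32)
    for ch, pt in mapping:
        out = (out << 1) | bit_planes[ch, pt]
    return out
```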
NETWORK STRUCTURES FOR LBPNET
The network structure of LBPNet must be carefully designed. Owing to the nature of the comparison, the outcome of an LBP layer is very similar to the outlines in the input image. In other words, our LBP layer is good at extracting high-frequency components in the spatial domain but relatively weak at understanding low-frequency components. Therefore, we use a residual-like structure to compensate for this weakness of LBPNet. Fig. 6 shows three kinds of residual-net-like building
blocks. Fig. 6 (a) is the typical building block for residual networks: the convolutional kernels learn to obtain the residual of the output after the addition. Our first attempt is to introduce the LBP layer into this structure as shown in Fig. 6 (b), in which we utilize a 1-by-1 convolution to learn a combination of LBP feature maps. However, the convolution incurs too many multiplication and accumulation operations, especially when the number of LBP kernels increases. We therefore combine the LBP operation with a random projection as shown in Fig. 6 (c). Because the pixels in the LBP output feature maps are always positive, we use a shifted rectified linear layer (shifted-ReLU) to increase nonlinearity. The shifted-ReLU truncates any magnitudes below half of the maximum of the LBP output. More specifically, if a pattern has n sampling points, the shifted-ReLU is defined as Eq. 1.
$f(x) = \begin{cases} x, & x > 2^{n-1} - 1 \\ 2^{n-1} - 1, & \text{otherwise} \end{cases}$   (1)
As mentioned earlier, the low-frequency components diminish as the information passes through several LBP layers. To preserve the low-frequency components while keeping the block MAC-free, we introduce a joint operation that concatenates the input tensor of the block and the output tensor of the shifted-ReLU along the channel dimension. The number of channels remains under control since it grows only linearly with the number of input channels.
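A minimal sketch of the shifted-ReLU in Eq. (1) is given below, assuming a pattern with n sampling points so that the clamping threshold is 2^(n-1) − 1; this is an illustration, not the authors' implementation.

```python
# A minimal sketch of the shifted-ReLU in Eq. (1): values below the threshold are clamped.
import torch

def shifted_relu(x, n_points=4):
    threshold = 2 ** (n_points - 1) - 1
    return torch.clamp(x, min=threshold)
```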
HARDWARE BENEFITS
LBPNet saves hardware cost by avoiding convolution operations. Table 1 lists the reference numbers of logic gates for the concerned arithmetic units. A ripple-carry full-adder requires 5 gates for each bit. A 32-bit multiplier includes data-path logic and control logic. Because there are many feasible implementations of the control logic circuits, we
conservatively use an open range to express the sense of the hardware expense. The comparison can be made with a pure combinational logic circuit of 11 gates, which also means that only the infinitesimal internal gate delays dominate the computation latency. The comparison is not only cheap in terms of gate count but also fast due to the lack of sequential logic inside. Slight differences in the numbers of logic gates may apply if different synthesis tools or manufacturers are chosen. With an LBP layer as strong as a convolutional layer in terms of classification accuracy, replacing the convolution operations with comparisons gives us a 27X saving in hardware cost.
Another important benefit is energy saving. The energy demand for each arithmetic device has been shown in (Horowitz, 2014). If we replace all convolution operations with comparisons, the energy consumption is reduced by 153X.
Moreover, the core of LBPNet is composed of bit shifting and bitwise-OR, and neither has a concurrent-access issue. When implementing an LBPNet hardware accelerator, whether in an FPGA or ASIC flow, the absence of the concurrency issue that results from convolution's accumulation process guarantees a speedup over a CNN hardware accelerator. For more justification, please refer to the forward algorithm in the appendix.
BACKWARD PROPAGATION OF LBPNET
To train LBPNets with gradient-based optimization methods, we need to tackle two problems: 1) the non-differentiability of the comparison operation; and 2) the lack of a driving force to push the sampling points in a pattern.
DIFFERENTIABILITY
The first problem can be solved if we approximate the comparison operation with a shifted and scaled hyperbolic tangent function as shown in Eq. 2.
$I_{lbp} > I_{pivot} \;\;\xrightarrow{\text{approximated}}\;\; \frac{1}{2}\left(\tanh\!\left(\frac{I_{lbp} - I_{pivot}}{k}\right) + 1\right),$   (2)
where k is a scaling parameter to accommodate the number of sampling points from a previous LBP layer, $I_{lbp}$ is the sampled pixel in a learnable LBP kernel, and $I_{pivot}$ is the sampled pixel on the pivot. We provide a sensitivity analysis of k w.r.t. classification accuracy in the appendix. The hyperbolic tangent function is differentiable and has a simple closed form for implementation.
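The surrogate in Eq. (2) can be sketched in a few lines; the default value of k below is an arbitrary assumption, since the paper tunes k per layer and reports a sensitivity analysis in the appendix.

```python
# A minimal sketch of the differentiable surrogate in Eq. (2): the hard comparison
# I_lbp > I_pivot is replaced by a shifted, scaled tanh so gradients can flow.
import torch

def soft_compare(i_lbp, i_pivot, k=10.0):
    return 0.5 * (torch.tanh((i_lbp - i_pivot) / k) + 1.0)
```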
DEFORMATION WITH OPTICAL FLOW THEORY
To deform the local binary patterns, we resort to the concept of optical flow. Assuming the image content in the same class shares the same features, even with minor shape transformations, chrominance variations, or different view angles, the optical flow on these images should share similarities. The optical flow constraint is
$\frac{\partial I}{\partial x} V_x + \frac{\partial I}{\partial y} V_y = -\frac{\partial I}{\partial t},$
where I is the pixel value, a.k.a. luminance, and $V_x$ and $V_y$ represent the two orthogonal components of the optical flow among the same or similar image content. The LHS can be interpreted as a dot product of the image gradient $(\frac{\partial I}{\partial x}\hat{x} + \frac{\partial I}{\partial y}\hat{y})$ and the optical flow $(V_x\hat{x} + V_y\hat{y})$, and this product is the negative derivative of luminance with respect to time across different images, where $\hat{x}$ and $\hat{y}$ denote the two orthogonal unit vectors of the 2-D coordinate system.
Minimizing the difference between images in the same class is equivalent to extracting similar features from the images in the same class for classification. However, both the direction and magnitude of the optical flow underlying the dataset are unknown, so the dot product cannot be minimized by changing the image gradient to be orthogonal to the optical flow. Therefore, the only feasible path to minimize the magnitude of the RHS is to minimize the image gradient. Please note that the sampled image gradient can be changed by deforming the apertures, which are the sampling points of the local binary patterns.
When applying the calculus chain rule to the cost of LBPNet with regard to the position of each sampling point, one can easily conclude that the last term of the chain rule is the image gradient. Since the sampled pixel value is the same as the pixel value on the image, the gradient of the sampled value with regard to the sampling location in a pattern is equivalent to the image gradient of the incoming image. Eq. 3 shows the gradient from the output loss, through a fully-connected layer with weights $w_j$, toward the image gradient.
$\frac{\partial\, \mathrm{cost}}{\partial\, \mathrm{position}} = \sum_j (\Delta_j w_j)\, \frac{\partial g(s)}{\partial s}\, \frac{\partial s}{\partial I_{lbp}} \left(\frac{d I_{lbp}}{dx}\,\hat{x} + \frac{d I_{lbp}}{dy}\,\hat{y}\right),$   (3)
where $\Delta_j$ is the backward-propagated error, $\partial g(s)/\partial s$ is the derivative of the activation function, and $\partial s/\partial I_{lbp}$ is the gradient of Eq. 2. Please refer to the appendix for more details of the forward-backward training algorithm.
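The last factor of Eq. (3) is simply the image gradient at the sampling location, so the position update can be sketched as below; central finite differences and interior-point indexing are illustrative assumptions about how that gradient is obtained.

```python
# A minimal sketch of the position gradient in Eq. (3): the gradient reaching a sampled
# value is projected onto the local image gradient, which pushes the sampling point.
import torch

def position_gradient(upstream_grad, image, y, x):
    # upstream_grad: scalar gradient d(cost)/d(I_lbp) reaching the sampled value
    # image:         (H, W) input channel; (y, x): current interior sampling location
    dI_dx = 0.5 * (image[y, x + 1] - image[y, x - 1])   # central finite differences
    dI_dy = 0.5 * (image[y + 1, x] - image[y - 1, x])
    return upstream_grad * dI_dx, upstream_grad * dI_dy  # components moving the point
```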
EXPERIMENTS
In this section, we conduct a series of experiments on five datasets and their subsets: MNIST, SVHN, DHCD, ICDAR2005, and Chars74K to verify the capability of LBPNet. Some typical images of these character datasets are shown in Fig. 1. Please refer to the appendix for the description of datasets. We additionally evaluate LBPNet on a few broader categories such as face, pedestrian, and affNIST and have observed promising results for object classification.
EXPERIMENT SETUP
In all of the experiments, we use all training examples to train LBPNets and validate directly on the test sets. To avoid peeking, we do not employ the validation errors in the backward propagation. No data augmentation is used in the experiments.
We implement two versions of LBPNet using the two building blocks shown in Fig. 6 (b) and (c). For the remainder of this paper, we call the LBPNet that uses 1-by-1 convolution as the channel fusion mechanism LBPNet(1x1) (which has convolution in the fusion part), and the version that utilizes random projection LBPNet(RP) (totally convolution-free). The number of sampling points in a pattern is set to 4, and the area within which a pattern can deform is 5-by-5.
LBPNet also has an additional multilayer perceptron (MLP) block, made of two fully-connected layers with 512 and #classes neurons. Besides the nonlinearities, there is one batch-normalization layer. The performance of the MLP block alone, without any convolutional or LBP layers, is shown in Tables 2 and 3. The model size and speed of the MLP block are excluded from the comparisons since all models share an MLP block.
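A possible PyTorch rendering of this MLP block is sketched below; the input dimension and the exact placement of the batch-normalization layer are assumptions, since the text does not pin them down.

```python
import torch.nn as nn

def mlp_block(in_features: int, num_classes: int) -> nn.Sequential:
    """Two fully-connected layers (512 -> num_classes) with one BatchNorm and a ReLU."""
    return nn.Sequential(
        nn.Linear(in_features, 512),
        nn.BatchNorm1d(512),
        nn.ReLU(inplace=True),
        nn.Linear(512, num_classes),
    )

head = mlp_block(in_features=80 * 8 * 8, num_classes=10)   # illustrative sizes
```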
To understand the capability of LBPNet compared with existing convolution-based methods, we build two feed-forward streamlined CNNs as baselines for each dataset. CNN-baseline has the same number of layers and kernels as the LBPNet; the other, CNN-lite, is designed to match the memory footprint of LBPNet(RP). The basic block of these CNNs contains a spatial convolution layer (Conv) followed by a batch-normalization layer (BatchNorm) and a rectified linear layer (ReLU).
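For reference, a sketch of the baselines' basic block in PyTorch; the kernel size and padding are assumptions.

```python
import torch.nn as nn

def cnn_basic_block(in_ch: int, out_ch: int, kernel_size: int = 3) -> nn.Sequential:
    """Conv -> BatchNorm -> ReLU, the basic block of the CNN baselines."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```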
In the BNN paper (Hubara et al., 2016), classification on MNIST is done with a binarized multilayer perceptron (MLP). We adopt the binarized convolutional neural network (BCNN) of Hubara et al. (2016) for SVHN and reproduce on MNIST the same accuracy as reported in Lin et al. (2017).
EXPERIMENTAL RESULTS
Tables 2 and 3 show the experimental results of LBPNet on MNIST and SVHN together with the baselines and previous works. We list the classification error rate, model size, inference latency, and the speedup relative to the baseline CNN. The best value in each column is shown in bold. Please note that the latency in cycles is calculated under the assumption that no SIMD parallelism or pipelining optimization is applied. Because we need to account for the total number of computations in every network, and both floating-point and binary arithmetic are involved, FLOPs cannot serve as a common measure; we therefore adopt the typical cycle counts in Table 1 as the measure of latency. For the model size, we exclude the MLP blocks and count only the memory required for the necessary variables, so that the comparison focuses on the intrinsic operations of CNNs and LBPNets, namely the convolution and the LBP operation.
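The latency figures below could be reproduced, in spirit, by a tally of this form; the per-operation cycle costs in the sketch are placeholders rather than the actual entries of Table 1, and the operation counts are illustrative.

```python
# Illustrative tally of latency in cycles; the per-operation costs are placeholders,
# not the values listed in Table 1.
CYCLES = {"fp_multiply": 5, "fp_add": 3, "compare": 1, "bitwise_or": 1}

def latency_cycles(op_counts):
    """Total cycles = sum over operations of (count x cycles per operation)."""
    return sum(count * CYCLES[op] for op, count in op_counts.items())

# A convolution layer is dominated by multiply-accumulate pairs,
# while an LBP layer needs only comparisons and bitwise ORs.
conv_layer = {"fp_multiply": 1_000_000, "fp_add": 1_000_000}
lbp_layer = {"compare": 250_000, "bitwise_or": 250_000}
print("conv cycles:", latency_cycles(conv_layer))
print("lbp cycles :", latency_cycles(lbp_layer))
print("speedup    :", latency_cycles(conv_layer) / latency_cycles(lbp_layer))
```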
Table 2: The performance of LBPNet on MNIST.
                 Error ↓   Size ↓ (Bytes)   Latency ↓ (cycles)   Speedup ↑
MLP Block        24.22%    -                -                    -
CNN-baseline     0.44%     1.41M            222.0M               1X
CNN-lite         1.20%     456              553K                 401.4X
BCNN             0.47%     1.89M            306.1M               0.725X
LBCNN            0.49%     12.2M            8.78G                0.0253X
LBPNet (this work):
LBPNet (1x1)     0.50%     1.27M            27.73M               8.004X
LBPNet (RP)      0.50%     397.5            651.2K               340.8X
Table 3: The performance of LBPNet on SVHN.
                 Error ↓   Size ↓ (Bytes)   Latency ↓ (cycles)   Speedup ↑
MLP Block        77.78%    -                -                    -
CNN-baseline     8.30%     15.96M           9.714G               1X
CNN-lite         69.14%    2.80K            1.576M               6164X
BCNN             2.53%     1.89M            312M                 31.18X
LBCNN            5.50%     6.70M            7.098G               1.369X
LBPNet (this work):
LBPNet (1x1)     8.33%     1.51M            9.175M               1059X
LBPNet (RP)      7.31%     2.79K            4.575M               2123X
MNIST. The CNN-baseline and LBPNet(RP) share the same network structure, 39-40-80, and CNN-lite is limited to the same memory size as LBPNet(RP), giving the structure 2-3. The baseline CNN achieves the lowest classification error rate, 0.44%. BCNN provides a decent speedup while maintaining classification accuracy. Although LBCNN claims savings in memory footprint, 75 layers of LBCNN basic blocks are required to reach a 0.49% error rate; as a result, LBCNN loses its speedup. The 3-layer LBPNet(1x1) with 40 LBP kernels and 40 1-by-1 convolutional kernels achieves 0.50%, and the 3-layer LBPNet(RP) also reaches a 0.50% error rate. Although LBPNet's accuracy is slightly inferior, the model size of LBPNet(RP) is reduced to 397.5 bytes, and it runs 340.8X faster than the baseline CNN; even BCNN cannot match such a vast memory reduction and speedup. CNN-lite, which delivers the worst error rate, demonstrates that shrinking a CNN down to the same memory size as LBPNet(RP) greatly sacrifices classification accuracy.
SVHN. Table 3 shows the experimental results of LBPNet on SVHN together with the baseline and previous works. The CNN-baseline and LBPNet(RP) share the same network structure, 67-70-140-280-560, and CNN-lite is limited to the same memory size, giving the structure 8-17. BCNN outperforms our baseline and achieves 2.53% with a smaller memory footprint and higher speed. LBCNN also achieves a good memory reduction and a 1.369X speedup. The 5-layer LBPNet(1x1) with 8 LBP kernels and 32 1-by-1 convolutional kernels achieves 8.33%, close to our baseline CNN's 8.30%. The convolution-free LBPNet(RP) for SVHN is built with 5 layers of LBP basic blocks, 67-70-140-280-560, and achieves a 7.31% error rate. Compared with CNN-lite's high error rate, learning the positions of LBPNet's sampling points proves to be effective and economical.
More Results. Table 4 lists the experimental results of LBPNet(RP) on all character recognition datasets. LBPNets achieve state-of-the-art accuracy on all of the datasets.
PRELIMINARY RESULTS ON OBJECTS AND DEFORMED PATTERNS
Next, we show results on datasets of general objects.
Pedestrian: We first evaluate LBPNet on the INRIA pedestrian dataset (Dalal & Triggs, 2005), which consists of cropped positive and negative images. Note that we did not implement an image-based object detector, as it is outside the focus of this paper. Fig. 7 shows the trade-off curves of a 3-layer LBPNet (37-40-80) and a 3-layer CNN (37-40-80). We did not exhaustively explore the capability of LBPNet for object classification here.
Face: We apply our LBPNet to the FDDB dataset (Jain & Learned-Miller, 2010) to verify its face classification performance. As before, we train and test on cropped images; we use the annotated faces as positive examples and crop four non-face frames from every training image to create negative examples for both training and testing. The structures of the LBPNet and the CNN are the same as before (37-40-80). LBPNet achieves 97.78%, and the baseline CNN reaches 97.55%.
affNIST: We conduct an experiment on affNIST¹, which is composed of 32 translated variations of MNIST (including the original MNIST). To accelerate the experiment, we randomly draw three variations of each original example to obtain training and testing subsets of affNIST. We repeat this drawing-and-training process ten times and report the averaged result. The network structures of LBPNet and our baseline CNN are the same, 39-40-80. To improve the translation invariance of the networks, we add a max-pooling layer after each of the first and second LBP or convolutional layers. Training and testing on the subsets of affNIST, LBPNet achieves 93.18% and the CNN achieves 94.88%.
CONCLUSION AND FUTURE WORK
We have built a convolution-free, end-to-end, bitwise LBPNet from basic operations and verified its effectiveness on character recognition datasets, with orders-of-magnitude speedup (hundreds of times) in testing and model size reduction (thousands of times) compared with the baseline and binarized CNNs. Learning the local binary patterns results in an unprecedentedly efficient model: to the best of our knowledge, no compressed or discretized CNN achieves a kilobyte-level model size while maintaining state-of-the-art accuracy on character recognition tasks. Both the memory footprints and the computation latencies of LBPNet and previous works are reported. LBPNet points to a promising direction for building a new generation of hardware-friendly deep learning algorithms that perform computation on edge devices.
¹ https://www.cs.toronto.edu/~tijmen/affNIST/
APPENDIX
FORWARD PROPAGATION ALGORITHM
Algorithm 1: Forward of LBPNet
input : An input tensor X of shape (ci, w, h), the previous pattern P of shape (co, ns), and the fixed projection map M of shape (co, ns). The pattern width k and padding width d = ⌊k/2⌋. Please note that every element of P is a tuple.
output: An output tensor y of shape (co, w, h).
 1  X ← ZeroPadding(X, d)
 2  for io = 1 to co do
 3      for ih = 1 to h do
 4          for iw = 1 to w do
 5              for is = 1 to ns do
 6                  ii ← M[io, is]
 7                  (ipx, ipy) ← P[io, is]
 8                  pivot ← X[iw + d][ih + d][ii]
 9                  sample ← X[iw + ipx][ih + ipy][ii]
10                  if sample > pivot then
11                      y[iw][ih][io] |= (1 << is)
12                  end
13              end
14          end
15      end
16  end
17  return y
Alg. 1 describes the forward pass of an LBP layer. The three outermost nested loops form the sliding-window operation that generates an output feature map, and the innermost loop is the LBP operation. We combine the LBP operation with random projection to skip unnecessary comparisons: we first look up the random projection map for the input plane index and then use it to sample only the necessary pairs for comparison.
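A direct, unvectorized NumPy translation of Alg. 1 is sketched below for clarity; the (channels, height, width) tensor layout and the offset convention are assumptions about the original implementation, and the smoke test at the end uses random data.

```python
import numpy as np

def lbp_forward(x, pattern, proj, d):
    """Forward pass of one LBP layer (unvectorized sketch of Algorithm 1).

    x       : input tensor of shape (ci, h, w)
    pattern : integer sampling offsets of shape (co, ns, 2), each row a (dy, dx)
              offset inside the k-by-k pattern window
    proj    : random projection map of shape (co, ns); proj[io, s] is the input
              channel whose pixels are compared for output bit s of channel io
    d       : padding width, floor(k / 2)
    """
    ci, h, w = x.shape
    co, ns, _ = pattern.shape
    xp = np.pad(x, ((0, 0), (d, d), (d, d)))           # zero padding
    y = np.zeros((co, h, w), dtype=np.int64)
    for io in range(co):                                # output channels
        for ih in range(h):
            for iw in range(w):
                bits = 0
                for s in range(ns):                     # one comparison per output bit
                    ii = proj[io, s]
                    dy, dx = pattern[io, s]
                    pivot = xp[ii, ih + d, iw + d]      # pattern center (pivot)
                    sample = xp[ii, ih + dy, iw + dx]   # learned sampling location
                    if sample > pivot:
                        bits |= 1 << s                  # set output bit s
                y[io, ih, iw] = bits
    return y

# Smoke test: 2 input channels, 3 output channels, 4 sampling points, 5x5 window.
rng = np.random.default_rng(0)
x = rng.random((2, 8, 8))
pattern = rng.integers(0, 5, size=(3, 4, 2))
proj = rng.integers(0, 2, size=(3, 4))
print(lbp_forward(x, pattern, proj, d=2).shape)         # (3, 8, 8)
```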
The core of LBPNet is implemented with bit shifting and bitwise-OR, and neither has concurrent-access issues. That is, we can directly implement it with CUDA to accelerate inference on a GPU. If we were to implement an LBPNet hardware accelerator, whether through an FPGA or ASIC flow, the absence of the concurrency issues caused by a CNN's accumulation process would guarantee a speedup over a CNN hardware accelerator.
BACKWARD PROPAGATION ALGORITHM
Algorithm 2: Backward of LBPNet
input : An input tensor X, a gradient tensor of the loss w.r.t. the output of the current layer, go, of shape (co × wo × ho), the previous pattern P, and the fixed projection map M. The pattern width k and padding width d. During training, we keep the previous real-valued pattern R of the same shape as P.
output: The gradient of the loss w.r.t. the input tensor, gi, of shape (ci, w, h), and the gradient of the loss w.r.t. the positions of the sampling points, gP, of shape (co, ns). Please note that every element of gP is a tuple.
 1  ∇ ← ImageGradient(X)
 2  P ← round(R)
 3  D ← LookUpDifference(X, P, M)
 4  E ← ConstructExp(tanh(D), P, M)
 5  dE ← ConstructDiffExp(1 − tanh²(D), P, M)
 6  gi ← (1/2) · goᵀ E
 7  gP ← go · (dE ⊙ ∇)ᵀ
 8  return gi, gP, R, P
Alg. 2 describes the backward propagation from a high-level point of view. Because LBPNet requires sophisticated element-wise matrix operations, some of them have no matrix-to-vector or matrix-to-matrix multiplication equivalent, but they can be implemented and optimized in low-level CUDA code for training speed. The ImageGradient(.) function calculates the image gradient vector field of the input feature map. The round(.) function then discretizes the previous real-valued pattern for the subsequent image sampling. LookUpDifference(.) samples the input tensor using the input plane index given by the projection map; this step is similar to the core of Alg. 1, except that we compute the difference between the sampled pixel pairs instead of comparing them.
The ConstructExp(.) function multiplies the hyperbolic-tangent difference matrix by the power of 2 corresponding to the position of each comparison result in the output bit array. For example, if a comparison result is allocated to the MSB, its hyperbolic-tangent value is multiplied by 2^ns, assuming ns sampling pairs per kernel. ConstructDiffExp(.) performs the same calculation as ConstructExp(.), except that the first argument is replaced with the derivative of tanh(.). These two subroutines convert sparse kernels into dense kernels for the following matrix-to-matrix multiplications.
The sixth line uses a matrix-to-matrix multiplication to collect and weight the output gradient tensor from the subsequent layer. This step is the same as in a CNN's backward propagation. The resulting tensor, also called the input gradient tensor, is passed to the preceding layer to continue the backward propagation.
The seventh line element-wise multiplies the differentiated exponential matrix with the image gradient and then multiplies the result with the output gradient tensor. The resulting tensor carries the gradient of the LBP parameters, ∂cost/∂position, which is multiplied by an adaptive learning rate to update the sampling positions of an LBP kernel.
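The per-pixel arithmetic of lines 5-7 can be sketched as follows for a single output pixel and pattern; the toy values, the bit-weighting convention, and the restriction to one pixel are assumptions made for illustration.

```python
import numpy as np

k = 0.1
ns = 4
rng = np.random.default_rng(1)

g_out = 0.05                                   # backpropagated gradient for this output pixel
diff = rng.normal(size=ns)                     # sampled value minus pivot, one per sampling point
bit_weight = 2.0 ** np.arange(ns, 0, -1)       # per-bit weighting (MSB first), as in ConstructExp

# ConstructDiffExp: derivative of the soft comparison, scaled by the bit weight.
dE = (1.0 - np.tanh(diff / k) ** 2) / k * bit_weight

# Image gradient at each sampling location (illustrative values).
grad_x = rng.normal(size=ns)
grad_y = rng.normal(size=ns)

# Line 7 of Algorithm 2: position gradient = output gradient x (dE elementwise image gradient).
gP_x = g_out * dE * grad_x
gP_y = g_out * dE * grad_y
print("position gradients (x):", gP_x)
print("position gradients (y):", gP_y)
```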
DATASET DESCRIPTIONS
Images in the padded MNIST dataset are hand-written digits from 0 to 9 in 32-by-32 grayscale bitmap format. The dataset is composed of a training set of 60,000 examples and a test set of 10,000 examples. The digits were written by both staff and students. Most of the images can be easily recognized and classified, but a portion of sloppy images remains in MNIST.
SVHN is a photo dataset of house numbers. Although cropped, images in SVHN include distracting digits around the labeled digit in the middle of the image, which increases the difficulty of classifying the printed numbers. There are 73,257 training examples and 26,032 test examples in SVHN.
Table 5: The datasets we used in the experiment.
Dataset               Description                         #Class   #Examples   CNN Baseline                      LBPNet (RP) (ours)
DHCD                  Handwritten Devanagari characters   46       46x2,000    98.47% (Acharya et al., 2015)     99.19%
ICDAR-DIGITS          Photos of numbers                   10       988         100.00%                           100.00%
ICDAR-UpperCase       Photos of upper-case Eng. char.     26       5,288       100.00%                           100.00%
ICDAR-LowerCase       Photos of lower-case Eng. char.     26       5,453       100.00%                           100.00%
Chars74K-EnglishImg   Photos, alphanumeric                62       7,705       47.09% (De Campos et al., 2009)   58.31%
Chars74K-EnglishHnd   Handwritten, alphanumeric           62       3,410       71.32%                            73.37%
Chars74K-EnglishFnt   Printed fonts, alphanumeric         62       62,992      78.09%                            77.26%
LEARNING CURVES
Figure 8: Error curves on benchmark datasets. (a) test errors on MNIST; (b) test errors on SVHN.
Fig. 8 shows the learning curves of LBPNets on MNIST and SVHN.
SENSITIVITY ANALYSIS OF k
Fig. 9 shows the sensitivity analysis of the parameter k in Eq. 2 with respect to training accuracy. The LBPNet structure used here is a 3-layer network, 39-40-80. We gradually reduce k from 10 to 0.01 to observe the effect on the learning curves. Sub-figures (a) and (c) show that the smaller k is, the lower the error rate, although the improvement saturates once k decreases below 1. Sub-figure (b) shows that a smaller k also suppresses the ripple of the training loss better. In summary, because we approximate the comparison function with a shifted and scaled hyperbolic tangent, a smaller k implies less error between the approximation and the true comparison curve, and hence better simulates the comparison while preserving differentiability. In this paper, we choose k = 0.1 to balance classification accuracy against the risk of overflow in the gradient summation during backward propagation.
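This trade-off can be seen directly from the surrogate itself; in the sketch below, the probe differences are arbitrary, and the "peak gradient" is simply the slope 0.5/k of the surrogate at zero difference.

```python
import numpy as np

def soft_compare(sample, pivot, k):
    """Differentiable surrogate for (sample > pivot): 0.5 * (tanh((sample - pivot) / k) + 1)."""
    return 0.5 * (np.tanh((sample - pivot) / k) + 1.0)

diffs = np.linspace(-0.5, 0.5, 10)             # probe differences (none exactly zero)
hard = (diffs > 0).astype(float)
for k in (10.0, 1.0, 0.1, 0.01):
    err = np.abs(soft_compare(diffs, 0.0, k) - hard).max()
    print(f"k = {k:<5}  max approximation error = {err:.3f}  peak gradient = {0.5 / k:.1f}")
```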
1. What is the novel approach introduced by the paper in building lightweight convolutional neural networks?
2. How do the binary patterns used in the paper differ from traditional convolution filters, and what advantages does this provide?
3. What are the differences between the baseline BCNN and the network using the proposed method, and how might this impact the comparisons made in the paper?
4. Can the proposed method be applied to other computer vision tasks such as face recognition or object detection, and if so, how might its performance compare to specialized architectures for those tasks?

Review
1. The paper brings the idea of existing hand-crafted features into the deep learning framework, which is a smart way of building lightweight convolutional neural networks.
2. I have noticed that the binary patterns used in the paper are trainable, which means they can be seen as learned convolution filters with extremely low space and computational complexity. Thus, the proposed method can also be regarded as a kind of binary network.
3. The baseline BCNN has a different architecture from the network using the proposed method. Thus, the comparisons shown in Table 3 and Table 4 are somewhat unfair.
4. The capability of the proposed method was only verified on character recognition datasets. Can it be easily applied to other tasks such as face recognition or object detection on relatively large datasets?
1. What is the main contribution of the paper, and how does it compare to previous works?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its performance, speed, and size?
3. How does the reviewer assess the clarity and quality of the paper's content, including the description of the approach, diagrams, and backpropagation section?
4. What are the limitations of the proposed method, such as its applicability to specific datasets or object recognition tasks?
5. Are there any questions or concerns regarding the evaluation and performance guarantees of the proposed technique?

Review
In this work, a neural network that uses local binary patterns instead of kernel convolutions is introduced. Using binary patterns has two advantages: a) it reduces the network definition to a set of binary patterns (which requires much less storage than the floating point descriptions of the kernel weights used in CNNs) and b) allows for fast implementations relying only on logical operations (particularly fast on dedicated hardware).
This work is mostly descriptive of a proposed technique with no particular theoretical performance guarantees, so its value hinges mostly on its practical performance on real data. In that sense, its evaluation is relatively limited, since only figures for MNIST and SVHN are provided.
A list of additional datasets is provided in Table 5, but only the performance metric is listed, which is meaningless if it is not accompanied with figures for size, latency and speedup. The only takeway about the additional datasets is that the proposed LBPNet can match or outperform a weak CNN baseline, but we don't know if the latter achieves state-of-the-art performance (previous figures of the baseline CNN suggest it doesn't) and we don't know if there's significant gain in speed or size.
Regarding MNIST and SVHN, which are tested in some more detail, again, we are interested in the performance-speed (or size) tradeoff, and it is unclear that the current proposal is superior. The baseline CNN does not achieve state of the art performance (particularly in SVHN, for which the state-of-the-art is 1.7% and the baseline CNN achieves 6.8%). For SVHN, BCNN has a much better performance-speed tradeoff than the baseline, since it is both faster and higher performance. Then, the proposed method, LBPNet, has much higher speed, but lower performance than BCNN. It is unclear how LBPNet's and BCNN's speeds would compare if we were to match their performances. For this reason, it is unclear to me that LBPNet is superior to BCNN on SVHN.
Also the numbers in boldface are confusing, aren't they just incorrect for both the Latency and Error in MNIST? Same for the Latency in SVHN.
The description of the approach is reasonably clear and clarifying diagrams are provided. The backpropagation section seems a bit superficial and could be improved. For instance, backpropagation is computed wrt the binary sampling points, as if these were continuous, but they have been defined as discrete before. The appendix contains a bit more detail, where it seems that backpropagation is alternated with rounding. It's not justified why this is a valid gradient descent algorithm.
Also how the scaling k of the tanh is set is not explained clearly. Do you mean that with more sampling points k should be larger to keep the outputs of the approximate comparison operator close to 0 and 1?
Minor:
What exactly in this method makes it specific to character recognition? Since you are trying to capture both high-level and low-level frequencies, it seems you'd be capturing all the relevant information. SVHN data are color images with objects (digits) in it, what is the reason that makes other objects not be detectable with this approach?
English errors are pervasive throughout the paper. A non-exhaustive list:
Fig 4.b: X2 should be Y2
particuarly
"to a binary digits"
"In most case"
"0.5 possibility"
"please refer to Sec .."
"FORWARD PROPATATION" |
ICLR | Title
Local Binary Pattern Networks for Character Recognition
Abstract
Memory and computation efficient deep learning architectures are crucial to the continued proliferation of machine learning capabilities to new platforms and systems, especially, mobile sensing devices with ultra-small resource footprints. In this paper, we demonstrate such an advance for the well-studied character recognition problem. We use a strategy different from the existing literature by proposing local binary pattern networks or LBPNet that can learn and perform bit-wise operations in an end-to-end fashion. Binarization of operations in convolutional neural networks has shown promising results in reducing the model size and computing efficiency. Characters consist of some particularly structured strokes that are suitable for binary operations. LBPNet uses local binary comparisons and random projection in place of conventional convolution (or approximation of convolution) operations, providing important means to improve memory and speed efficiency that is particularly suited for small footprint devices and hardware accelerators. These operations can be implemented efficiently on different platforms including direct hardware implementation. LBPNet demonstrates its particular advantage on the character classification task where the content is composed of strokes. We applied LBPNet to benchmark datasets like MNIST, SVHN, DHCD, ICDAR, and Chars74K and observed encouraging results.
N/A
Memory and computation efficient deep learning architectures are crucial to the continued proliferation of machine learning capabilities to new platforms and systems, especially, mobile sensing devices with ultra-small resource footprints. In this paper, we demonstrate such an advance for the well-studied character recognition problem. We use a strategy different from the existing literature by proposing local binary pattern networks or LBPNet that can learn and perform bit-wise operations in an end-to-end fashion. Binarization of operations in convolutional neural networks has shown promising results in reducing the model size and computing efficiency. Characters consist of some particularly structured strokes that are suitable for binary operations. LBPNet uses local binary comparisons and random projection in place of conventional convolution (or approximation of convolution) operations, providing important means to improve memory and speed efficiency that is particularly suited for small footprint devices and hardware accelerators. These operations can be implemented efficiently on different platforms including direct hardware implementation. LBPNet demonstrates its particular advantage on the character classification task where the content is composed of strokes. We applied LBPNet to benchmark datasets like MNIST, SVHN, DHCD, ICDAR, and Chars74K and observed encouraging results.
INTRODUCTION
Convolutional Neural Networks (CNN) (LeCun et al., 1989a) have had a notable impact on many applications. Modern CNN architectures such as AlexNet (Krizhevsky et al., 2012), VGG (Simonyan & Zisserman, 2015), GoogLetNet (Szegedy et al., 2015), and ResNet (He et al., 2016) have greatly advanced the use of deep learning techniques (Hinton et al., 2006) into a wide range of computer vision applications (Girshick et al., 2014; Long et al., 2015). As deep learning models mature and take on increasingly complex pattern recognition tasks, these demand tremendous computational resources with correspondingly higher performance machines and accelerators that continue to be fielded by system designers. It also limits their use to applications that can afford the energy and/or cost of such systems. By contrast, the universe of embedded devices especially when used as intelligent edge-devices in the emerging distributed systems presents a higher range of potential applications from augmented reality systems to smart city systems.
Optical character recognition (OCR) particularly in the wild, shown in Fig. 1, has become an essential task for computer vision applications such as autonomous driving and mixed reality. There existed CNN-based methods (Yin et al., 2013) and other probabilistic learning methods (Yao et al., 2014a;b) handling the OCR tasks. However, the CNN-based models are computation demanding, and the probabilistic learning methods required more patches, e.g., empirical rule, clustering, error correction, or boosting to improve accuracy.
Various methods have been proposed to perform network pruning (LeCun et al., 1989b; Guo et al., 2016), compression (Han et al., 2015; Iandola et al., 2016), or sparsification(Liu et al., 2015). Impressive results have been achieved lately by using binarization of selected operations in CNNs (Courbariaux et al., 2015; Hubara et al., 2016; Rastegari et al., 2016). At the core, these efforts seek to approximate the internal computations from floating point to binary while keeping the underlying convolution operation exact or approximate, but the nature of character images has not been fully utilized yet.
We propose LBPNet as a light-weighted and compact deep-learning approach that can leverage the nature of character images since LBPNet is sensitive to discriminative outlines and strokes. Precisely, we focus on the task of character classification by exploring an alternative using nonconvolutional operations that can be executed in an architectural and hardware-friendly manner, trained in an end-to-end fashion from scratch (distinct to the previous attempts of binarizing the CNN operations). We note that this work has roots in research before the current generation of deep learning methods. Namely, the adoption of local binary patterns (LBP) (Ojala et al., 1996), which uses a number of predefined sampling points that are mostly on the perimeter of a circle, to compare with the pixel value at the center. The combination of multiple logic outputs (“1” if the value on a sampling point is greater than that on the center point and “0” otherwise) gives rise to a surprisingly rich representation (Wang et al., 2009) about the underlying image patterns and has shown to be complementary to the SIFT-kind features (Lowe, 2004). However, LBP has been under-explored in the deep learning research community where the feature learning part in the existing deep learning models (Krizhevsky et al., 2012; He et al., 2016) primarily refers to the CNN features in a hierarchy. We found LBP operations particularly suitable in recognizing characters that consist of structured strokes. Despite recent attempts such as (Juefei-Xu et al., 2017), the logic operation (comparison) in LBP has not been used in the existing CNN frameworks due to the intrinsic difference between the convolution and comparison operations.
Several features make LBPNet distinct from previous attempts. All the binary logic operations in LBPNet are directly learned, which is in stark distinction to previous attempts that try either to binarize CNN operations (Hubara et al., 2016; Rastegari et al., 2016) or to approximate LBP with convolution operations (Juefei-Xu et al., 2017). Further, the LBP kernels in previous works are fixed upon initialization because of the lack of a suitable mechanism to train the sampling patterns. Instead, we derive a differentiable function to learn the binary pattern and adopt random projection for the fusion operations. Fig. 2 illustrates the overview of LBPNet. The resulting LBPNet is very suitable for character recognition tasks because the comparison operation can capture and comprehend the sharp outlines and distinct strokes among character images. Experiments show that the thus-configured LBPNet achieves state-of-the-art results on benchmark datasets while accomplishing a significant improvement in parameter size reduction (hundreds of times) and speedup (a thousand times faster). That means LBPNet efficiently utilizes every storage bit and computation unit through the learning of image representations.
RELATED WORKS
Related works regarding model reduction of CNN fall along four primary dimensions.
Character recognition. Besides CNN-based methods for character recognition like BNN (Hubara et al., 2016), random forests (Yao et al., 2014a;b) were prevailing as well. However, the random forest methods usually required one or more techniques such as feature extraction, clustering, or error correction codes to improve the recognition accuracy. Our method, instead, provides a compact end-to-end and computation-efficient solution to character recognition.
Binarization for CNN. Binarizing CNNs to reduce the model size has been an active research direction (Courbariaux et al., 2015; Hubara et al., 2016; Rastegari et al., 2016). Through binarizing both weights and activations, the model size was reduced, and a logic operation can replace the multiplication. Non-binary operations like batch normalization with scaling and shifting are still in floating-point (Hubara et al., 2016). The XNOR-Net (Rastegari et al., 2016) introduces extra scaling layer to compensate for the loss of binarization and achieves a state-of-the-art accuracy on ImageNet. Both BNNs and XNORs can be considered as the discretization of real-numbered CNNs, while the core of the two works is still based on spatial convolution.
CNN approximation for LBP operation. Recent work on local binary convolutional neural networks (LBCNN) in (Juefei-Xu et al., 2017) takes an opposite direction to BNN (Hubara et al., 2016). LBCNN utilizes subtraction between pixel values together with a ReLU layer to simulate the LBP operations. During the training, the sparse binarized difference filters are fixed, only the successive 1-by-1 convolution, serving as channel fusion mechanism and the parameters in batch normalization layers, are learned. However, the feature maps of LBCNN are still in floating-point numbers, resulting in significantly increased model complexity as shown in Table 2. By contrast, LBPNet learns binary patterns and logic operations from scratch, resulting in orders of magnitude reduction in the memory size and an increase in testing speed over LBCNN.
Active or deformable convolution. Among the notable line of recent work that learns local patterns are active convolution (Jeon & Kim, 2017) and deformable convolution (Dai et al., 2017), where data dependent convolution kernels are learned. Both of these are quite different from LBPNet since they do not seek to improve network efficiency. Our binary patterns learn the position of the sampling points in an end-to-end fashion as logic operations (without the need for the use of addition operations). By contrast, directly relevant earlier work (Dai et al., 2017) essentially learns data-dependent convolutions.
LOCAL BINARY PATTERN NETWORK
Fig. 2 shows an overview of the LBPNet architecture. The forward propagation is composed of two steps: LBP operation and channel fusion. We introduce the patterns in LBPNets and the two steps in the following sub-sections and then describe the engineered network structures for LBPNets.
PATTERNS IN LBPNETS
In LBPNet, multiple patterns defining the positions of sampling points generate multiple output channels. Patterns are randomly initialized with a uniform distribution of locations centered on a predefined square window, and then subsequently learned in an end-to-end supervised learning fashion. Fig. 3 (a) shows a traditional local binary pattern, which is a fixed
pattern without much variety; there are eight sampling points denoted by green circles, surrounding a pivot point in the meshed star at the center of pattern; Fig. 3(b)-(d) shows a learnable pattern with eight sampling points in green and a pivot point as a star at the center. Our learnable patterns are initialized using a normal distribution of positions within a given area. Different sizes of the green circle stand for the bit position of the comparison outcome on the output bit array. We allocate the comparison outcome of the largest green circle to the most significant bit of the output pixel, the second largest to the second largest bit, and so on. The red arrows represent the driving forces that can push the sampling points to better positions to minimize the classification error. The model size of an LBPNet is tiny compared with CNN because the learnable parameters in LBPNets are the sparse and discrete sampling patterns.
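To make the pattern parameterization concrete, the snippet below sketches how a set of learnable sampling patterns could be initialized; the 4 sampling points and the 5-by-5 deformation window follow the experiment setup described later, while the function and variable names (and the choice of a uniform draw) are our own illustration rather than the authors' code.

```python
import numpy as np

def init_patterns(num_kernels, num_samples=4, window=5, seed=0):
    """Randomly initialize sampling-point offsets for each LBP kernel.

    Each pattern holds `num_samples` (dy, dx) offsets, drawn at random
    inside a `window`-by-`window` area centered on the pivot pixel.
    Offsets stay real-valued during training so gradients can nudge them,
    and are rounded to integers when actually sampling pixels.
    """
    rng = np.random.default_rng(seed)
    half = window // 2
    # shape: (num_kernels, num_samples, 2), values in [-half, half]
    return rng.uniform(-half, half, size=(num_kernels, num_samples, 2))

patterns = init_patterns(num_kernels=40)   # e.g., 40 LBP kernels
print(patterns.shape)                      # (40, 4, 2)
```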
LBP OPERATION
First, LBPNet samples pixels from incoming images and compares the sampled pixel value with the center sampled point, the pivot. If the sampled pixel value is larger than that of the center one, the output is a bit “1”; otherwise, the output is set to “0.” Next, we allocate the output bits to a binary digit array in the output pixel based on a predefined ordering. The number of sampling points defines the number of bits of an output pixel on a feature map. Then we slide the local binary pattern to the
next location and perform the aforementioned steps until a feature map is generated. In most cases, the incoming image has multiple channels; hence we perform the LBP operation on every input channel.
Fig. 4 shows a snapshot of the LBP operations. Given two input channels, ch.a and ch.b, we perform the LBP operation on each channel with different kernel patterns. The two 4-bit response binary numbers of the intermediate output are shown on the bottom. For clarity, we use green dashed arrows to mark where the pixels are sampled and list the comparison equations under the resulting bits. A logical problem has emerged: we need a channel fusion mechanism to avoid the explosion of the exponential growing channel numbers.
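As a rough illustration of the sliding-window comparison described above, the following sketch computes the LBP response of one input channel with one rounded pattern; the exact bit ordering, the helper names, and the zero-padding choice are our own assumptions, not the authors' implementation.

```python
import numpy as np

def lbp_single_channel(img, offsets):
    """img: 2-D array (H, W); offsets: (ns, 2) real or integer (dy, dx) offsets.
    Returns an (H, W) map of ns-bit integers, one bit per comparison."""
    offsets = np.rint(np.asarray(offsets)).astype(int)
    ns = len(offsets)
    h, w = img.shape
    pad = int(np.abs(offsets).max())
    padded = np.pad(img, pad, mode="constant")           # assumed zero padding
    pivot = padded[pad:pad + h, pad:pad + w]              # center pixel at every location
    out = np.zeros((h, w), dtype=np.int32)
    for i, (dy, dx) in enumerate(offsets):
        sample = padded[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
        bit = (sample > pivot).astype(np.int32)
        out |= bit << (ns - 1 - i)                        # earlier points -> more significant bits
    return out
```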
CHANNEL FUSION WITH RANDOM PROJECTION
We use random projection (Bingham & Mannila, 2001) as a dimension-reducing and distance-preserving process to select output bits among intermediate channels for the concerned output channel as shown in Fig. 5. The random projection is implemented with a predefined mapping table for each output channel, i.e., we fix the projection map upon initialization. All output pixels on the same output channel share the same mapping. Random projection
not only solves the channel fusion with a bit-wise operation but also simplifies the computation, because we do not have to compare all sampling points with the pivots. For example, in Fig. 5, the two pink arrows from intermediate ch.a, and the two yellow arrows from intermediate ch.b bring the four bits for the composition of an output pixel. Only the MSB and LSB on ch.a and the middle two bits on the ch.b need to be computed. If the output pixel is n-bit, for each output pixel, there will be n comparisons needed, which is irrelevant to the number of input channels. The more input channels bring the more combinations of representations in a random projection table.
Throughout the forward propagation, there are no multiplication or addition operations. Only comparison and memory access are used. Therefore, the design of LBPNets is efficient in the aspects of both software and hardware.
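The random-projection fusion can be read as a fixed lookup: for each output channel and each bit position, a predefined table names which intermediate channel supplies that comparison bit. A minimal sketch of that lookup is below; the table layout is our assumption, and as the text notes, an actual implementation may fuse the lookup with the comparisons so that only the selected bits are ever computed.

```python
import numpy as np

def fuse_channels(bits, proj_map):
    """bits:     (C_in, ns, H, W) binary comparison results per input channel.
    proj_map: (C_out, ns) integer table; proj_map[o, s] is the input channel
              whose s-th comparison bit is used for output channel o.
    Returns (C_out, H, W) integer feature maps assembled by bitwise shift/OR."""
    proj_map = np.asarray(proj_map)
    c_out, ns = proj_map.shape
    _, _, h, w = bits.shape
    out = np.zeros((c_out, h, w), dtype=np.int32)
    for o in range(c_out):
        for s in range(ns):
            src = proj_map[o, s]
            out[o] |= bits[src, s].astype(np.int32) << (ns - 1 - s)
    return out
```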
NETWORK STRUCTURES FOR LBPNET
The network structure of LBPNet must be carefully designed. Owing to the nature of the comparison, the outcome of an LBP layer is very similar to the outlines in the input image. In other words, our LBP layer is good at extracting high-frequency components in the spatial domain but relatively weak at understanding low-frequency components. Therefore, we use a residual-like structure to compensate for this weakness of LBPNet. Fig. 6 shows three kinds of residual-net-like building
blocks. Fig. 6 (a) is the typical building block for residual networks. The convolutional kernels learn to obtain the residual of the output after the addition. Our first attempt is to introduce the LBP layer into this structure as shown in Fig. 6 (b), in which we utilize a 1-by-1 convolution to learn a combination of LBP feature maps. However, the convolution incurs too many multiplication and accumulation operations especially when the LBP kernels increases. Then, we combine LBP operation with a random projection as shown in Fig. 6 (c). Because the pixels in the LBP output feature maps are always positive, we use a shifted rectified linear layer (shifted-ReLU) to increase nonlinearities. The shifted-ReLU truncates any magnitudes below half of the maximum of the LBP output. More specifically, if a pattern has n sampling points, the shifted-ReLU is defined as Eq. 1.
f(x) = { x,            if x > 2^(n−1) − 1
         2^(n−1) − 1,   otherwise        (1)
As mentioned earlier, the low-frequency components reduce as the information passes through several LBP layers. To preserve the low-frequency components while making the block MAC-free, we introduce a joint operation cascading the input tensor of the block and the output tensor of the shifted-ReLU along the channel dimension. The number of channels remains under control since it grows only linearly with the number of input channels.
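A direct reading of Eq. 1 gives the small activation below: n is the number of sampling points, so 2^(n−1) − 1 is roughly half of the maximum possible LBP response, and everything at or under that threshold is clamped. This is only a sketch of the stated formula; the function name is ours.

```python
import numpy as np

def shifted_relu(x, n_samples):
    """Shifted ReLU from Eq. 1: values not exceeding 2**(n-1) - 1 are clamped to that threshold."""
    thresh = 2 ** (n_samples - 1) - 1
    return np.maximum(x, thresh)
```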
HARDWARE BENEFITS
LBPNet saves in hardware cost by avoiding the convolution operations. Table 1 lists the reference numbers of logic gates of the concerned arithmetic units. A ripple-carry full-adder requires 5 gates for each bit. A 32-bit multiplier includes a data-path logic and a control logic. Because there are too many feasible implementations of the control logic circuits, we
conservatively use an open range to express the sense of the hardware expense. The comparison can be made with a pure combinational logic circuit of 11 gates, which also means only the infinitesimal internal gate delays dominate the computation latency. The comparison is not only cheap regarding its gate count but also fast due to a lack of sequential logic inside. Slight difference in numbers of logic gates may apply if different synthesis tools or manufacturers are chosen. With the capability of an LBP layer as strong as a convolutional layer concerning classification accuracy, replacing the convolution operations with comparison gives us a 27X saving of hardware cost.
Another important benefit is energy saving. The energy demand for each arithmetic device has been shown in (Horowitz, 2014). If we replace all convolution operations with comparisons, the energy consumption is reduced by 153X.
Moreover, the core of LBPNet is composed of bit shifting and bitwise-OR, and both of them have no concurrent accessing issue. If we are implementing an LBPNet hardware accelerator, no matter on FPGA or ASIC flow, the absence of the concurrent issue resulted from convolution’s accumulation process will guarantee a speedup over CNN hardware accelerator. For more justification, please refer to the forward algorithm in the appendix.
BACKWARD PROPAGATION OF LBPNET
To train LBPNets with gradient-based optimization methods, we need to tackle two problems: 1). The non-differentiability of comparison; and 2). The lack of a source force to push the sampling points in a pattern.
DIFFERENTIABILITY
The first problem can be solved if we approximate the comparison operation with a shifted and scaled hyperbolic tangent function as shown in Eq. 2.
(I_lbp > I_pivot)  ≈  (1/2) (tanh((I_lbp − I_pivot) / k) + 1),   (2)
where k is the scaling parameter to accommodate the number of sampling points from a previous LBP layer, I_lbp is the sampled pixel in a learnable LBP kernel, and I_pivot is the sampled pixel on the pivot. We provide a sensitivity analysis of k w.r.t. classification accuracy in the appendix. The hyperbolic tangent function is differentiable and has a simple closed form for the implementation.
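For training, Eq. 2 replaces the hard comparison with a smooth surrogate; a one-line sketch (using the appendix's choice k = 0.1 as a default) is shown below. The function name is our own.

```python
import numpy as np

def soft_compare(i_lbp, i_pivot, k=0.1):
    """Differentiable surrogate of (i_lbp > i_pivot) from Eq. 2.
    Smaller k -> closer to a hard 0/1 comparison, at the cost of steeper gradients."""
    return 0.5 * (np.tanh((i_lbp - i_pivot) / k) + 1.0)
```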
DEFORMATION WITH OPTICAL FLOW THEORY
To deform the local binary patterns, we resort to the concept of optical flow. Assuming the image content in the same class shares the same features, even though there are certain minor shape transformations, chrominance variations, or different view angles, the optical flow on these images should share similarities with each other:
(∂I/∂x) Vx + (∂I/∂y) Vy = − ∂I/∂t
The equation above is the optical flow constraint, where I is the pixel value, a.k.a. luminance, and Vx and Vy represent the two orthogonal components of the optical flow among the same or similar image content. The LHS can be interpreted as a dot-product of the image gradient ((∂I/∂x) x̂ + (∂I/∂y) ŷ) and the optical flow (Vx x̂ + Vy ŷ), and this product is the negative derivative of luminance with respect to time across different images, where x̂ and ŷ denote the two orthogonal unit vectors on the 2-D coordinates.
To minimize the difference between images in the same class is equivalent to extract similar features of the images in the same class for classification. However, both the direction and magnitude of the optical flow underlying the dataset are unknown. The minimization of a dot-product cannot be done by changing the image gradient to be orthogonal with the optical flow. Therefore, the only feasible path to minimize the magnitude of the RHS is to minimize the image gradient. Please note the sampled image gradient can be changed by deforming the apertures, which are the sampling points of local binary patterns.
When applying calculus chain rule on the cost of LBPNet with regard to the position of each sampling point, one can easily conclude that the last term of the chain rule is the image gradient. Since the sampled pixel value is the same as the pixel value on the image, the gradient of sampled value with regard to the sampling location on a pattern is equivalent to the image gradient on the incoming image. Eq. 3 shows the gradient from the output loss through a fully-connected layer with weights, wj , toward the image gradient.
∂cost/∂position = Σ_j (∆_j w_j) · (∂g(s)/∂s) · (∂s/∂I_lbp) · ((dI_lbp/dx) x̂ + (dI_lbp/dy) ŷ),   (3)
where ∆_j is the backward-propagated error, ∂g(s)/∂s is the derivative of the activation function, and ∂s/∂I_lbp is the gradient of Eq. 2. Please refer to the appendix for more details of the forward-backward training algorithm.
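Putting Eq. 3 together, the update of a sampling position multiplies the back-propagated error by the derivative of the soft comparison in Eq. 2 and by the local image gradient at the sampled location. The sketch below illustrates this chain for a single sampling point; central finite differences stand in for the image gradient, and all names are our own illustration.

```python
import numpy as np

def position_gradient(delta_w_sum, dg_ds, i_lbp, i_pivot, img, y, x, k=0.1):
    """Gradient of the loss w.r.t. one sampling point's (x, y) position (Eq. 3).

    delta_w_sum: sum_j (Delta_j * w_j), the error collected from the next layer
    dg_ds:       derivative of the activation function at this unit
    img:         input feature map; (y, x) is the rounded interior sampling location
    """
    # derivative of the soft comparison in Eq. 2 w.r.t. the sampled pixel value
    ds_dI = 0.5 / k * (1.0 - np.tanh((i_lbp - i_pivot) / k) ** 2)
    # image gradient at the sampling location, approximated by central differences
    dI_dx = (img[y, x + 1] - img[y, x - 1]) / 2.0
    dI_dy = (img[y + 1, x] - img[y - 1, x]) / 2.0
    g = delta_w_sum * dg_ds * ds_dI
    return g * dI_dx, g * dI_dy   # (d cost / dx, d cost / dy)
```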
EXPERIMENTS
In this section, we conduct a series of experiments on five datasets and their subsets: MNIST, SVHN, DHCD, ICDAR2005, and Chars74K to verify the capability of LBPNet. Some typical images of these character datasets are shown in Fig. 1. Please refer to the appendix for the description of datasets. We additionally evaluate LBPNet on a few broader categories such as face, pedestrian, and affNIST and have observed promising results for object classification.
EXPERIMENT SETUP
In all of the experiments, we use all training examples to train LBPNets and directly validate on test sets. To avoid peeping, we do not employ the validation errors in the backward propagation. There are no data augmentations used in the experiments.
We implement two versions of LBPNet using the two building blocks shown in Fig. 6 (b) and (c). For the remaining parts of this paper, we call the LBPNet using 1-by-1 convolution as the channel fusion mechanism LBPNet(1x1) (has convolution in the fusion part), and the version of LBPNet utilizing random projection LBPNet(RP) (totally convolution-free). The number of sampling points in a pattern is set to 4, and the area size for the pattern to deform is 5-by-5.
LBPNet also has an additional multilayer perceptron (MLP) block, which is made with two fully-connected layers of 512 and #classes neurons. Besides the nonlinearities, there is one batch-normalization layer. The MLP block's performance without any convolutional layers or LBP layers on the three datasets is shown in Tables 2 and 3. The model size and speed of the MLP block are excluded from the comparisons since all models have an MLP block.
To understand the capability of LBPNet when compared with existing convolution-based methods, we build two feed-forward streamline CNNs as our baseline for each dataset. CNN-baseline is designed in the same number of layers and number of kernels with the LBPNet; the other, CNN-lite, is
designed subject to the same memory footprint with the LBPNet(RP). The basic block of the CNNs contains a spatial convolution layer (Conv) followed by a batch normalization layer (BatchNorm) and a rectified linear layer (ReLU).
In the BNN (Hubara et al., 2016) paper, the classification on MNIST is done with a binarized multilayer perceptron network (MLP). We adopt the binarized convolutional neural network (BCNN) in (Hubara et al., 2016) for SVHN to perform the classification and reproduce the same accuracy as shown in (Lin et al., 2017) on MNIST.
EXPERIMENTAL RESULTS
Table 2 and 3 show the experimental results of LBPNet on MNIST and SVHN together with the baseline and previous works. We list the classification error rate, model size, latency of the inference, and the speedup compared with the baseline CNN. The best value of each column is shown in bold. Please note the calculation of latency in cycles is made with an assumption that no SIMD parallelism and pipelining optimization is applied. Because we need to understand the total number of computations in every network but both floating-point and binary arithmetics are involved, we cannot use FLOPs as a measure. Therefore, we adopt typical cycle counts shown in Table 1 as the measure of latencies. For the calculation of model size, we exclude the MLP blocks and count the required memory for necessary variables to focus on the comparison between the intrinsic operations in CNNs and LBPNets, respectively the convolution and the LBP operation.
Table 2: The performance of LBPNet on MNIST.
Model            Error ↓   Size ↓ (Bytes)   Latency ↓ (cycles)   Speedup ↑
MLP Block        24.22%    -                -                    -
CNN-baseline     0.44%     1.41M            222.0M               1X
CNN-lite         1.20%     456              553K                 401.4X
BCNN             0.47%     1.89M            306.1M               0.725X
LBCNN            0.49%     12.2M            8.78G                0.0253X
LBPNet (this work):
  LBPNet (1x1)   0.50%     1.27M            27.73M               8.004X
  LBPNet (RP)    0.50%     397.5            651.2K               340.8X
Table 3: The performance of LBPNet on SVHN.
Model            Error ↓   Size ↓ (Bytes)   Latency ↓ (cycles)   Speedup ↑
MLP Block        77.78%    -                -                    -
CNN-baseline     8.30%     15.96M           9.714G               1X
CNN-lite         69.14%    2.80K            1.576M               6164X
BCNN             2.53%     1.89M            312M                 31.18X
LBCNN            5.50%     6.70M            7.098G               1.369X
LBPNet (this work):
  LBPNet (1x1)   8.33%     1.51M            9.175M               1059X
  LBPNet (RP)    7.31%     2.79K            4.575M               2123X
MNIST. The CNN-baseline and LBPNet(RP) share the same network structure, 39-40-80, and the CNN-lite is limited to the same memory size so that the network structure is 2-3. The baseline CNN achieves the lowest classification error rate 0.44%. The BCNN possesses a decent speedup while maintaining the classification accuracy. While LBCNN claimed its saving in memory footprint, to achieve 0.49% error rate, 75 layers of LBCNN basic blocks are used. As a result, LBCNN loses speedups. The 3-layer LBPNet(1x1) with 40 LBP kernels and 40 1-by-1 convolutional kernels achieves 0.50%. The 3-layer LBPNet(RP) reaches 0.50% error rate as well. Although LBPNet’s performance is slightly inferior, the model size of LBPNet(RP) is reduced to 397.5 bytes, and the speedup is 340.8X faster than the baseline CNN. Even BCNN cannot be on par with such a vast memory reduction and speedup. The CNN-lite delivering the worst error rate demonstrates that if we shrink a CNN model down to the same memory size as the LBPNet(RP), the classification error of CNN(lite) is greatly sacrificed.
SVHN. Table 3 shows the experimental results of LBPNet on SVHN together with the baseline and previous works. The CNN-baseline and LBPNet(RP) share the same network structure, 67-70- 140-280-560, and the CNN-lite is limited to the same memory size so that the network structure is 8-17. BCNN outperforms our baseline and achieves 2.53% with smaller memory footprint and higher speed. LBCNN also achieve a good memory reduction and 1.369X speed-up. The 5-layer LBPNet(1x1) with 8 LBP kernels and 32 1-by-1 convolutional kernels achieve 8.33%, which is close to our baseline CNN’s 8.30%. The convolution-free LBPNet(RP) for SVHN is built with 5 layers of LBP basic blocks, 67-70-140-280-560, and achieves 7.31% error rate. Compared with CNN(lite)’s high error rate, the learning of LBPNet’s sampling point positions is proven to be effective and economical.
More Results. Table 4 lists the experimental results of LBPNet(RP) on all character recognition datasets. LBPNets achieve the state-of-the-art accuracies on all of the datasets.
PRELIMINARY RESULTS ON OBJECTS AND DEFORMED PATTERNS
Next, we show results on datasets of general objects.
Pedestrian: We first evaluate LBPNet on the INRIA pedestrian dataset (Dalal & Triggs, 2005), which consists of cropped positive and negative images. Note that we did not implement an image-based object detector due to the focus of our paper. Fig. 7 shows the trade-off curves of a 3-layer LBPNet (37-40-80) and a 3-layer CNN (37-40-80). Here we did not exhaustively explore the capability of LBPNet for object classification.
Face: We apply our LBPNet on FDDB dataset (Jain & Learned-Miller, 2010) to verify the face classification performance of LBPNet. Same as previously, we perform training and testing on a dataset of cropped images; we use the annotated positive face examples with cropped four non-person frames in every training image to create negative face examples for both training and testing. The structures of the LBPNet and CNN are the same as before (37-40-80). LBPNet achieves 97.78%, and the baseline CNN reaches 97.55%.
affNIST: We conduct an experiment on affNIST 1, which is composed of 32 translation variations of MNIST (including the original MNIST). To accelerate the experiment, we randomly draw three variations of each original example to get training and testing subsets of affNIST. We repeat the same process to draw examples and train
the networks ten times to get an averaged result. The network structure of LBPNet and our baseline CNN are the same, 39-40-80. To improve the translation invariant property of the networks, we use two max-pooling layers following the first and second LBP layer or convolutional layer. With the training and testing on the subsets of affNIST, LBPNet achieves 93.18%, and CNN achieves 94.88%.
CONCLUSION AND FUTURE WORK
We have built a convolution-free, end-to-end, and bitwise LBPNet from basic operations and verified its effectiveness on character recognition datasets with orders of magnitude speedup (hundred times) in testing and model size reduction (thousand times) when compared with the baseline and the binarized CNNs. The learning of local binary patterns results in an unprecedentedly efficient model since, to the best of our knowledge, there is no compression/discretization of CNN can achieve the KByte level model size while maintaining the state-of-the-art accuracy on the character recognition tasks. Both the memory footprints and computation latencies of LBPNet and previous works are listed. LBPNet points to a promising direction for building new generation hardware-friendly deep learning algorithms to perform computation on the edge devices.
1https://www.cs.toronto.edu/ tijmen/affNIST/
APPENDIX
FORWARD PROPAGATION ALGORITHM
Algorithm 1: Forward of LBPNet
input : An input tensor X of shape (ci, w, h), previous pattern P of shape (co, ns), and the fixed projection map M of shape (co, ns). The pattern width k and padding width d = ⌊k/2⌋. Please note every element of P is a tuple.
output: An output tensor y of shape (co, w, h).
1  X ← ZeroPadding(X, d);
2  for io = 1 to co do
3      for ih = 1 to h do
4          for iw = 1 to w do
5              for is = 1 to ns do
6                  ii ← M[io, is];
7                  (ipx, ipy) ← P[io, is];
8                  pivot ← X[iw + d][ih + d][ii];
9                  sample ← X[iw + ipx][ih + ipy][ii];
10                 if sample > pivot then
11                     y[iw][ih][io] |= (1 << is)
12                 end
13             end
14         end
15     end
16 end
17 return y
Alg. 1 describes the forward algorithm of an LBP layer. The three outermost nested loops form the sliding window operation to generate an output feature maps, and the innermost loop is the LBP operation. We combine the LBP operation with random projection to skip unnecessary comparisons. Firstly, we look up the random projection map for the input plane index and then use it to sample only the necessary pairs for the comparison.
The core of LBPNet is implemented with bit shifting and bitwise-OR, and both of them have no concurrent accessing issue. That is, we can directly implement it with CUDA programming to accelerate the inference on GPU. If we are implementing an LBPNet hardware accelerator, no matter on FPGA or ASIC flow, the absence of concurrent issue resulted from CNN’s accumulation process will guarantee a speedup over CNN’s hardware accelerator.
BACKWARD PROPAGATION ALGORITHM
Algorithm 2: Backward of LBPNet
input : An input tensor X, a gradient tensor of the loss w.r.t. the output of the current layer g_o of shape (co × wo × ho), previous pattern P, and the fixed projection map M. The pattern width k and padding width d. During training, we remember the previous real-valued pattern R of the same shape as P.
output: The gradient of the loss w.r.t. the input tensor g_i of shape (ci, w, h), and the gradient of the loss w.r.t. the positions of the sampling points g_P of shape (co, ns). Please note every element of g_P is a tuple.
1  ∇ ← ImageGradient(X);
2  P ← round(R);
3  D ← LookUpDifference(X, P, M);
4  E ← ConstructExp(tanh(D), P, M);
5  dE ← ConstructDiffExp(1 − tanh²(D), P, M);
6  g_i ← (1/2) g_o^T E;
7  g_P ← g_o (dE ⊙ ∇)^T;
8  return g_i, g_P, R, P
Alg. 2 describes the backward propagation at a high-level point of view. Because LBPNet requires sophisticated element-wise matrix operation, some of them have no matrix-to-vector or matrix-to-
matrix multiplication equivalence but can be implemented and optimized in low-level CUDA codes for training speed. The ImageGradient(.) function calculates the image gradient vector field of the input feature map. Then, round(.) function discretize the previous real-valued pattern for the image sampling later on. LookUpDifference(.) samples the input tensor with the concerned input plane index from the projection map. This step is similar to the core of Alg. 1, but we calculate the difference instead of comparing the pairs of sampled pixels.
The ConstructExp(.) function multiplies the hyperbolic tangential difference matrix with the exponential of 2 corresponding to the position of the comparison result in an output bit array. For example, if a comparison result is allocated to the MSB, the hyperbolic tangential value will be multiplied with 2ns , assuming ns sampling pairs per kernel. The ConstructDiffExp(.) performs the same calculation with ConstructExp(.) except for the first argument is replaced with the derivative of tanh(.). These two sub-routine functions convert sparse kernels to dense kernels for the follow matrix-to-matrix multiplications.
The sixth line uses a matrix-to-matrix multiplication to collect and weight the output gradient tensor from the successive layer. This step is the same with CNN’s backward propagation. The resulting tensor is also called input gradient tensor and will be passed to the preceding layer to accomplish the backward propagation.
The seventh line element-wisely times the differential exponential matrix with the image gradient first and then multiply the result with the output gradient tensor. The resulting tensor carries the gradient of LBP parameters, ∂cost∂position , which will be multiplied with an adaptive learning rate for the update of sampling positions of an LBP kernel.
DATASET DESCRIPTIONS
Images in the padded MNIST dataset are hand-written numbers from 0 to 9 in 32-by-32 grayscale bitmap format. The dataset is composed of a training set of 60,000 examples and a test set of 10,000 examples. The digits were written by both staff and students. Most of the images can be easily recognized and classified, but a portion of sloppy images remains in MNIST.
SVHN is a photo dataset of house numbers. Although cropped, images in SVHN include some distracting digits around the labeled number in the middle of the image. These distracting parts increase the difficulty of classifying the printed numbers. There are 73,257 training examples and 26,032 test examples in SVHN.
Table 5: The datasets we used in the experiment.
Dataset                Description                         #Class   #Examples   CNN Baseline                      LBPNet (RP) (ours)
DHCD                   Handwritten Devanagari characters   46       46x2,000    98.47% (Acharya et al., 2015)     99.19%
ICDAR-DIGITS           Photos of numbers                    10       988         100.00%                           100.00%
ICDAR-UpperCase        Photos of upper case Eng. char.     26       5,288       100.00%                           100.00%
ICDAR-LowerCase        Photos of lower case Eng. char.     26       5,453       100.00%                           100.00%
Chars74K-EnglishImg    Photos, Alphanumeric                 62       7,705       47.09% (De Campos et al., 2009)   58.31%
Chars74K-EnglishHnd    Handwritten, Alphanumeric            62       3,410       71.32%                            73.37%
Chars74K-EnglishFnt    Printed Fonts, Alphanumeric          62       62,992      78.09%                            77.26%
LEARNING CURVES
Figure 8: Error curves on benchmark datasets. (a) test errors on MNIST; (b) test errors on SVHN.
Fig. 8 shows the learning curves of LBPNets on MNIST and SVHN.
SENSITIVITY ANALYSIS OF k
Fig. 9 shows the sensitivity analysis of the parameter k in Eq. 2 w.r.t. the training accuracy. The LBPNet structure we use is 3-layer, 39-40-80. We gradually reduce k from 10 to 0.01 to verify the effect on the learning curves. Sub-figures (a) and (c) show that the smaller k is, the lower the error rate is, but there exists a saturation when k decreases below 1. Sub-figure (b) shows that a smaller k suppresses the ripple of the training loss better. In summary, because we approximate the comparison function with a shifted and scaled hyperbolic tangent function, a smaller k implies less error between the approximation and the original comparison curve, and hence simulates the comparison while securing differentiability. In this paper, we choose k = 0.1 to balance between classification accuracy and the overflow risk of the gradient summation during backward propagation. | 1. What is the focus and contribution of the paper on character recognition?
2. What are the strengths of the proposed approach, particularly in terms of efficiency improvement?
3. What are the weaknesses of the paper, especially regarding the experiment section?
4. Do you have any concerns about the implementation and calculation of the proposed method?
5. How would the network perform with a greater number of layers?
6. How would the network perform on datasets that are not well-aligned?
7. Can the proposed method be applied to other vision tasks where LBP is traditionally used? | Review | Review
This paper proposed a LBPNet for character recognition, which introduces the LBP feature extraction into deep learning. Personally I think that this idea is interesting for improving the efficiency of CNNs, as traditionally LBP has been demonstrated its good performance and efficiency in some vision tasks such as face recognition or pedestrian detection. However, I do have the following concerns about the paper:
1. Calculation/Implementation of Eq. 4: I do not quite understand how it is derived, and how to use Eq. 3 in the calculation. I suggest the authors explain more details, as this is the key to the implementation of LBP layers.
2. Effects of several factors on performance in the experiments are missing: (1) random projection map in Fig. 5, (2) $k$ in Eq. 2, and (3) the order of images for computing RHS of Eq. 3. In order to better demonstrate LBPNet, I suggest to add such experiments, plus training/testing behavior comparison of different networks.
3. Does this network still work when it is made much deeper?
4. Data: The datasets used in the experiments are all well-aligned. This makes me feel that the RHS of Eq. 3 does make sense, because it will capture the spatial difference among data, like temporal difference in videos. How will the network behave on the dataset that is not aligned well, like affnist dataset?
5. How will this network behave for the applications such as face recognition or pedestrian detection where traditionally LBP is applied? |
ICLR | Title
Network Architecture Search for Domain Adaptation
Abstract
Deep networks have been used to learn transferable representations for domain adaptation. Existing deep domain adaptation methods systematically employ popular hand-crafted networks designed specifically for image-classification tasks, leading to sub-optimal domain adaptation performance. In this paper, we present Neural Architecture Search for Domain Adaptation (NASDA), a principle framework that leverages differentiable neural architecture search to derive the optimal network architecture for domain adaptation task. NASDA is designed with two novel training strategies: neural architecture search with multi-kernel Maximum Mean Discrepancy to derive the optimal architecture, and adversarial training between a feature generator and a batch of classifiers to consolidate the feature generator. We demonstrate experimentally that NASDA leads to state-of-the-art performance on several domain adaptation benchmarks.
1 INTRODUCTION
Supervised machine learning models (Φ) aim to minimize the empirical test error ℓ(Φ(x), y) by optimizing Φ on training data (x) and ground-truth labels (y), assuming that the training and testing data are sampled i.i.d. from the same distribution. In practice, however, the training and testing data are typically collected from related domains under different distributions, a phenomenon known as domain shift (or domain discrepancy) (Quionero-Candela et al., 2009). To avoid the cost of annotating each new test set, Unsupervised Domain Adaptation (UDA) tackles domain shift by transferring the knowledge learned from a rich-labeled source domain (P(x^s, y^s)) to the unlabeled target domain (Q(x^t)). Recently, unsupervised domain adaptation research has achieved significant progress with techniques like discrepancy alignment (Long et al., 2017; Tzeng et al., 2014; Ghifary et al., 2014; Peng & Saenko, 2018; Long et al., 2015; Sun & Saenko, 2016), adversarial alignment (Xu et al., 2019a; Liu & Tuzel, 2016; Tzeng et al., 2017; Liu et al., 2018a; Ganin & Lempitsky, 2015; Saito et al., 2018; Long et al., 2018), and reconstruction-based alignment (Yi et al., 2017; Zhu et al., 2017; Hoffman et al., 2018; Kim et al., 2017). While such models typically learn a feature mapping from one domain (Φ(x^s)) to another (Φ(x^t)) or derive a joint representation across domains (Φ(x^s) ⊗ Φ(x^t)), the developed models have limited capacity in deriving an optimal neural architecture specific to domain transfer.
To advance network designs, neural architecture search (NAS) automates the network architecture engineering process by reinforcement supervision (Zoph & Le, 2017) or through neuro-evolution (Real et al., 2019a). Conventional NAS models aim to derive the neural architecture α along with the network parameters w by solving a bilevel optimization problem (Anandalingam & Friesz, 1992): Φ_{α,w} = argmin_α L_val(w*(α), α) s.t. w*(α) = argmin_w L_train(w, α), where L_train and L_val indicate the training and validation loss, respectively. While recent works demonstrate competitive performance on tasks such as image classification (Zoph et al., 2018; Liu et al., 2018c;b; Real et al., 2019b) and object detection (Zoph & Le, 2017), designs of existing NAS algorithms typically assume that the training and testing domains are sampled from the same distribution, neglecting the scenario where two data domains or multiple feature distributions are of interest.
To efficiently devise a neural architecture across different data domains, we propose a novel learning task called Neural Architecture Search for Domain Adaptation (NASDA). The ultimate goal of NASDA is to minimize the validation loss of the target domain (Ltval). We postulate that a solution to NASDA should not only minimize validation loss of the source domain (Lsval), but should also
reduce the domain gap between the source and target. To this end, we propose a new NAS learning schema:
Φ_{α,w} = argmin_α L^s_val(w*(α), α) + disc(Φ*(x^s), Φ*(x^t))   (1)
s.t.  w*(α) = argmin_w L^s_train(w, α)   (2)
where Φ* = Φ_{α, w*(α)}, and disc(Φ*(x^s), Φ*(x^t)) denotes the domain discrepancy between the source and target. Note that in unsupervised domain adaptation, L^t_train and L^t_val cannot be computed directly due to the lack of labels in the target domain.
Inspired by the past works in NAS and unsupervised domain adaptation, we propose in this paper an instantiated NASDA model, which comprises of two training phases, as shown in Figure 1. The first is the neural architecture searching phase, aiming to derive an optimal neural architecture (α∗), following the learning schema of Equation 1,2. Inspired by Differentiable ARchiTecture Search (DARTS) (Liu et al., 2019a), we relax the search space to be continuous so that α can be optimized with respect to Lsval and disc(Φ(xs),Φ(xt)) by gradient descent. Specifically, we enhance the feature transferability by embedding the hidden representations of the task-specific layers to a reproducing kernel Hilbert space where the mean embeddings can be explicitly matched by minimizing disc(Φ(xs),Φ(xt)). We use multi-kernel Maximum Mean Discrepancy (MK-MMD) (Gretton et al., 2007) to evaluate the domain discrepancy.
The second training phase aims to learn a good feature generator with task-specific loss, based on the derived α∗ from the first phase. To establish this goal, we use the derived deep neural network (Φα∗ ) as the feature generator (G) and devise an adversarial training process between G and a batch of classifiers C. The high-level intuition is to first diversify C in the training process, and train G to generate features such that the diversified C can have similar outputs. The training process is similar to Maximum Classifier Discrepancy framework (MCD) (Saito et al., 2018) except that we extend the dual-classifier in MCD to an ensembling of multiple classifiers. Experiments on standard UDA benchmarks demonstrate the effectiveness of our derived NASDA model in achieving significant improvements over state-of-the-art methods.
Our contributions of this paper are highlighted as follows:
• We formulate a novel dual-objective task of Neural Architecture Search for Domain Adaptation (NASDA), which optimize neural architecture for unsupervised domain adaptation, concerning both source performance objective and transfer learning objective.
• We propose an instantiated NASDA model that comprises two training stages, aiming to derive optimal architecture parameters α∗ and feature extractor G, respectively. We are the first to show the effectiveness of MK-MMD in NAS process specified for domain adaptation.
• Extensive experiments on multiple cross-domain recognition tasks demonstrate that NASDA achieves significant improvements over traditional unsupervised domain adaptation models as well as state-of-the-art NAS-based methods.
2 RELATED WORK
Deep convolutional neural network has been dominating image recognition task. In recent years, many handcrafted architectures have been proposed, including VGG (Simonyan & Zisserman, 2014), ResNet (He et al., 2016), Inception (Szegedy et al., 2015), etc., all of which verifies the importance of human expertise in network design. Our work bridges domain adaptation and the emerging field of neural architecture search (NAS), a process of automating architecture engineering technique.
Neural Architecture Search Neural Architecture Search has become the mainstream approach to discovering efficient and powerful network structures (Zoph & Le, 2017; Zoph et al., 2018). The automatically searched architectures have achieved highly competitive performance in tasks such as image classification (Liu et al., 2018c;b), object detection (Zoph et al., 2018), and semantic segmentation (Chen et al., 2018). Reinforcement-learning-based NAS methods (Zoph & Le, 2017; Tan et al., 2019; Tan & Le, 2019) are usually computationally intensive, thus hampering their usage under a limited computational budget. To accelerate the search procedure, many techniques have been proposed, and they mainly follow four directions: (1) estimating the actual performance with lower fidelities. Such lower fidelities include shorter training times (Zoph et al., 2018; Zela et al., 2018), training on a subset of the data (Klein et al., 2017), or on lower-resolution images. (2) estimating the performance based on learning curve extrapolation. Domhan et al. (2015) propose to extrapolate initial learning curves and terminate those predicted to perform poorly. (3) initializing the novel architectures based on other well-trained architectures. Wei et al. (2016) introduce network morphisms to modify an architecture without changing the network objects, resulting in methods that only require a few GPU days (Elsken et al., 2017; Cai et al., 2018a; Jin et al., 2019; Cai et al., 2018b). (4) one-shot architecture search. One-shot NAS treats all architectures as different subgraphs of a supergraph and shares weights between architectures that have edges of this supergraph in common (Saxena & Verbeek, 2016; Liu et al., 2019b; Bender, 2018). DARTS (Liu et al., 2019a) places a mixture of candidate operations on each edge of the one-shot model and optimizes the weights of the candidate operations with a continuous relaxation of the search space. Inspired by DARTS (Liu et al., 2019a), our model employs differentiable architecture search to derive the optimal feature extractor for unsupervised domain adaptation.
Domain Adaptation Unsupervised domain adaptation (UDA) aims to transfer the knowledge learned from one or more labeled source domains to an unlabeled target domain. Various methods have been proposed, including discrepancy-based UDA approaches (Long et al., 2017; Tzeng et al., 2014; Ghifary et al., 2014; Peng & Saenko, 2018), adversary-based approaches (Liu & Tuzel, 2016; Tzeng et al., 2017; Liu et al., 2018a), and reconstruction-based approaches (Yi et al., 2017; Zhu et al., 2017; Hoffman et al., 2018; Kim et al., 2017). These models are typically designed to tackle single source to single target adaptation. Compared with single source adaptation, multi-source domain adaptation (MSDA) assumes that training data are collected from multiple sources. Originating from the theoretical analysis in (Ben-David et al., 2010; Mansour et al., 2009; Crammer et al., 2008), MSDA has been applied to many practical applications (Xu et al., 2018; Duan et al., 2012; Peng et al., 2019). Specifically, Ben-David et al. (2010) introduce an H∆H-divergence between the weighted combination of source domains and a target domain. These models are developed using the existing hand-crafted network architecture. This property limits the capacity and versatility of domain adaptation as the backbones to extract the features are fixed. In contrast, we tackle the UDA from a different perspective, not yet considered in the UDA literature. We propose a novel dual-objective model of NASDA, which optimize neural architecture for unsupervised domain adaptation. We are the first to show the effectiveness of MK-MMD in NAS process which is designed specifically for domain adaptation.
3 NEURAL ARCHITECTURE SEARCH FOR DOMAIN ADAPTATION
In unsupervised domain adaptation, we are given a source domain D_s = {(x_i^s, y_i^s)}_{i=1}^{n_s} of n_s labeled examples and a target domain D_t = {x_j^t}_{j=1}^{n_t} of n_t unlabeled examples. The source domain and target domain are sampled from joint distributions P(x^s, y^s) and Q(x^t, y^t), respectively. The goal of this paper is to leverage NAS to derive a deep network G : x ↦ y, which is optimal for reducing the shifts in data distributions across domains, such that the target risk ε_t(G) = E_{(x^t, y^t)∼Q}[G(x^t) ≠ y^t] is minimized. We will start by introducing some preliminary background in Section 3.1. We then describe how to incorporate the MK-MMD into the neural architecture searching framework in
Section 3.2. Finally, we introduce the adversarial training between our derived deep network and a batch of classifiers in Section 3.3. An overview of our model can be seen in Algorithm 1.
3.1 PRELIMINARY: DARTS
In this work, we leverage DARTS (Liu et al., 2019a) as our baseline framework. Our goal is to search for a robust cell and apply it to a network that is optimal to achieve domain alignment between Ds and Dt. Following Zoph et al. (2018), we search for a computation cell as the building block of the final architecture. The final convolutional network for domain adaptation can be stacked from the learned cell. A cell is defined as a directed acyclic graph (DAG) of L nodes, {xi}Ni=1, where each node x(i) is a latent representation and each directed edge e(i,j) is associated with some operation o(i,j) that transforms x(i). DARTS (Liu et al., 2019a) assumes that cells contain two input nodes and a single output node. To make the search space continuous, DARTS relaxes the categorical choice of a particular operation to a softmax over all possible operations and is thus formulated as:
\bar{o}^{(i,j)}(x) = \sum_{o \in \mathcal{O}} \frac{\exp(\alpha_o^{(i,j)})}{\sum_{o' \in \mathcal{O}} \exp(\alpha_{o'}^{(i,j)})}\, o(x)   (3)

where O denotes the set of candidate operations and i < j so that skip connections can be applied. An intermediate node can be represented as x^{(j)} = Σ_{i<j} o^{(i,j)}(x^{(i)}). The task of architecture search then reduces to learning a set of continuous variables α = {α^{(i,j)}}. At the end of the search, a discrete architecture is obtained by replacing each mixed operation ō^{(i,j)} with the most likely operation, i.e., o^{*(i,j)} = argmax_{o∈O} α_o^{(i,j)}, and α^* = {o^{*(i,j)}}.
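To make the continuous relaxation in Equation 3 concrete, the sketch below shows one way a mixed operation and a small cell could be written in PyTorch, the platform used by the experiments in Section 4. The three candidate operations and the single-input cell are illustrative simplifications, not the paper's full search space or exact cell definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted sum over candidate operations on one edge (Eq. 3)."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),  # conv stand-in
            nn.MaxPool2d(3, stride=1, padding=1),                     # 3x3 max pooling
            nn.Identity(),                                            # skip connection
        ])
        # One architecture parameter alpha_o^(i,j) per candidate operation.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

class Cell(nn.Module):
    """A tiny DAG cell: node j sums the mixed operations from all earlier nodes."""
    def __init__(self, channels, num_nodes=4):
        super().__init__()
        self.edges = nn.ModuleDict({
            f"e{i}_{j}": MixedOp(channels)
            for j in range(1, num_nodes) for i in range(j)
        })
        self.num_nodes = num_nodes

    def forward(self, x0):
        nodes = [x0]
        for j in range(1, self.num_nodes):
            nodes.append(sum(self.edges[f"e{i}_{j}"](nodes[i]) for i in range(j)))
        return nodes[-1]

if __name__ == "__main__":
    cell = Cell(channels=16)
    print(cell(torch.randn(2, 16, 32, 32)).shape)  # torch.Size([2, 16, 32, 32])
```

The architecture parameters are the alpha vectors of every edge; in Phase I they are updated with the validation-plus-MMD objective while the convolution weights are updated with the source training loss.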
3.2 SEARCHING NEURAL ARCHITECTURE
Denote by L_train and L_val the training loss and validation loss, respectively. Conventional neural architecture search models aim to derive Φ_{α,w} by solving a bilevel optimization problem (Anandalingam & Friesz, 1992): Φ_{α,w} = argmin_α L_val(w^*(α), α) s.t. w^*(α) = argmin_w L_train(w, α). While recent work (Zoph et al., 2018; Liu et al., 2018c) has shown promising performance on tasks such as image classification and object detection, existing models assume that the training data and testing data are sampled from the same distribution. Our goal is to jointly learn the architecture α and the weights w within all the mixed operations (e.g., the weights of the convolution filters) so that the derived model Φ_{w^*, α^*} can transfer knowledge from D_s to D_t with simple domain adaptation guidance. Following Equation 1, we leverage multi-kernel Maximum Mean Discrepancy (Gretton et al., 2007) to evaluate disc(Φ^*(x^s), Φ^*(x^t)).
MK-MMD Let H_k be the Reproducing Kernel Hilbert Space (RKHS) endowed with a characteristic kernel k. The mean embedding of a distribution P in H_k is the unique element μ_k(P) such that E_{x∼P} f(x) = ⟨f(x), μ_k(P)⟩_{H_k} for all f ∈ H_k. The MK-MMD d_k(P, Q) between probability distributions P and Q is defined as the RKHS distance between the mean embeddings of P and Q. The squared formulation of MK-MMD is defined as

d_k^2(P, Q) \triangleq \left\| \mathbb{E}_P[\Phi_\alpha(x^s)] - \mathbb{E}_Q[\Phi_\alpha(x^t)] \right\|_{\mathcal{H}_k}^2 .   (4)

In this paper, we consider the case of combining Gaussian kernels with injective functions f_Φ, where k(x, x') = exp(−‖f_Φ(x) − f_Φ(x')‖²). Inspired by Long et al. (2015), the characteristic kernel associated with the feature map Φ, k(x^s, x^t) = ⟨Φ(x^s), Φ(x^t)⟩, is defined as the convex combination of n positive semidefinite kernels {k_u},
\mathcal{K} \triangleq \Big\{ k = \sum_{u=1}^{n} \beta_u k_u \ :\ \sum_{u=1}^{n} \beta_u = 1,\ \beta_u > 0,\ \forall u \Big\},   (5)

where the constraints on {β_u} are imposed to guarantee that the derived kernel k is characteristic. In practice we use finite samples from the distributions to estimate the MMD distance. Given X_s = {x_1^s, ..., x_m^s} ∼ P and X_t = {x_1^t, ..., x_m^t} ∼ Q, one estimator of d_k²(P, Q) is

\hat{d}_k^2(P, Q) = \frac{1}{\binom{m}{2}} \sum_{i \neq i'} k(x_i^s, x_{i'}^s) - \frac{2}{\binom{m}{2}} \sum_{i \neq j} k(x_i^s, x_j^t) + \frac{1}{\binom{m}{2}} \sum_{j \neq j'} k(x_j^t, x_{j'}^t).   (6)
Algorithm 1 Neural Architecture Search for Domain Adaptation

Phase I: Searching Neural Architecture
1: Create a mixed operation ō^{(i,j)} parametrized by α^{(i,j)} for each edge (i, j)
2: while not converged do
3:   Update the architecture α by descending ∂/∂α L^s_val(w − ξ ∂/∂w L^s_train(w, α), α) + λ ∂/∂α d̂²_k(Φ(x^s), Φ(x^t))
4:   Update the weights w by descending ∂/∂w L^s_train(w, α)
5: end while
6: Derive the final architecture based on the learned α^*

Phase II: Adversarial Training for Domain Adaptation
1: Stack the feature generator G based on α^*, initialize the classifiers C
2: while not converged do
3:   Step one: train G and C with L_s(x^s, y^s) = −E_{(x^s,y^s)∼D_s} Σ_{k=1}^{K} 1[k = y^s] log p(y^s | x^s)   (Eq. 12)
4:   Step two: fix G, train C with the loss L_s(x^s, y^s) − L_adv(x^t)   (Eq. 13)
5:   Step three: fix C, train G with the loss L_adv(x^t)
6: end while
The merit of multi-kernel MMD lies in its differentiability, which allows it to be easily incorporated into the deep network. However, computing d̂²_k(P, Q) as in Equation 6 incurs a complexity of O(m²), which is undesirable in the differentiable architecture search framework. In this paper, we therefore use the linear-time unbiased estimate of MK-MMD (Gretton et al., 2012). A sketch of such a linear-time estimate is given below.
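The sketch below illustrates one way to compute a linear-time MK-MMD estimate between source and target features in PyTorch. The linear-time statistic pairs up consecutive samples so only O(m) kernel evaluations are needed; the Gaussian bandwidths and the uniform mixture weights β_u are assumptions for illustration, not values taken from the paper.

```python
import torch

def multi_kernel(x, y, bandwidths=(0.5, 1.0, 2.0, 4.0)):
    """Convex combination of Gaussian kernels (Eq. 5) with uniform beta_u."""
    d2 = ((x - y) ** 2).sum(dim=-1)
    return sum(torch.exp(-d2 / bw) for bw in bandwidths) / len(bandwidths)

def mk_mmd2_linear(feat_s, feat_t):
    """Linear-time unbiased MK-MMD estimate over paired sample quadruples."""
    m = min(feat_s.size(0), feat_t.size(0)) // 2 * 2   # use an even number of samples
    xs1, xs2 = feat_s[0:m:2], feat_s[1:m:2]
    xt1, xt2 = feat_t[0:m:2], feat_t[1:m:2]
    h = (multi_kernel(xs1, xs2) + multi_kernel(xt1, xt2)
         - multi_kernel(xs1, xt2) - multi_kernel(xs2, xt1))
    return h.mean()

if __name__ == "__main__":
    src = torch.randn(64, 128)
    tgt = torch.randn(64, 128) + 0.5          # shifted target features
    print(mk_mmd2_linear(src, tgt).item())    # larger than for identical distributions
```

Because every operation above is differentiable, the estimate can be added directly to the search objective and back-propagated to the architecture parameters α.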
NAS for Domain Adaptation Denote by L^s_train and L^s_val the training loss and validation loss on the source domain, respectively. Both losses are affected by the architecture α as well as by the weights w in the network. The goal of NASDA is to find α^* that minimizes the validation loss L^t_val(w^*, α^*) on the target domain, where the weights w^* associated with the architecture are obtained by minimizing the training loss w^* = argmin_w L^s_train(w, α^*). Due to the lack of labels in the target domain, L^t_val cannot be computed directly, which breaks the assumption of previous gradient-based NAS algorithms (Liu et al., 2019a; Chen et al., 2019). Instead, we derive α^* by minimizing the validation loss L^s_val(w^*, α^*) on the source domain plus the domain discrepancy disc(Φ(x^s), Φ(x^t)), as shown in Equation 1.
Inspired by gradient-based hyperparameter optimization (Franceschi et al., 2018; Pedregosa, 2016; Maclaurin et al., 2015), we treat the architecture parameters α as a special type of hyperparameter. This implies a bilevel optimization problem (Anandalingam & Friesz, 1992) with α as the upper-level variable and w as the lower-level variable. In practice, we utilize the MK-MMD to evaluate the domain discrepancy. The optimization can be summarized as follows:
\Phi_{\alpha,w} = \arg\min_\alpha \Big( \mathcal{L}^s_{val}(w^*(\alpha), \alpha) + \lambda\, \hat{d}_k^2\big(\Phi(x^s), \Phi(x^t)\big) \Big)   (7)
\text{s.t.}\quad w^*(\alpha) = \arg\min_w \mathcal{L}^s_{train}(w, \alpha)   (8)
where λ is the trade-off hyperparameter between the source validation loss and the MK-MMD loss.
Approximate Architecture Search Equations 7 and 8 imply that directly optimizing the architecture gradient is prohibitive due to the expensive inner optimization. Inspired by DARTS (Liu et al., 2019a), we approximate w^*(α) by adapting w with only a single training step, without solving the optimization in Equation 8 by training until convergence. This idea has been adopted and proven effective in meta-learning for model transfer (Finn et al., 2017), gradient-based hyperparameter tuning (Luketina et al., 2016), and unrolled generative adversarial networks. We therefore propose a simple approximation scheme as follows:
\frac{\partial}{\partial \alpha}\Big( \mathcal{L}^s_{val}(w^*(\alpha), \alpha) + \lambda\, \hat{d}_k^2\big(\Phi(x^s), \Phi(x^t)\big) \Big) \approx \frac{\partial}{\partial \alpha} \mathcal{L}^s_{val}\Big( w - \xi \frac{\partial}{\partial w}\mathcal{L}^s_{train}(w, \alpha),\ \alpha \Big) + \lambda \frac{\partial}{\partial \alpha} \hat{d}_k^2\big(\Phi(x^s), \Phi(x^t)\big)   (9)

where w − ξ ∂L^s_train(w, α)/∂w denotes the weights after a single step of the inner optimization and ξ is the learning rate for that step. Note that Equation 9 reduces to ∇_α L^s_val(w, α) if w is already a local optimum of the inner optimization, so that ∇_w L^s_train(w, α) = 0. A sketch of this one-step architecture update is given below.
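One way to realize the approximation in Equation 9 is to unroll a single SGD step on the weights and differentiate the resulting source validation loss plus the MK-MMD penalty with respect to α via autograd. The sketch below assumes a recent PyTorch with torch.func; the helper names (feat_fn, mmd_fn) and the way the architecture parameters are passed in are illustrative assumptions rather than the paper's actual implementation.

```python
import torch
from torch.func import functional_call   # available in PyTorch >= 2.0

def arch_step(model, alphas, alpha_opt, src_train, src_val, tgt_x,
              feat_fn, mmd_fn, xi=0.01, lam=1.0):
    """One update of the architecture parameters alpha following Eq. 9.

    feat_fn(model, x) returns the features fed to the MMD penalty and
    mmd_fn(fs, ft) is any differentiable MMD estimate (e.g. mk_mmd2_linear above).
    """
    (xs, ys), (xv, yv) = src_train, src_val
    ce = torch.nn.functional.cross_entropy
    alpha_ids = {id(a) for a in alphas}
    named_w = [(n, p) for n, p in model.named_parameters() if id(p) not in alpha_ids]

    # Virtual inner step: w' = w - xi * dL_train^s/dw, keeping the graph so that
    # gradients can flow back to alpha through w'.
    train_loss = ce(model(xs), ys)
    grads = torch.autograd.grad(train_loss, [p for _, p in named_w],
                                create_graph=True, allow_unused=True)
    w_prime = {n: p - xi * g if g is not None else p
               for (n, p), g in zip(named_w, grads)}

    # Outer objective: source validation loss at w' plus the MK-MMD penalty.
    val_loss = ce(functional_call(model, w_prime, (xv,)), yv)
    penalty = mmd_fn(feat_fn(model, xs), feat_fn(model, tgt_x))

    alpha_opt.zero_grad()
    (val_loss + lam * penalty).backward()   # only alpha_opt is stepped here
    alpha_opt.step()
```

In Phase I this step alternates with an ordinary SGD step on the weights w using L^s_train, as summarized in Algorithm 1.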
The second term of Equation 9 can be computed directly with a few forward and backward passes. For the first term, applying the chain rule to the approximate architecture gradient yields

\frac{\partial}{\partial \alpha} \mathcal{L}^s_{val}(w', \alpha) - \xi \left( \frac{\partial^2}{\partial \alpha\, \partial w} \mathcal{L}^s_{train}(w, \alpha) \cdot \frac{\partial}{\partial w'} \mathcal{L}^s_{val}(w', \alpha) \right)   (10)

where w' = w − ξ ∂L^s_train(w, α)/∂w. The expression above contains an expensive matrix-vector product in its second term. We leverage a central-difference approximation to reduce the computational complexity. Specifically, let η be a small scalar and w^± = w ± η ∂L^s_val(w', α)/∂w'. Then:
\frac{\partial^2}{\partial \alpha\, \partial w} \mathcal{L}^s_{train}(w, \alpha) \cdot \frac{\partial}{\partial w'} \mathcal{L}^s_{val}(w', \alpha) \approx \frac{ \frac{\partial}{\partial \alpha}\mathcal{L}^s_{train}(w^+, \alpha) - \frac{\partial}{\partial \alpha}\mathcal{L}^s_{train}(w^-, \alpha) }{2\eta}   (11)
Evaluating the central difference only requires two forward passes for the weights and two backward passes for α, reducing the complexity from quadratic to linear.
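A possible implementation of the central-difference term in Equation 11 is sketched below, in the style of the original DARTS code. It perturbs the weights in place by ±η along the vector ∂L^s_val(w', α)/∂w', evaluates the training-loss gradient with respect to α at both points, and restores the weights afterwards; the argument names and the loss closure are placeholders, not the paper's actual code.

```python
import torch

def hessian_vector_product(weights, alphas, train_loss_fn, vec, eta=1e-2):
    """Central-difference estimate of (d^2 L_train / d alpha d w) . vec  (Eq. 11).

    weights: list of weight tensors w; alphas: list of architecture parameters;
    train_loss_fn(): closure returning L_train^s(w, alpha) on a source batch;
    vec: list of tensors matching `weights`, here dL_val^s(w', alpha)/dw'.
    """
    with torch.no_grad():                                  # w+ = w + eta * vec
        for w, v in zip(weights, vec):
            w.add_(eta * v)
    grad_pos = torch.autograd.grad(train_loss_fn(), alphas)

    with torch.no_grad():                                  # w- = w - eta * vec
        for w, v in zip(weights, vec):
            w.sub_(2 * eta * v)
    grad_neg = torch.autograd.grad(train_loss_fn(), alphas)

    with torch.no_grad():                                  # restore the original w
        for w, v in zip(weights, vec):
            w.add_(eta * v)

    return [(gp - gn) / (2 * eta) for gp, gn in zip(grad_pos, grad_neg)]
```

Each call costs only two extra forward/backward passes, which is what reduces the complexity of the architecture gradient from quadratic to linear in practice.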
3.3 ADVERSARIAL TRAINING FOR DOMAIN ADAPTATION
Through the neural architecture search of Section 3.2, we have derived the optimal cell structure (α^*) for domain adaptation. We then stack the cells to build our feature generator G. In this section, we describe how we consolidate G through adversarial training of G and a batch of classifiers C. Assume C includes N independent classifiers {C^(i)}_{i=1}^N, and denote by p_i(y|x) the K-way probabilistic output of C^(i), where K is the number of categories.
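A minimal way to set up the generator and the classifier bank is sketched below; the classifier head (a single linear layer) and the feature dimension are assumptions, since the paper does not specify the exact head architecture.

```python
import torch.nn as nn

def build_phase2_modules(generator, feat_dim, num_classes, num_classifiers=3):
    """Return the searched feature generator G and N independent classifiers C.

    `generator` is the network stacked from the searched cells (alpha*); each
    classifier is an independently initialized head so they can be diversified.
    """
    classifiers = nn.ModuleList([
        nn.Sequential(nn.ReLU(), nn.Linear(feat_dim, num_classes))
        for _ in range(num_classifiers)
    ])
    return generator, classifiers
```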
The high-level intuition is to consolidate the feature generator G such that it makes the diversified classifiers C produce similar outputs. To this end, our training process includes three steps: (1) train G and C on D_s to obtain task-specific features, (2) fix G and train C so that {C^(i)}_{i=1}^N produce diversified outputs, and (3) fix C and train G to minimize the output discrepancy between the classifiers. Related techniques have been used in Saito et al. (2018); Kumar et al. (2018).
First, we train both G and C to classify the source samples correctly with a cross-entropy loss. This step is crucial as it enables G and C to extract task-specific features. The training objective is min_{G,C} L_s(x^s, y^s), and the loss function is defined as follows:

\mathcal{L}_s(x^s, y^s) = -\mathbb{E}_{(x^s, y^s) \sim \mathcal{D}_s} \sum_{k=1}^{K} \mathbb{1}_{[k = y^s]} \log p(y^s \mid x^s)   (12)
In the second step, we aim to diversify C. To this end, we fix G and train C to increase the discrepancy among the classifiers' outputs. To avoid mode collapse (e.g., C^(1) outputs all zeros and C^(2) outputs all ones), we add L_s(x^s, y^s) as a regularizer during training; the intuition is that we do not want C to forget the information learned in the first step. The training objective is min_C L_s(x^s, y^s) − L_adv(x^t), where the adversarial loss is defined as:

\mathcal{L}_{adv}(x^t) = \mathbb{E}_{x^t \sim \mathcal{D}_t} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \big\| p_i(y \mid x^t) - p_j(y \mid x^t) \big\|_1   (13)
In the last step, we consolidate the feature generator G by training it to extract generalizable representations such that the discrepancy among the classifiers' outputs is minimized. To achieve this, we fix the diversified classifiers C and train G with the adversarial loss defined in Equation 13; the training objective is min_G L_adv(x^t). A minimal sketch of this three-step loop is given below.
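The sketch below shows one way the three-step adversarial loop could be organized in PyTorch. The optimizers, the single-pass scheduling (one pass over steps one to three per batch), and the assumption that the target loader yields unlabeled batches are choices made for illustration; the pairwise L1 discrepancy follows Equation 13.

```python
import torch
import torch.nn.functional as F

def adv_discrepancy(probs):
    """Sum of pairwise L1 distances between classifier outputs (Eq. 13)."""
    loss = 0.0
    for i in range(len(probs) - 1):
        for j in range(i + 1, len(probs)):
            loss = loss + (probs[i] - probs[j]).abs().sum(dim=1).mean()
    return loss

def train_phase2_epoch(G, classifiers, opt_g, opt_c, src_loader, tgt_loader):
    """One epoch of the three-step adversarial training of Section 3.3."""
    for (xs, ys), xt in zip(src_loader, tgt_loader):
        # Step one: task-specific training of G and C on the source domain.
        opt_g.zero_grad(); opt_c.zero_grad()
        loss_src = sum(F.cross_entropy(C(G(xs)), ys) for C in classifiers)
        loss_src.backward(); opt_g.step(); opt_c.step()

        # Step two: fix G, diversify C by maximizing their disagreement on the
        # target, with the source loss kept as a regularizer.
        opt_c.zero_grad()
        feat_s, feat_t = G(xs).detach(), G(xt).detach()
        probs_t = [F.softmax(C(feat_t), dim=1) for C in classifiers]
        loss_c = sum(F.cross_entropy(C(feat_s), ys) for C in classifiers) \
                 - adv_discrepancy(probs_t)
        loss_c.backward(); opt_c.step()

        # Step three: fix C, train G so that the diversified classifiers agree.
        opt_g.zero_grad()
        probs_t = [F.softmax(C(G(xt)), dim=1) for C in classifiers]
        adv_discrepancy(probs_t).backward(); opt_g.step()
```

In practice, steps two and three can also be repeated several times per batch, as in MCD (Saito et al., 2018); the schedule above keeps the sketch minimal.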
4 EXPERIMENTS
We compare the proposed NASDA model with many state-of-the-art UDA baselines on multiple benchmarks. In the main paper, we only report the major results; more details are provided in the supplementary material. All of our experiments are implemented in PyTorch.
In the architecture search phase, we use λ = 1 for all searching experiments. We use the ReLU-Conv-BN order for convolutional operations, and each separable convolution is always applied twice. Our search space O includes the following operations: 3 × 3 and 5 × 5 separable convolutions, 3 × 3 and 5 × 5 dilated separable convolutions, 3 × 3 max pooling, identity, and zero. Our convolutional cell consists of N = 7 nodes. Cells located at 1/3 and 2/3 of the total depth of the network are reduction cells. The architecture encoding is therefore (α_normal, α_reduce), where α_normal is shared by all the normal cells and α_reduce is shared by all the reduction cells. A sketch of this candidate operation set is given below.
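For reference, the candidate operation set described above could be declared as follows. The separable-convolution block here is a simplified single depthwise-plus-pointwise pair rather than the paper's exact ReLU-Conv-BN module applied twice, and the channel handling is an assumption.

```python
import torch.nn as nn

class Zero(nn.Module):
    """The 'zero' operation: the edge contributes nothing."""
    def forward(self, x):
        return x * 0.0

def sep_conv(c, k, dilation=1):
    """Simplified separable convolution in ReLU-Conv-BN order."""
    pad = dilation * (k - 1) // 2
    return nn.Sequential(
        nn.ReLU(inplace=False),
        nn.Conv2d(c, c, k, padding=pad, dilation=dilation, groups=c, bias=False),
        nn.Conv2d(c, c, 1, bias=False),
        nn.BatchNorm2d(c),
    )

def candidate_ops(c):
    """Search space O from Section 4, keyed by operation name."""
    return nn.ModuleDict({
        "sep_conv_3x3": sep_conv(c, 3),
        "sep_conv_5x5": sep_conv(c, 5),
        "dil_conv_3x3": sep_conv(c, 3, dilation=2),
        "dil_conv_5x5": sep_conv(c, 5, dilation=2),
        "max_pool_3x3": nn.MaxPool2d(3, stride=1, padding=1),
        "skip_connect": nn.Identity(),
        "none": Zero(),
    })
```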
4.1 SETUP
Digits We investigate three digit datasets: MNIST, USPS, and Street View House Numbers (SVHN). We adopt the evaluation protocol of CyCADA (Hoffman et al., 2018) with three transfer tasks: USPS to MNIST (U→M), MNIST to USPS (M→U), and SVHN to MNIST (S→M). We train our model using the training sets: MNIST (60,000), USPS (7,291), and the standard SVHN training set (73,257).
STL→CIFAR10 CIFAR10 (Krizhevsky et al., 2009) and STL (Coates et al., 2011) are both 10-class image datasets that share nine overlapping classes. We remove the 'frog' class from CIFAR10 and the 'monkey' class from STL, as each has no equivalent in the other dataset, resulting in a 9-class problem. The STL images are down-scaled to 32×32 resolution to match that of CIFAR10.
SYN SIGNS→GTSRB We evaluate adaptation from the synthetic traffic sign dataset SYN SIGNS (Moiseev et al., 2013) to the real-world sign dataset GTSRB (Stallkamp et al., 2011). Both datasets contain 43 classes.
We compare our NASDA model with state-of-the-art DA methods: Deep Adaptation Network (DAN) (Long et al., 2015), Domain Adversarial Neural Network (DANN) (Ganin & Lempitsky, 2015), Domain Separation Network (DSN) (Bousmalis et al., 2016), Coupled Generative Adversarial Networks (CoGAN) (Liu & Tuzel, 2016), Maximum Classifier Discrepancy (MCD) (Saito et al., 2018), Generate to Adapt (G2A) (Sankaranarayanan et al., 2018), Stochastic Neighborhood Embedding (d-SNE) (Xu et al., 2019b), Associative Domain Adaptation (ASSOC) (Haeusser et al., 2017).
4.2 EMPIRICAL RESULTS
Neural Architecture Search Results We show the neural architecture search results in Figure 2. We observe that our model contains more "avg_pool" and "5x5_conv" layers than other NAS models. This makes our model more generic, as average pooling smooths the feature maps and the model is therefore not dominated by sharp, domain-specific features. We also show that our NASDA model contains fewer parameters and takes less time to converge than state-of-the-art NAS architectures. Another interesting finding is that our NASDA model contains more sequential connections in both the normal and reduction cells when trained on MNIST→USPS.
Unsupervised Domain Adaptation Results The UDA results for Digits and SYN SIGNS→GTSRB are reported in Table 2, with baseline results taken directly from the original papers when the protocol is the same (numbers with ∗ indicate training on partial data). The NASDA model achieves a 98.4% average accuracy on the Digits datasets, outperforming the other baselines. For the SYN SIGNS→GTSRB task, our model achieves results comparable to state-of-the-art baselines. These results demonstrate the effectiveness of our NASDA model on small images.
The UDA results on the STL→CIFAR10 recognition task are reported in Table 1. Our model achieves an accuracy of 76.8%, outperforming all the baselines. To compare our searched neural architecture with previous NAS models, we replace the architecture of G with other NAS models while keeping all other settings of the second phase identical to ours. This yields the NASNet+Phase II (Zoph et al., 2018), AmoebaNet+Phase II (Shah et al., 2018), DARTS+Phase II (Liu et al., 2019a), and PDARTS+Phase II (Chen et al., 2019) models. The results in Table 1 show that our model outperforms the other NAS-based models by a large margin, which demonstrates the effectiveness of our model for unsupervised domain adaptation. In particular, we treat PDARTS+Phase II as an ablation that isolates the effectiveness of our task-specific design in learning domain-adaptation-aware features.
Analysis To look more closely at the training process of our NASDA model, we plot the t-SNE embedding of the weights of C for USPS→MNIST in Figure 3. This is achieved by recording the weights of all the classifiers at each epoch. The black dot indicates epoch zero, the common starting point, and the color from light to dark corresponds to increasing epoch number. The t-SNE plots clearly show that the classifiers diverge from each other, demonstrating the effectiveness of the second step of our NASDA training described in Section 3.3.
5 CONCLUSION
In this paper, we formulate a novel dual-objective task of Neural Architecture Search for Domain Adaptation (NASDA) to invigorate the design of transfer-aware network architectures. To tackle the NASDA task, we propose a learning framework that leverages MK-MMD to guide the neural architecture search process. Instead of aligning features extracted by existing hand-crafted backbones, our model directly searches for the optimal neural architecture for domain adaptation. Furthermore, we introduce a way to consolidate the feature generator, which is stacked from the searched architecture, in order to boost UDA performance. Extensive empirical evaluations on UDA benchmarks demonstrate the efficacy of the proposed model against several state-of-the-art domain adaptation algorithms.
1. What is the main contribution of the paper regarding unsupervised domain adaptation?
2. What are the strengths of the proposed approach, particularly in combining a modified DARTS objective with an adversarial objective?
3. What are the concerns or questions raised by the reviewer regarding the experimental setup and results?
4. How does the reviewer assess the novelty and independent effectiveness of Phase-2 and the regularization term?
5. Are there any suggestions or recommendations provided by the reviewer for improving the work, such as extending PDARTS with the MK-MMD regularization term or detailing the experimental setup with different NAS algorithms? | Review | Review
This work devises a two-step process for searching optimal models for unsupervised domain adaptation. The first step involves a modification of DARTS in which a discrepancy term between source- and target-domain features is added to the negative reward. The obtained feature transformer is then re-trained with an adversarial objective in order to ensure that it performs well across multiple classifiers.
The work addresses an interesting problem of automatically finding suitable architectures that transfer to unlabelled datasets. The novelty lies in the formulation of a protocol that combines a modified DARTS objective with a post-processing step to obtain a suitable feature generator. The experiment section is well detailed.
I have the following concerns with respect to the current state of this work:
It would be useful to get insights on the number of classifiers used in Phase-2 for current experiments along with the influence of this hyperparameter on the overall performance.
Is it possible to extend PDARTS with the MK-MMD regularisation term? Do authors have any insights in this direction?
Establishing the independent effectiveness of Phase-2 and regularisation term will add value to this work. These would also serve as relevant baselines.
If possible, could the authors detail the experimental setup w.r.t. the different NAS algorithms, as it has been shown that the use of simple data augmentation techniques can often lead to significant changes in performance.
Similarly, what part of the network was used as the feature generator? Does the performance vary with the number of layers used for G? Since the introduction of the MK-MMD term led to models with reduced parameters as per the table in Figure 2 (c), it would be interesting to see the effect of the size of the task-specific and feature-specific components on the performance. Or does the MK-MMD term incentivise the use of parameter-free operators in the network, as recently noted in Chen et al. 2020 (https://arxiv.org/pdf/2002.05283.pdf)?
Minor remark: I think the paper can benefit from one thorough proofread for grammatical corrections and notational consistencies. For instance, in the line above Section 4, x_{t} should be changed to x^{t}, and \mathcal{L} in the line following eq(10) should include the superscript "s". Similarly, if the convention of underlined numbers in tables refers to second-best models, then it should be made consistent across all tables.
1. What are the strengths and weaknesses of the paper regarding its contributions to unsupervised domain adaptation?
2. How does the reviewer assess the novelty of the two components introduced in the paper?
3. What is the recommendation of the reviewer regarding the acceptance or rejection of the paper, and why?
4. What evidence or proof does the reviewer suggest the authors provide to support their claims?
5. How does the reviewer propose the authors improve the comparison of the proposed method with other works in the field?
6. Are there any inaccuracies in the paper's descriptions that the reviewer noticed? If so, what are they? | Review | Review
Summarize what the paper claims to contribute.
This work introduces a two-step procedure for unsupervised domain adaptation: (1) neural architecture search for domain adaptation (NASDA), based on DARTS for architecture search and MK-MMD for differentiable domain alignment, and (2) adversarial training with a batch of classifiers. Based on the architecture induced by the first component, it demonstrates unsupervised domain adaptation supported by the second component.
List strong and weak points of the paper.
strengths
It is the first paper to adopt neural architecture search on unsupervised domain adaptation. The proposed combination, which is DARTS and MK-MMD, discovers a relevant neural architecture for unsupervised domain adaptation as shown in the experiment section.
weaknesses
The novelty of both components is limited. The novelty of the first component is not explained well in the paper: it seems to be a simple combination of DARTS with an additional loss function, MK-MMD, and I can't find any new modification to DARTS with MK-MMD in the paper. All descriptions in <section 3.2 searching neural architecture> are borrowed from the previous works, DARTS and MK-MMD. The novelty of the second component is not related to neural architecture search for domain adaptation but rather to multiple discriminators for generative adversarial networks. In this respect, an ablation study on the second component is not conducted.
Clearly state your recommendation (accept or reject) with one or two key reasons for this choice
I give “Ok but not good enough - rejection (4)” to this paper. Although the proposed method deals with new and fancy stuff, the novelty of both proposed components is limited.
I believe that the authors should give more evidence to support the claims. If the authors want to argue that the first component, adopting MK-MMD on DARTS, is a good option, they should prove that NASDA is effective for searching a transferable architecture by (1) giving a theoretical proof or (2) showing good performance on various practical benchmarks for domain adaptation. For example, on argument (1), showing that NASDA gives a tighter lower bound compared to a naive DA counterpart; on argument (2), showing that NASDA performs well on several benchmarks such as Office-31, ImageCLEF, and VisDA, or on a multi-source domain benchmark.
If the authors want to prove the effectiveness of the second component, they should compare the proposed method to (1) a single discriminator and (2) other multiple-discriminator schemes. But even this comparison is not related to neural architecture search.
Provide additional feedback with the aim to improve the paper.
Inaccurate descriptions on the paper
The description of [conventional NAS ~ training and validation loss, respectively] in the second paragraph of the introduction section is that of DARTS (Liu et al., 2019), not generic NAS. For example, the work of Zoph & Le (2017) generates a neural architecture to maximize the expected accuracy of the generated architecture on the validation set.
ICLR | Title
Network Architecture Search for Domain Adaptation
Abstract
Deep networks have been used to learn transferable representations for domain adaptation. Existing deep domain adaptation methods systematically employ popular hand-crafted networks designed specifically for image-classification tasks, leading to sub-optimal domain adaptation performance. In this paper, we present Neural Architecture Search for Domain Adaptation (NASDA), a principle framework that leverages differentiable neural architecture search to derive the optimal network architecture for domain adaptation task. NASDA is designed with two novel training strategies: neural architecture search with multi-kernel Maximum Mean Discrepancy to derive the optimal architecture, and adversarial training between a feature generator and a batch of classifiers to consolidate the feature generator. We demonstrate experimentally that NASDA leads to state-of-the-art performance on several domain adaptation benchmarks.
1 INTRODUCTION
Supervised machine learning models (Φ) aim to minimize the empirical test error ( (Φ(x),y)) by optimizing Φ on training data (x) and ground truth labels (y), assuming that the training and testing data are sampled i.i.d from the same distribution. While in practical, the training and testing data are typically collected from related domains under different distributions, a phenomenon known as domain shift (or domain discrepancy) (Quionero-Candela et al., 2009). To avoid the cost of annotating each new test data, Unsupervised Domain Adaptation (UDA) tackles domain shift by transferring the knowledge learned from a rich-labeled source domain (P (xs,ys)) to the unlabeled target domain (Q(xt)). Recently unsupervised domain adaptation research has achieved significant progress with techniques like discrepancy alignment (Long et al., 2017; Tzeng et al., 2014; Ghifary et al., 2014; Peng & Saenko, 2018; Long et al., 2015; Sun & Saenko, 2016), adversarial alignment (Xu et al., 2019a; Liu & Tuzel, 2016; Tzeng et al., 2017; Liu et al., 2018a; Ganin & Lempitsky, 2015; Saito et al., 2018; Long et al., 2018), and reconstruction-based alignment (Yi et al., 2017; Zhu et al., 2017; Hoffman et al., 2018; Kim et al., 2017). While such models typically learn feature mapping from one domain (Φ(xs)) to another (Φ(xt)) or derive a joint representation across domains (Φ(xs)⊗ Φ(xt)), the developed models have limited capacities in deriving an optimal neural architecture specific for domain transfer.
To advance network designs, neural architecture search (NAS) automates the net architecture engineering process by reinforcement supervision (Zoph & Le, 2017) or through neuro-evlolution (Real et al., 2019a). Conventional NAS models aim to derive neural architecture α along with the network parameters w, by solving a bilevel optimization problem (Anandalingam & Friesz, 1992): Φα,w = arg minα Lval(w∗(α), α) s.t. w∗(α) = argminwLtrain(w,α), where Ltrain and Lval indicate the training and validation loss, respectively. While recent works demonstrate competitive performance on tasks such as image classification (Zoph et al., 2018; Liu et al., 2018c;b; Real et al., 2019b) and object detection (Zoph & Le, 2017), designs of existing NAS algorithms typically assume that the training and testing domain are sampled from the same distribution, neglecting the scenario where two data domains or multiple feature distributions are of interest.
To efficiently devise a neural architecture across different data domains, we propose a novel learning task called Neural Architecture Search for Domain Adaptation (NASDA). The ultimate goal of NASDA is to minimize the validation loss of the target domain (Ltval). We postulate that a solution to NASDA should not only minimize validation loss of the source domain (Lsval), but should also
reduce the domain gap between the source and target. To this end, we propose a new NAS learning schema:
Φα,w = argminαLsval(w∗(α), α) + disc(Φ∗(xs),Φ∗(xt)) (1) s.t. w∗(α) = argminw Lstrain(w,α) (2)
where Φ∗ = Φα,w∗(α), and disc(Φ∗(xs),Φ∗(xt)) denotes the domain discrepancy between the source and target. Note that in unsupervised domain adaptation, Lttrain and Ltval cannot be computed directly due to the lack of label in the target domain.
Inspired by the past works in NAS and unsupervised domain adaptation, we propose in this paper an instantiated NASDA model, which comprises of two training phases, as shown in Figure 1. The first is the neural architecture searching phase, aiming to derive an optimal neural architecture (α∗), following the learning schema of Equation 1,2. Inspired by Differentiable ARchiTecture Search (DARTS) (Liu et al., 2019a), we relax the search space to be continuous so that α can be optimized with respect to Lsval and disc(Φ(xs),Φ(xt)) by gradient descent. Specifically, we enhance the feature transferability by embedding the hidden representations of the task-specific layers to a reproducing kernel Hilbert space where the mean embeddings can be explicitly matched by minimizing disc(Φ(xs),Φ(xt)). We use multi-kernel Maximum Mean Discrepancy (MK-MMD) (Gretton et al., 2007) to evaluate the domain discrepancy.
The second training phase aims to learn a good feature generator with task-specific loss, based on the derived α∗ from the first phase. To establish this goal, we use the derived deep neural network (Φα∗ ) as the feature generator (G) and devise an adversarial training process between G and a batch of classifiers C. The high-level intuition is to first diversify C in the training process, and train G to generate features such that the diversified C can have similar outputs. The training process is similar to Maximum Classifier Discrepancy framework (MCD) (Saito et al., 2018) except that we extend the dual-classifier in MCD to an ensembling of multiple classifiers. Experiments on standard UDA benchmarks demonstrate the effectiveness of our derived NASDA model in achieving significant improvements over state-of-the-art methods.
Our contributions of this paper are highlighted as follows:
• We formulate a novel dual-objective task of Neural Architecture Search for Domain Adaptation (NASDA), which optimize neural architecture for unsupervised domain adaptation, concerning both source performance objective and transfer learning objective.
• We propose an instantiated NASDA model that comprises two training stages, aiming to derive the optimal architecture parameters α∗ and the feature extractor G, respectively. We are the first to show the effectiveness of MK-MMD in a NAS process designed specifically for domain adaptation.
• Extensive experiments on multiple cross-domain recognition tasks demonstrate that NASDA achieves significant improvements over traditional unsupervised domain adaptation models as well as state-of-the-art NAS-based methods.
2 RELATED WORK
Deep convolutional neural networks have been dominating image recognition tasks. In recent years, many handcrafted architectures have been proposed, including VGG (Simonyan & Zisserman, 2014), ResNet (He et al., 2016), Inception (Szegedy et al., 2015), etc., all of which verify the importance of human expertise in network design. Our work bridges domain adaptation and the emerging field of neural architecture search (NAS), the process of automating the architecture engineering technique.
Neural Architecture Search Neural Architecture Search has become the mainstream approach to discover efficient and powerful network structures (Zoph & Le, 2017; Zoph et al., 2018). The automatically searched architectures have achieved highly competitive performance in tasks such as image classification (Liu et al., 2018c;b), object detection (Zoph et al., 2018), and semantic segmentation (Chen et al., 2018). Reinforcement-learning-based NAS methods (Zoph & Le, 2017; Tan et al., 2019; Tan & Le, 2019) are usually computationally intensive, which hampers their usage under a limited computational budget. To accelerate the search procedure, many techniques have been proposed, and they mainly follow four directions: (1) estimating the actual performance with lower fidelities. Such lower fidelities include shorter training times (Zoph et al., 2018; Zela et al., 2018), training on a subset of the data (Klein et al., 2017), or on lower-resolution images. (2) estimating the performance based on learning curve extrapolation. Domhan et al. (2015) propose to extrapolate initial learning curves and terminate those predicted to perform poorly. (3) initializing the novel architectures based on other well-trained architectures. Wei et al. (2016) introduce network morphisms to modify an architecture without changing the network objects, resulting in methods that only require a few GPU days (Elsken et al., 2017; Cai et al., 2018a; Jin et al., 2019; Cai et al., 2018b). (4) one-shot architecture search. One-shot NAS treats all architectures as different subgraphs of a supergraph and shares weights between architectures that have edges of this supergraph in common (Saxena & Verbeek, 2016; Liu et al., 2019b; Bender, 2018). DARTS (Liu et al., 2019a) places a mixture of candidate operations on each edge of the one-shot model and optimizes the weights of the candidate operations with a continuous relaxation of the search space. Inspired by DARTS (Liu et al., 2019a), our model employs differentiable architecture search to derive the optimal feature extractor for unsupervised domain adaptation.
Domain Adaptation Unsupervised domain adaptation (UDA) aims to transfer the knowledge learned from one or more labeled source domains to an unlabeled target domain. Various methods have been proposed, including discrepancy-based UDA approaches (Long et al., 2017; Tzeng et al., 2014; Ghifary et al., 2014; Peng & Saenko, 2018), adversary-based approaches (Liu & Tuzel, 2016; Tzeng et al., 2017; Liu et al., 2018a), and reconstruction-based approaches (Yi et al., 2017; Zhu et al., 2017; Hoffman et al., 2018; Kim et al., 2017). These models are typically designed to tackle single-source to single-target adaptation. Compared with single-source adaptation, multi-source domain adaptation (MSDA) assumes that training data are collected from multiple sources. Originating from the theoretical analysis in (Ben-David et al., 2010; Mansour et al., 2009; Crammer et al., 2008), MSDA has been applied to many practical applications (Xu et al., 2018; Duan et al., 2012; Peng et al., 2019). Specifically, Ben-David et al. (2010) introduce an H∆H-divergence between the weighted combination of source domains and a target domain. These models are developed using existing hand-crafted network architectures. This property limits the capacity and versatility of domain adaptation, as the backbones used to extract the features are fixed. In contrast, we tackle UDA from a different perspective, not yet considered in the UDA literature. We propose a novel dual-objective model, NASDA, which optimizes the neural architecture for unsupervised domain adaptation. We are the first to show the effectiveness of MK-MMD in a NAS process designed specifically for domain adaptation.
3 NEURAL ARCHITECTURE SEARCH FOR DOMAIN ADAPTATION
In unsupervised domain adaptation, we are given a source domain $\mathcal{D}_{s} = \{(x^{s}_{i}, y^{s}_{i})\}_{i=1}^{n_{s}}$ of $n_{s}$ labeled examples and a target domain $\mathcal{D}_{t} = \{x^{t}_{j}\}_{j=1}^{n_{t}}$ of $n_{t}$ unlabeled examples. The source domain and target domain are sampled from joint distributions $P(x^{s}, y^{s})$ and $Q(x^{t}, y^{t})$, respectively. The goal of this paper is to leverage NAS to derive a deep network $G: x \mapsto y$, which is optimal for reducing the shifts in data distributions across domains, such that the target risk $\epsilon_{t}(G) = \mathbb{E}_{(x^{t}, y^{t}) \sim Q}\big[G(x^{t}) \neq y^{t}\big]$ is minimized. We will start by introducing some preliminary background in Section 3.1. We then describe how to incorporate the MK-MMD into the neural architecture searching framework in
Section 3.2. Finally, we introduce the adversarial training between our derived deep network and a batch of classifiers in Section 3.3. An overview of our model can be seen in Algorithm 1.
3.1 PRELIMINARY: DARTS
In this work, we leverage DARTS (Liu et al., 2019a) as our baseline framework. Our goal is to search for a robust cell and apply it to a network that is optimal for achieving domain alignment between $\mathcal{D}_{s}$ and $\mathcal{D}_{t}$. Following Zoph et al. (2018), we search for a computation cell as the building block of the final architecture. The final convolutional network for domain adaptation can be stacked from the learned cell. A cell is defined as a directed acyclic graph (DAG) of $N$ nodes $\{x^{(i)}\}_{i=1}^{N}$, where each node $x^{(i)}$ is a latent representation and each directed edge $e^{(i,j)}$ is associated with some operation $o^{(i,j)}$ that transforms $x^{(i)}$. DARTS (Liu et al., 2019a) assumes that cells contain two input nodes and a single output node. To make the search space continuous, DARTS relaxes the categorical choice of a particular operation to a softmax over all possible operations and is thus formulated as:
$$\bar{o}^{(i,j)}(x) = \sum_{o \in \mathcal{O}} \frac{\exp(\alpha^{(i,j)}_{o})}{\sum_{o' \in \mathcal{O}} \exp(\alpha^{(i,j)}_{o'})}\, o(x) \qquad (3)$$
where $\mathcal{O}$ denotes the set of candidate operations and $i < j$ so that skip-connect can be applied. An intermediate node can be represented as $x^{(j)} = \sum_{i<j} o^{(i,j)}(x^{(i)})$. The task of architecture search then reduces to learning a set of continuous variables $\alpha = \{\alpha^{(i,j)}\}$. At the end of search, a discrete architecture can be obtained by replacing each mixed operation $\bar{o}^{(i,j)}$ with the most likely operation, i.e., $o^{*(i,j)} = \arg\max_{o \in \mathcal{O}} \alpha^{(i,j)}_{o}$ and $\alpha^{*} = \{o^{*(i,j)}\}$.
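To make the continuous relaxation of Equation 3 concrete, the following is a minimal PyTorch sketch of a mixed operation that returns the softmax-weighted sum of candidate operations on one edge. The candidate set, module names, and tensor shapes here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical candidate operation set O for a single edge; the paper's actual
# search space (separable/dilated convolutions, pooling, etc.) is listed in Section 4.
OPS = {
    'skip_connect': lambda C: nn.Identity(),
    'max_pool_3x3': lambda C: nn.MaxPool2d(3, stride=1, padding=1),
    'conv_3x3': lambda C: nn.Conv2d(C, C, 3, padding=1, bias=False),
}

class MixedOp(nn.Module):
    """Continuous relaxation of one edge: o_bar(x) = sum_o softmax(alpha)_o * o(x)."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([build(channels) for build in OPS.values()])

    def forward(self, x, alpha_edge):
        # alpha_edge: tensor of shape (|O|,) holding the architecture logits for this edge
        weights = F.softmax(alpha_edge, dim=-1)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# At the end of search, the discrete choice on this edge is the argmax over the logits:
# o_star = list(OPS.keys())[alpha_edge.argmax().item()]
```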
3.2 SEARCHING NEURAL ARCHITECTURE
Denote by $\mathcal{L}_{train}$ and $\mathcal{L}_{val}$ the training loss and validation loss, respectively. Conventional neural architecture search models aim to derive $\Phi_{\alpha,w}$ by solving a bilevel optimization problem (Anandalingam & Friesz, 1992): $\Phi_{\alpha,w} = \arg\min_{\alpha} \mathcal{L}_{val}(w^{*}(\alpha), \alpha)$ s.t. $w^{*}(\alpha) = \arg\min_{w} \mathcal{L}_{train}(w, \alpha)$. While recent works (Zoph et al., 2018; Liu et al., 2018c) have shown promising performance on tasks such as image classification and object detection, the existing models assume that the training data and testing data are sampled from the same distributions. Our goal is to jointly learn the architecture α and the weights w within all the mixed operations (e.g., weights of the convolution filters) so that the derived model $\Phi_{w^{*},\alpha^{*}}$ can transfer knowledge from $\mathcal{D}_{s}$ to $\mathcal{D}_{t}$ with some simple domain adaptation guidance. Initialized by Equation 1, we leverage the multi-kernel Maximum Mean Discrepancy (Gretton et al., 2007) to evaluate $\mathrm{disc}(\Phi^{*}(x^{s}), \Phi^{*}(x^{t}))$.
MK-MMD Let $\mathcal{H}_{k}$ be the Reproducing Kernel Hilbert Space (RKHS) endowed with a characteristic kernel $k$. The mean embedding of a distribution $P$ in $\mathcal{H}_{k}$ is a unique element $\mu_{k}(P)$ such that $\mathbb{E}_{x \sim P} f(x) = \langle f(x), \mu_{k}(P) \rangle_{\mathcal{H}_{k}}$ for all $f \in \mathcal{H}_{k}$. The MK-MMD $d_{k}(P, Q)$ between probability distributions $P$ and $Q$ is defined as the RKHS distance between the mean embeddings of $P$ and $Q$. The squared formulation of MK-MMD is defined as
$$d^{2}_{k}(P, Q) \triangleq \big\| \mathbb{E}_{P}[\Phi_{\alpha}(x^{s})] - \mathbb{E}_{Q}[\Phi_{\alpha}(x^{t})] \big\|^{2}_{\mathcal{H}_{k}}. \qquad (4)$$
In this paper, we consider the case of combining Gaussian kernels with injective functions $f_{\Phi}$, where $k(x, x') = \exp(-\| f_{\Phi}(x) - f_{\Phi}(x') \|^{2})$. Inspired by Long et al. (2015), the characteristic kernel associated with the feature map Φ, $k(x^{s}, x^{t}) = \langle \Phi(x^{s}), \Phi(x^{t}) \rangle$, is defined as the convex combination of $n$ positive semidefinite kernels $\{k_{u}\}$,
$$\mathcal{K} \triangleq \Big\{ k = \sum_{u=1}^{n} \beta_{u} k_{u} : \sum_{u=1}^{n} \beta_{u} = 1,\ \beta_{u} > 0,\ \forall u \Big\}, \qquad (5)$$
where the constraints on $\{\beta_{u}\}$ are imposed to guarantee that $k$ is characteristic. In practice, we use finite samples from the distributions to estimate the MMD distance. Given $X_{s} = \{x^{s}_{1}, \cdots, x^{s}_{m}\} \sim P$ and $X_{t} = \{x^{t}_{1}, \cdots, x^{t}_{m}\} \sim Q$, one estimator of $d^{2}_{k}(P, Q)$ is
$$\hat{d}^{2}_{k}(P, Q) = \frac{1}{\binom{m}{2}} \sum_{i \neq i'} k(x^{s}_{i}, x^{s}_{i'}) - \frac{2}{\binom{m}{2}} \sum_{i \neq j} k(x^{s}_{i}, x^{t}_{j}) + \frac{1}{\binom{m}{2}} \sum_{j \neq j'} k(x^{t}_{j}, x^{t}_{j'}). \qquad (6)$$
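The sketch below illustrates how the squared MK-MMD of Equations 4–6 can be estimated from mini-batch features with a fixed convex combination of Gaussian kernels. The bandwidths, the equal weights β_u, and the m(m−1) normalization of the within-domain sums are assumptions made for illustration rather than the paper's exact settings.

```python
import torch

def multi_kernel(x, y, bandwidths=(1.0, 2.0, 4.0)):
    """Convex combination of Gaussian kernels, k = sum_u beta_u * k_u (Eq. 5)."""
    d2 = torch.cdist(x, y) ** 2            # pairwise squared Euclidean distances
    beta = 1.0 / len(bandwidths)           # assumed equal weights summing to one
    return sum(beta * torch.exp(-d2 / (2.0 * s ** 2)) for s in bandwidths)

def mk_mmd2(feat_s, feat_t):
    """Estimate of d_k^2(P, Q) from m source and m target feature vectors (Eq. 6)."""
    m = feat_s.size(0)
    k_ss = multi_kernel(feat_s, feat_s)
    k_tt = multi_kernel(feat_t, feat_t)
    k_st = multi_kernel(feat_s, feat_t)
    # Exclude the diagonals so that i != i' and j != j'.
    sum_ss = (k_ss.sum() - k_ss.diagonal().sum()) / (m * (m - 1))
    sum_tt = (k_tt.sum() - k_tt.diagonal().sum()) / (m * (m - 1))
    return sum_ss + sum_tt - 2.0 * k_st.mean()
```

During the search phase this quantity is evaluated on the features produced by the task-specific layers and added to the source validation loss with weight λ.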
Algorithm 1 Neural Architecture Search for Domain Adaptation

Phase I: Searching Neural Architecture
1: Create a mixed operation $\bar{o}^{(i,j)}$ parametrized by $\alpha^{(i,j)}$ for each edge $(i, j)$
2: while not converged do
3:   Update architecture $\alpha$ by descending $\frac{\partial}{\partial \alpha} \mathcal{L}^{s}_{val}\big(w - \xi \frac{\partial}{\partial w} \mathcal{L}^{s}_{train}(w, \alpha),\ \alpha\big) + \lambda \frac{\partial}{\partial \alpha} \hat{d}^{2}_{k}\big(\Phi(x^{s}), \Phi(x^{t})\big)$
4:   Update weights $w$ by descending $\frac{\partial}{\partial w} \mathcal{L}^{s}_{train}(w, \alpha)$
5: end while
6: Derive the final architecture based on the learned $\alpha^{*}$.

Phase II: Adversarial Training for Domain Adaptation
1: Stack feature generator $G$ based on $\alpha^{*}$, initialize classifiers $C$
2: while not converged do
3:   Step one: Train $G$ and $C$ with $\mathcal{L}_{s}(x^{s}, y^{s}) = -\mathbb{E}_{(x^{s},y^{s})\sim \mathcal{D}_{s}} \sum_{k=1}^{K} \mathbb{1}_{[k=y^{s}]} \log p(y^{s}|x^{s})$
4:   Step two: Fix $G$, train $C$ with loss $\mathcal{L}_{s}(x^{s}, y^{s}) - \mathcal{L}_{adv}(x^{t})$ (Eq. 13)
5:   Step three: Fix $C$, train $G$ with loss $\mathcal{L}_{adv}(x^{t})$
6: end while
The merit of multi-kernel MMD lies in its differentiability, such that it can be easily incorporated into the deep network. However, the computation of $\hat{d}^{2}_{k}(P, Q)$ incurs a complexity of $O(m^{2})$, which is undesirable in the differentiable architecture search framework. In this paper, we use the unbiased estimation of MK-MMD (Gretton et al., 2012), which can be computed with linear complexity.
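A possible sketch of the linear-time unbiased estimator of Gretton et al. (2012) is shown below: samples are grouped into disjoint pairs and a kernel h-statistic is averaged, so the cost grows linearly in the batch size. The specific pairing scheme and kernel settings are assumptions rather than details taken from the paper.

```python
import torch

def gaussian_mk_pair(a, b, bandwidths=(1.0, 2.0, 4.0)):
    """Row-wise multi-kernel value k(a_i, b_i) for paired samples."""
    d2 = ((a - b) ** 2).sum(dim=1)
    beta = 1.0 / len(bandwidths)
    return sum(beta * torch.exp(-d2 / (2.0 * s ** 2)) for s in bandwidths)

def mk_mmd2_linear(feat_s, feat_t):
    """Linear-time unbiased MK-MMD estimate via an averaged h-statistic."""
    m = (min(feat_s.size(0), feat_t.size(0)) // 2) * 2   # use an even sample count
    xs1, xs2 = feat_s[0:m:2], feat_s[1:m:2]
    xt1, xt2 = feat_t[0:m:2], feat_t[1:m:2]
    # h = k(xs1, xs2) + k(xt1, xt2) - k(xs1, xt2) - k(xs2, xt1), one value per pair
    h = (gaussian_mk_pair(xs1, xs2) + gaussian_mk_pair(xt1, xt2)
         - gaussian_mk_pair(xs1, xt2) - gaussian_mk_pair(xs2, xt1))
    return h.mean()
```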
NAS for Domain Adaptation Denote by Lstrain and Lsval the training loss and validation loss on the source domain, respectively. Both losses are affected by the architecture α as well as by the weights w in the network. The goal for NASDA is to find α∗ that minimizes the validation loss Ltval(w∗, α∗) on the target domain, where the weights w∗ associated with the architecture are obtained by minimizing the training loss w∗ = argminw Lstrain(w,α∗). Due to the lack of labels in the target domain, it is prohibitive to compute Ltval directly, hampering the assumption of previous gradient-based NAS algorithms (Liu et al., 2019a; Chen et al., 2019). Instead, we derive α∗ by minimizing the validation loss Lsval(w∗, α∗) on the source domain plus the domain discrepancy, disc(Φ(xs),Φ(xt)), as shown in Equation 1.
Inspired by the gradient-based hyperparameter optimization (Franceschi et al., 2018; Pedregosa, 2016; Maclaurin et al., 2015), we set the architecture parameters α as a special type of hyperparameter. This implies a bilevel optimization problem (Anandalingam & Friesz, 1992) with α as the upper-level variable and w as the lower-level variable. In practice, we utilize the MK-MMD to evaluate the domain discrepancy. The optimization can be summarized as follows:
$$\Phi_{\alpha,w} = \arg\min_{\alpha}\ \Big( \mathcal{L}^{s}_{val}(w^{*}(\alpha), \alpha) + \lambda\, \hat{d}^{2}_{k}\big(\Phi(x^{s}), \Phi(x^{t})\big) \Big) \qquad (7)$$
$$\text{s.t.}\quad w^{*}(\alpha) = \arg\min_{w}\ \mathcal{L}^{s}_{train}(w, \alpha) \qquad (8)$$
where λ is the trade-off hyperparameter between the source validation loss and the MK-MMD loss.
Approximate Architecture Search Equation 7,8 imply that directly optimizing the architecture gradient is prohibitive due to the expensive inner optimization. Inspired by DARTS (Liu et al., 2019a), we approximate w∗(α) by adapting w using only a single training step, without solving the optimization in Equation 8 by training until convergence. This idea has been adopted and proven to be effective in meta-learning for model transfer (Finn et al., 2017), gradient-based hyperparameter tuning (Luketina et al., 2016) and unrolled generative adversarial networks. We therefore propose a simple approximation scheme as follows:
$$\frac{\partial}{\partial \alpha} \Big( \mathcal{L}^{s}_{val}(w^{*}(\alpha), \alpha) + \lambda\, \hat{d}^{2}_{k}\big(\Phi(x^{s}), \Phi(x^{t})\big) \Big) \approx \frac{\partial}{\partial \alpha} \mathcal{L}^{s}_{val}\Big(w - \xi \frac{\partial}{\partial w} \mathcal{L}^{s}_{train}(w, \alpha),\ \alpha\Big) + \lambda \frac{\partial}{\partial \alpha} \Big( \hat{d}^{2}_{k}\big(\Phi(x^{s}), \Phi(x^{t})\big) \Big) \qquad (9)$$
where $w - \xi \frac{\partial}{\partial w} \mathcal{L}^{s}_{train}(w, \alpha)$ denotes the weights for a one-step forward model and $\xi$ is the learning rate for a step of the inner optimization. Note that Equation 9 reduces to $\nabla_{\alpha} \mathcal{L}_{val}(w, \alpha)$ if $w$ is already a local optimum for the inner optimization and thus $\nabla_{w} \mathcal{L}_{train}(w, \alpha) = 0$.
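A simplified Phase-I update following Equation 9 and Algorithm 1 might look as follows. For brevity this sketch uses a first-order treatment (it omits the w − ξ∇w unrolling, whose second-order term is handled by the finite-difference approximation discussed next); the model(x, alpha) interface returning (features, logits) and the mmd_fn argument are assumptions for illustration.

```python
import torch

def search_step(model, alpha, w_optim, a_optim, batch, criterion, mmd_fn, lam=1.0):
    """One Phase-I iteration (first-order simplification of Equation 9).

    `model(x, alpha)` is assumed to return (features, logits); `mmd_fn` is an
    MK-MMD estimator such as the linear-time one sketched above.
    """
    xs, ys, xs_val, ys_val, xt = batch

    # Architecture step: source validation loss + lambda * MK-MMD to the target.
    a_optim.zero_grad()
    feat_s, logits_val = model(xs_val, alpha)
    feat_t, _ = model(xt, alpha)
    loss_alpha = criterion(logits_val, ys_val) + lam * mmd_fn(feat_s, feat_t)
    loss_alpha.backward()
    a_optim.step()

    # Weight step: descend the source training loss under the current architecture.
    w_optim.zero_grad()
    _, logits_train = model(xs, alpha)
    criterion(logits_train, ys).backward()
    w_optim.step()
```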
The second term of Equation 9 can be computed directly with some forward and backward passes. For the first term, applying chain rule to the approximate architecture gradient yields
$$\frac{\partial}{\partial \alpha} \mathcal{L}^{s}_{val}(w', \alpha) - \xi \Big( \frac{\partial^{2}}{\partial \alpha\, \partial w} \mathcal{L}^{s}_{train}(w, \alpha)\ \frac{\partial}{\partial w'} \mathcal{L}^{s}_{val}(w', \alpha) \Big) \qquad (10)$$
where $w' = w - \xi \frac{\partial}{\partial w} \mathcal{L}^{s}_{train}(w, \alpha)$. The expression above contains an expensive matrix-vector product in its second term. We leverage the central difference approximation to reduce the computational complexity. Specifically, let $\eta$ be a small scalar and $w^{\pm} = w \pm \eta \frac{\partial}{\partial w'} \mathcal{L}^{s}_{val}(w', \alpha)$. Then:
$$\frac{\partial^{2}}{\partial \alpha\, \partial w} \mathcal{L}^{s}_{train}(w, \alpha)\ \frac{\partial}{\partial w'} \mathcal{L}^{s}_{val}(w', \alpha) \approx \frac{\frac{\partial}{\partial \alpha} \mathcal{L}^{s}_{train}(w^{+}, \alpha) - \frac{\partial}{\partial \alpha} \mathcal{L}^{s}_{train}(w^{-}, \alpha)}{2\eta} \qquad (11)$$
Evaluating the central difference only requires two forward passes for the weights and two backward passes for α, reducing the complexity from quadratic to linear.
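The central-difference approximation of Equation 11 can be sketched as a small helper that perturbs the weights by ±η along the validation gradient and differences the resulting architecture gradients. The parameter-list interface and the default value of η are assumptions for illustration.

```python
import torch

def approx_hessian_vector_product(train_loss_fn, weights, alphas, dval_dw, eta=1e-2):
    """Central-difference approximation of Equation 11.

    `train_loss_fn()` must re-evaluate L^s_train(w, alpha) with the current weights;
    `dval_dw` holds dL^s_val/dw' evaluated at the unrolled weights w'.
    """
    with torch.no_grad():                     # w+ = w + eta * dL_val/dw'
        for w, g in zip(weights, dval_dw):
            w.add_(eta * g)
    grads_pos = torch.autograd.grad(train_loss_fn(), alphas)

    with torch.no_grad():                     # w- = w - eta * dL_val/dw'
        for w, g in zip(weights, dval_dw):
            w.sub_(2.0 * eta * g)
    grads_neg = torch.autograd.grad(train_loss_fn(), alphas)

    with torch.no_grad():                     # restore the original weights
        for w, g in zip(weights, dval_dw):
            w.add_(eta * g)

    return [(gp - gn) / (2.0 * eta) for gp, gn in zip(grads_pos, grads_neg)]
```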
3.3 ADVERSARIAL TRAINING FOR DOMAIN ADAPTATION
Through the neural architecture search in Section 3.2, we have derived the optimal cell structure ($\alpha^{*}$) for domain adaptation. We then stack the cells to derive our feature generator $G$. In this section, we describe how we consolidate $G$ via adversarial training of $G$ and the classifiers $C$. Assume $C$ includes $N$ independent classifiers $\{C^{(i)}\}_{i=1}^{N}$, and denote by $p_{i}(y|x)$ the $K$-way probabilistic output of $C^{(i)}$, where $K$ is the number of categories.
The high-level intuition is to consolidate the feature generator $G$ such that it can make the diversified $C$ generate similar outputs. To this end, our training process includes three steps: (1) train $G$ and $C$ on $\mathcal{D}_{s}$ to obtain task-specific features, (2) fix $G$ and train $C$ to make $\{C^{(i)}\}_{i=1}^{N}$ have diversified outputs, and (3) fix $C$ and train $G$ to minimize the output discrepancy between the classifiers. Related techniques have been used in Saito et al. (2018); Kumar et al. (2018).
First, we train both $G$ and $C$ to classify the source samples correctly with cross-entropy loss. This step is crucial as it enables $G$ and $C$ to extract the task-specific features. The training objective is $\min_{G,C} \mathcal{L}_{s}(x^{s}, y^{s})$, and the loss function is defined as follows:
$$\mathcal{L}_{s}(x^{s}, y^{s}) = -\mathbb{E}_{(x^{s}, y^{s}) \sim \mathcal{D}_{s}} \sum_{k=1}^{K} \mathbb{1}_{[k = y^{s}]} \log p(y^{s} | x^{s}) \qquad (12)$$
In the second step, we aim to diversify $C$. To achieve this goal, we fix $G$ and train $C$ to increase the discrepancy of $C$'s outputs. To avoid mode collapse (e.g., $C^{(1)}$ outputs all zeros and $C^{(2)}$ outputs all ones), we add $\mathcal{L}_{s}(x^{s}, y^{s})$ as a regularizer in the training process. The high-level intuition is that we do not expect $C$ to forget the information learned in the first step. The training objective is $\min_{C} \mathcal{L}_{s}(x^{s}, y^{s}) - \mathcal{L}_{adv}(x^{t})$, where the adversarial loss is defined as:
$$\mathcal{L}_{adv}(x^{t}) = \mathbb{E}_{x^{t} \sim \mathcal{D}_{t}} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \big\| p_{i}(y | x^{t}) - p_{j}(y | x^{t}) \big\|_{1} \qquad (13)$$
In the last step, we are trying to consolidate the feature generator $G$ by training $G$ to extract generalizable representations such that the discrepancy of $C$'s output is minimized. To achieve this goal, we fix the diversified classifiers $C$ and train $G$ with the adversarial loss (defined in Equation 13). The training objective is $\min_{G} \mathcal{L}_{adv}(x^{t})$.
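The three Phase-II steps can be summarized in the following sketch, where the adversarial loss of Equation 13 is the sum of pairwise L1 distances between the classifiers' softmax outputs. The optimizers, the number of classifiers, and the exact update schedule are assumptions for illustration rather than the authors' exact recipe.

```python
import itertools
import torch
import torch.nn.functional as F

def discrepancy(probs):
    """L_adv: sum of pairwise L1 distances between classifier outputs (Eq. 13)."""
    return sum((p_i - p_j).abs().sum(dim=1).mean()
               for p_i, p_j in itertools.combinations(probs, 2))

def phase2_step(G, classifiers, opt_g, opt_c, xs, ys, xt):
    # Step one: task-specific training of G and C on labeled source data (Eq. 12).
    opt_g.zero_grad(); opt_c.zero_grad()
    loss_s = sum(F.cross_entropy(C(G(xs)), ys) for C in classifiers)
    loss_s.backward(); opt_g.step(); opt_c.step()

    # Step two: fix G, diversify the classifiers while keeping the source loss low.
    opt_c.zero_grad()
    feat_t = G(xt).detach()
    probs_t = [F.softmax(C(feat_t), dim=1) for C in classifiers]
    loss_c = sum(F.cross_entropy(C(G(xs).detach()), ys) for C in classifiers) \
             - discrepancy(probs_t)
    loss_c.backward(); opt_c.step()

    # Step three: fix C, train G so the diversified classifiers agree on target data.
    opt_g.zero_grad()
    probs_t = [F.softmax(C(G(xt)), dim=1) for C in classifiers]
    discrepancy(probs_t).backward(); opt_g.step()
```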
4 EXPERIMENTS
We compare the proposed NASDA model with many state-of-the-art UDA baselines on multiple benchmarks. In the main paper, we only report major results; more details are provided in the supplementary material. All of our experiments are implemented in PyTorch.
In the architecture search phase, we use λ = 1 for all the searching experiments. We leverage the ReLU-Conv-BN order for convolutional operations, and each separable convolution is always applied twice. Our search space $\mathcal{O}$ includes the following operations: 3 × 3 and 5 × 5 separable convolutions, 3 × 3 and 5 × 5 dilated separable convolutions, 3 × 3 max pooling, identity, and zero. Our convolutional cell consists of N = 7 nodes. Cells located at 1/3 and 2/3 of the total depth of the network are reduction cells. The architecture encoding is therefore $(\alpha_{\mathrm{normal}}, \alpha_{\mathrm{reduce}})$, where $\alpha_{\mathrm{normal}}$ is shared by all the normal cells and $\alpha_{\mathrm{reduce}}$ is shared by all the reduction cells.
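The search-space settings described above can be collected into a small configuration sketch; the operation names follow the DARTS convention and the dictionary layout is an assumption for illustration.

```python
# Candidate operation set O and cell configuration used during the search phase.
CANDIDATE_OPS = [
    'sep_conv_3x3', 'sep_conv_5x5',      # separable convolutions (applied twice)
    'dil_conv_3x3', 'dil_conv_5x5',      # dilated separable convolutions
    'max_pool_3x3', 'skip_connect', 'none',
]

SEARCH_CONFIG = {
    'nodes_per_cell': 7,
    'reduction_cells_at': [1 / 3, 2 / 3],   # fraction of total network depth
    'lambda_mmd': 1.0,                      # trade-off weight for the MK-MMD loss
    'shared_alphas': ('alpha_normal', 'alpha_reduce'),
}
```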
4.1 SETUP
Digits We investigate three digit datasets: MNIST, USPS, and Street View House Numbers (SVHN). We adopt the evaluation protocol of CyCADA (Hoffman et al., 2018) with three transfer tasks: USPS to MNIST (U→M), MNIST to USPS (M→U), and SVHN to MNIST (S→M). We train our model using the training sets: MNIST (60,000), USPS (7,291), and the standard SVHN training set (73,257).
STL→CIFAR10 CIFAR10 (Krizhevsky et al., 2009) and STL (Coates et al., 2011) are both 10-class image datasets. These two datasets contain nine overlapping classes. We remove the ‘frog’ class in CIFAR10 and the ‘monkey’ class in STL, as they have no equivalent in the other dataset, resulting in a 9-class problem. The STL images were down-scaled to 32×32 resolution to match that of CIFAR10.
SYN SIGNS→GTSRB We evaluate the adaptation from the synthetic traffic sign dataset SYN SIGNS (Moiseev et al., 2013) to the real-world sign dataset GTSRB (Stallkamp et al., 2011). These datasets contain 43 classes.
We compare our NASDA model with state-of-the-art DA methods: Deep Adaptation Network (DAN) (Long et al., 2015), Domain Adversarial Neural Network (DANN) (Ganin & Lempitsky, 2015), Domain Separation Network (DSN) (Bousmalis et al., 2016), Coupled Generative Adversarial Networks (CoGAN) (Liu & Tuzel, 2016), Maximum Classifier Discrepancy (MCD) (Saito et al., 2018), Generate to Adapt (G2A) (Sankaranarayanan et al., 2018), Stochastic Neighborhood Embedding (d-SNE) (Xu et al., 2019b), Associative Domain Adaptation (ASSOC) (Haeusser et al., 2017).
4.2 EMPIRICAL RESULTS
Neural Architecture Search Results We show the neural architecture search results in Figure 2. We observe that our model contains more “avg_pool” and “5x5_conv” layers than other NAS models. This makes our model more generic, as the average pooling operation smooths out the image and hence the model is not congested with sharp features and domain-specific features. We also show that our NASDA model contains fewer parameters and takes less time to converge compared with state-of-the-art NAS architectures. Another interesting finding is that our NASDA contains more sequential connections in both the Normal and Reduce cells when trained on MNIST→USPS.

Unsupervised Domain Adaptation Results The UDA results for Digits and SYN SIGNS→GTSRB are reported in Table 2, with the results of baselines reported directly from the original papers when the protocol is the same (numbers with ∗ indicate training on partial data). The NASDA model achieves a 98.4% average accuracy on the Digits datasets, outperforming the other baselines. For the SYN SIGNS→GTSRB task, our model achieves results comparable to state-of-the-art baselines. The results demonstrate the effectiveness of our NASDA model on small images.
The UDA results on the STL→CIFAR10 recognition task are reported in Table 1. Our model achieves a performance of 76.8%, outperforming all the baselines. To compare our searched neural architecture with previous NAS models, we replace the neural architecture in G with other NAS models. The other training settings in the second phase are identical to those of our model. As such, we derive the NASNet+Phase II (Zoph et al., 2018), AmoebaNet+Phase II (Shah et al., 2018), DARTS+Phase II (Liu et al., 2019a), and PDARTS+Phase II (Chen et al., 2019) models. The results in Table 1 demonstrate that our model outperforms the other NAS-based models by a large margin, which shows the effectiveness of our model in unsupervised domain adaptation. Specifically, we set PDARTS+Phase II as an ablation study to demonstrate the effectiveness of our task-specific design in learning domain-adaptation-aware features.
Analysis To dive deeper into the training process of our NASDA model, we plot the t-SNE embedding of the weights of C on USPS→MNIST in Figure 3. This is achieved by recording the weights of all the classifiers at each epoch. The black dot indicates epoch zero, which is the common starting point. The color from light to dark corresponds to the epoch number from small to large. The t-SNE plots clearly show that the classifiers diverge from each other, demonstrating the effectiveness of the second step of our NASDA training described in Section 3.3.
5 CONCLUSION
In this paper, we first formulate a novel dual-objective task of Neural Architecture Search for Domain Adaptation (NASDA) to invigorate the design of transfer-aware network architectures. Towards tackling the NASDA task, we have proposed a novel learning framework that leverages MK-MMD to guide the neural architecture search process. Instead of aligning the features from existing handcrafted backbones, our model directly searches for the optimal neural architecture specific for domain adaptation. Furthermore, we have introduced the ways to consolidate the feature generator, which is stacked from the searched architecture, in order to boost the UDA performance. Extensive empirical evaluations on UDA benchmarks have demonstrated the efficacy of the proposed model against several state-of-the-art domain adaptation algorithms. | 1. What is the main contribution of the paper regarding domain adaptation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its originality and methodology?
3. Do you have any concerns regarding the formulation of the loss function and its dependence on weights?
4. How does the reviewer assess the effectiveness of the proposed approach compared to other domain adaptation strategies?
5. What are the limitations of the experimental setup and comparisons made in the paper?
6. How does the reviewer evaluate the clarity and readability of the paper's content? | Review | Review
This paper introduces an approach to search for the best network architecture for a domain adaptation task. This is achieved by following a differentiable architecture search strategy in which an additional loss function is included to account for the domain shift. Specifically, the loss function aims to minimize the discrepancy between feature representations from the two domains.
Strengths:
To the best of my knowledge, this constitutes the first attempt at performing neural architecture search for domain adaptation
The results showing that accounting for the DA task during NAS outperforms simply taking a NAS model and applying a DA strategy to it are encouraging.
Weaknesses:
Originality:
While I like the general concept of designing a NAS method specifically for domain adaptation, the proposed method lacks originality. In essence, it complements a standard NAS approach with standard DA strategies. This is fine, and the results are encouraging, but this seems to be a small contribution for an ICLR paper.
Methodology:
Something bothers me in the formulation of Eqs. (1)-(2). While the discrepancy term of course depends on the architecture, it also depends on the weights w. It is therefore not very intuitive to me why the discrepancy term does not appear in the inner minimization problem of Eq. (2).
The other point that I find disturbing in the methodology is the fact that, during the second training stage, a different DA strategy, relying on adversarial training instead of MK-MMD, is used. Why not train the final model with MK-MMD? Alternatively, why not rely on adversarial training during the search phase? This unfortunately makes the overall method less principled.
As mentioned by the authors, one cannot optimize the target validation loss, because the target data is unlabeled. However, it would be interesting to evaluate what happens using pseudo-labeling, which has proven effective in DA, to approximate the target validation loss.
Experiments:
In the comparison with the state-of-the-art DA methods in Tables 1 and 2, what architectures do these methods rely on? Are the comparisons fair, in the sense that all methods use networks with similar capacity?
The datasets used here are a bit outdated, and I recommend the authors to rely on newer benchmarks, e.g., VisDA or office-home.
The set of baselines differs across the different experiments. For example, d-SNE, which got the second best results on the digits datasets, is not reported for traffic signs and STL->CIFAR; DIRT-T that got the second-best results in Table 1 is not reported in Table 2.
Clarity:
The paper is reasonably easy to understand, but could benefit from thorough proofreading. |
ICLR | Title
Network Architecture Search for Domain Adaptation
Abstract
Deep networks have been used to learn transferable representations for domain adaptation. Existing deep domain adaptation methods systematically employ popular hand-crafted networks designed specifically for image-classification tasks, leading to sub-optimal domain adaptation performance. In this paper, we present Neural Architecture Search for Domain Adaptation (NASDA), a principled framework that leverages differentiable neural architecture search to derive the optimal network architecture for the domain adaptation task. NASDA is designed with two novel training strategies: neural architecture search with multi-kernel Maximum Mean Discrepancy to derive the optimal architecture, and adversarial training between a feature generator and a batch of classifiers to consolidate the feature generator. We demonstrate experimentally that NASDA leads to state-of-the-art performance on several domain adaptation benchmarks.
1 INTRODUCTION
Supervised machine learning models (Φ) aim to minimize the empirical test error $\epsilon(\Phi(x), y)$ by optimizing Φ on training data (x) and ground truth labels (y), assuming that the training and testing data are sampled i.i.d. from the same distribution. In practice, however, the training and testing data are typically collected from related domains under different distributions, a phenomenon known as domain shift (or domain discrepancy) (Quionero-Candela et al., 2009). To avoid the cost of annotating each new test set, Unsupervised Domain Adaptation (UDA) tackles domain shift by transferring the knowledge learned from a richly labeled source domain (P(xs, ys)) to the unlabeled target domain (Q(xt)). Recently, unsupervised domain adaptation research has achieved significant progress with techniques like discrepancy alignment (Long et al., 2017; Tzeng et al., 2014; Ghifary et al., 2014; Peng & Saenko, 2018; Long et al., 2015; Sun & Saenko, 2016), adversarial alignment (Xu et al., 2019a; Liu & Tuzel, 2016; Tzeng et al., 2017; Liu et al., 2018a; Ganin & Lempitsky, 2015; Saito et al., 2018; Long et al., 2018), and reconstruction-based alignment (Yi et al., 2017; Zhu et al., 2017; Hoffman et al., 2018; Kim et al., 2017). While such models typically learn a feature mapping from one domain (Φ(xs)) to another (Φ(xt)) or derive a joint representation across domains (Φ(xs) ⊗ Φ(xt)), the developed models have limited capacity in deriving an optimal neural architecture specific for domain transfer.
| 1. What is the main contribution of the paper regarding neural architecture search for domain adaptation?
2. What are the strengths and weaknesses of the proposed approach in terms of its ability to guarantee a small target error and ensure high target accuracy?
3. How does the reviewer assess the technique novelty and incremental nature of the first phase of the proposed method?
4. What are the concerns regarding the extension of MCD to multiple classifiers, and how does the reviewer suggest improving the performance of the model?
5. Why do other NAS models with the same phase II have extremely low results, and what are the differences between these models and NASDA that make a big difference in the results?
6. How can the authors justify that NAS can find more domain-invariant features than classical architectures including VGG, ResNet, DenseNet?
7. What are the suggestions for rigorous experimentation to study the usefulness of the proposed method on standard domain adaptation datasets? | Review | Review
In this work, the authors aim at improving the transferability of domain adaptation models from the perspective of neural architecture search. It consists of two phases: the first phase searches a neural architecture for domain adaptation based on a famous differentiable NAS method named DARTS, and the second phase develops an adversarial training method for domain adaptation by extending MCD to a multiple-classifier version. The empirical study evaluates the performance on some UDA tasks and shows the effectiveness of the introduced method.
Pros: 1. Tackling the problem of domain adaptation from the perspective of neural architecture search is interesting, and the authors successfully combine them together.
They not only minimize the validation loss of the source domain but also reduce the domain gap between the source and target since the labels in the target domain are not available. Though somewhat incremental, this design works in some NAS benchmarks. On the contrary, the extension of MCD to multiple classifiers is somewhat straightforward.
This paper is well-written and easy to follow, and the empirical result on one small-scale benchmark named STL → CIFAR10 is impressive.
Concerns: 1. As for the first phase, jointly minimizing the validation loss of the source domain and the MK-MMD based domain gap between the source and target, however, can NOT guarantee a small target error according to the analysis of [1], in which a simple counterexample is given in Figure 1 of [1]. My question is, by searching a differentiable neural architecture with the objective function defined in Equation 7, how can we guarantee it is suitable in the target domain and ensure a high target accuracy. The authors may want to search for a not so bad architecture first and then align the features across domains, however, the main purpose of this paper is the former but not the latter. Therefore, ablation studies are necessary to verify the effectiveness of the MK-MMD loss, such as comparing the architecture of the proposed method with that of DARTs. Note that, in my opinion, this comparison should be done without the second phase and different from the protocol of Table 1 to avoid the interference of the second phase.
The technique novelty of the first phase of combining DARTS with MK-MMD is somewhat incremental and limited. It seems that the first phase is just an instance of famous DARTS (proposed in the field of NAS) with the additional popular MK-MMD loss function which also has already been proposed in the field of domain adaptation. For me, neither new ideas in the field of DA nor new techniques in the field of NAS are seen.
3. As for the second phase, the extension of MCD to multiple classifiers is straightforward and somewhat incremental. Meanwhile, I am still curious about the performance if we increase the number of classifiers from 2 to 4 or more. If we simply ensemble several classifiers without adversarial training, we have reason to believe that we can achieve higher accuracy than the model with two classifiers, since the effectiveness of ensemble learning has been verified in the literature.
4. As shown in Table 2, the results of other NAS models with the same phase II are extremely low; however, there is not much difference between the architecture of NASDA and those of the other NAS models. In particular, the architecture of NASNet [2] is also designed to be transferable across datasets (from CIFAR to ImageNet), so I am quite curious why all of these architectures except NASDA are so weak. Would you mind giving some convincing analysis or explanations? It would be useful to shed more light on the differences with other NAS models; it is still unclear why these differences make such a big difference in the results. Meanwhile, the authors should provide some evidence to justify that NAS can find more domain-invariant features than classical architectures such as VGG, ResNet, and DenseNet.
5. Another key concern about the paper is the lack of rigorous experimentation to study the usefulness of the proposed method. State-of-the-art performance is more impressive and convincing when the benchmark adopts realistic settings. Typical domain adaptation benchmarks, such as Office-31, ImageCLEF-DA, VisDA, Office-Home, and DomainNet, as well as the newly proposed ImageNet-Sketch, have varying numbers of categories and dataset scales. It would be better to conduct experiments on these standard domain adaptation datasets and compare the proposed method with the numerous baselines on these benchmarks.
[1] Han Zhao, Remi Tachet des Combes, Kun Zhang, and Geoffrey J. Gordon. On Learning Invariant Representations for Domain Adaptation. In ICML, 2019.
[2] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8697– 8710, 2018 |
ICLR | Title
CLIPSep: Learning Text-queried Sound Separation with Noisy Unlabeled Videos
Abstract
Recent years have seen progress beyond domain-specific sound separation for speech or music towards universal sound separation for arbitrary sounds. Prior work on universal sound separation has investigated separating a target sound out of an audio mixture given a text query. Such text-queried sound separation systems provide a natural and scalable interface for specifying arbitrary target sounds. However, supervised text-queried sound separation systems require costly labeled audio-text pairs for training. Moreover, the audio provided in existing datasets is often recorded in a controlled environment, causing a considerable generalization gap to noisy audio in the wild. In this work, we aim to approach text-queried universal sound separation by using only unlabeled data. We propose to leverage the visual modality as a bridge to learn the desired audio-textual correspondence. The proposed CLIPSep model first encodes the input query into a query vector using the contrastive language-image pretraining (CLIP) model, and the query vector is then used to condition an audio separation model to separate out the target sound. While the model is trained on image-audio pairs extracted from unlabeled videos, at test time we can instead query the model with text inputs in a zero-shot setting, thanks to the joint language-image embedding learned by the CLIP model. Further, videos in the wild often contain off-screen sounds and background noise that may hinder the model from learning the desired audio-textual correspondence. To address this problem, we further propose an approach called noise invariant training for training a query-based sound separation model on noisy data. Experimental results show that the proposed models successfully learn text-queried universal sound separation using only noisy unlabeled videos, even achieving competitive performance against a supervised model in some settings.
1 INTRODUCTION
Humans can focus on a specific sound in the environment and describe it using language. Such abilities are learned using multiple modalities—auditory for selective listening, vision for learning the concepts of sounding objects, and language for describing the objects or scenes for communication. In machine listening, selective listening is often cast as the problem of sound separation, which aims to separate sound sources from an audio mixture (Cherry, 1953; Bach & Jordan, 2005). While text queries offer a natural interface for humans to specify the target sound to separate from a mixture (Liu et al., 2022; Kilgour et al., 2022), training a text-queried sound separation model in a supervised manner requires labeled audio-text paired data of single-source recordings of a vast number of sound types, which can be costly to acquire. Moreover, such isolated sounds are often recorded in controlled environments and have a considerable domain gap to recordings in the wild, which usually contain arbitrary noise and reverberations. In contrast, humans often leverage the visual modality to assist learning the sounds of various objects (Baillargeon, 2002). For instance, by observing a dog barking, a human can associate the sound with the dog, and can separately learn that the animal is called a “dog.” Further, such learning is possible even if the sound is observed in a noisy environment, e.g.,
∗Work done during an internship at Sony Group Corporation †Corresponding author
when a car is passing by or someone is talking nearby, where humans can still associate the barking sound solely with the dog. Prior work in psychophysics also suggests the intertwined cognition of vision and hearing (Sekuler et al., 1997; Shimojo & Shams, 2001; Rahne et al., 2007).
Motivated by this observation, we aim to tackle text-queried sound separation using only unlabeled videos in the wild. We propose a text-queried sound separation model called CLIPSep that leverages abundant unlabeled video data resources by utilizing the contrastive image-language pretraining (CLIP) (Radford et al., 2021) model to bridge the audio and text modalities. As illustrated in Figure 1, during training, the image feature extracted from a video frame by the CLIP-image encoder is used to condition a sound separation model, and the model is trained to separate the sound that corresponds to the image query in a self-supervised setting. Thanks to the properties of the CLIP model, which projects corresponding text and images to close embeddings, at test time we instead use the text feature obtained by the CLIP-text encoder from a text query in a zero-shot setting.
However, such zero-shot modality transfer can be challenging when we use videos in the wild for training as they often contain off-screen sounds and voice overs that can lead to undesired audiovisual associations. To address this problem, we propose the noise invariant training (NIT), where query-based separation heads and permutation invariant separation heads jointly estimate the noisy target sounds. We validate in our experiments that the proposed noise invariant training reduces
the zero-shot modality transfer gap when the model is trained on a noisy dataset, sometimes achieving competitive results against a fully supervised text-queried sound separation system.
Our contributions can be summarized as follows: 1) We propose the first text-queried universal sound separation model that can be trained on unlabeled videos. 2) We propose a new approach called noise invariant training for training a query-based sound separation model on noisy data in the wild. Audio samples can be found on our demo website.1 For reproducibility, all source code, hyperparameters and pretrained models are available at: https://github.com/sony/CLIPSep.
2 RELATED WORK
Universal sound separation Much prior work on sound separation focuses on separating sounds for a specific domain such as speech (Wang & Chen, 2018) or music (Takahashi & Mitsufuji, 2021; Mitsufuji et al., 2021). Recent advances in domain-specific sound separation have led to several attempts to generalize to arbitrary sound classes. Kavalerov et al. (2019) reported successful results on separating arbitrary sounds with a fixed number of sources by adopting the permutation invariant training (PIT) (Yu et al., 2017), which was originally proposed for speech separation. While this approach does not require labeled data for training, a post-selection process is required as we cannot tell what sounds are included in each separated result. Follow-up work (Ochiai et al., 2020; Kong et al., 2020) addressed this issue by conditioning the separation model with a class label to specify the target sound in a supervised setting. However, these approaches still require labeled data for training, and the interface for selecting the target class becomes cumbersome when we need a large number of classes to handle open-domain data. Wisdom et al. (2020) later proposed an unsupervised method called mixture invariant training (MixIT) for learning sound separation on noisy data. MixIT is designed to separate all sources at once and also requires a post-selection process such as using a pre-trained sound classifier (Scott et al., 2021), which requires labeled data for training, to identify the target sounds. We summarize and compare related work in Table 1.
Query-based sound separation Visual information has been used for selecting the target sound in speech (Ephrat et al., 2019; Afouras et al., 2020), music (Zhao et al., 2018; 2019; Tian et al., 2021) and universal sounds (Owens & Efros, 2018; Gao et al., 2018; Rouditchenko et al., 2019). While many image-queried sound separation approaches require clean video data that contains isolated sources, Tzinis et al. (2021) introduced an unsupervised method called AudioScope for separating on-screen sounds using noisy videos based on the MixIT model. While image queries can serve as a
1https://sony.github.io/CLIPSep/
natural interface for specifying the target sound in certain use cases, images of target sounds become unavailable in low-light conditions and for sounds from out-of-screen objects.
Another line of research uses the audio modality to query acoustically similar sounds. Chen et al. (2022) showed that such an approach can generalize to unseen sounds. Later, Gfeller et al. (2021) cropped two disjoint segments from a single recording and used them as a query-target pair to train a sound separation model, assuming both segments contain the same sound source. However, in many cases, it is impractical to prepare a reference audio sample for the desired sound as the query.
Most recently, text-queried sound separation has been studied as it provides a natural and scalable interface for specifying arbitrary target sounds as compared to systems that use a fixed set of class labels. Liu et al. (2022) employed a pretrained language model to encode the text query, and condition the model to separate the corresponding sounds. Kilgour et al. (2022) proposed a model that accepts audio or text queries in a hybrid manner. These approaches, however, require labeled text-audio paired data for training. Different from prior work, our goal is to learn text-queried sound separation for arbitrary sound without labeled data, specifically using unlabeled noisy videos in the wild.
Contrastive language-image-audio pretraining The CLIP model (Radford et al., 2021) has been used as a pretraining of joint embedding spaces among text, image and audio modalities for downstream tasks such as audio classification (Wu et al., 2022; Guzhov et al., 2022) and sound guided image manipulation (Lee et al., 2022). Pretraining is done either in a supervised manner using labels (Guzhov et al., 2022; Lee et al., 2022) or in a self-supervised manner by training an additional audio encoder to map input audio to the pretrained CLIP embedding space (Wu et al., 2022). In contrast, we explore the zero-shot modality transfer capability of the CLIP model by freezing the pre-trained CLIP model and directly optimizing the rest of the model for the target sound separation task.
3 METHOD
3.1 CLIPSEP—LEARNING TEXT-QUERIED SOUND SEPARATION WITHOUT LABELED DATA
In this section, we propose the CLIPSep model for text-queried sound separation without using labeled data. We base the CLIPSep model on Sound-of-Pixels (SOP) (Zhao et al., 2018) and replace the video analysis network of the SOP model. As illustrated in Figure 2, during training, the model takes as inputs an audio mixture $x = \sum_{i=1}^{n} s_i$, where $s_1, \ldots, s_n$ are the $n$ audio tracks, along with their corresponding images $y_1, \ldots, y_n$ extracted from the videos. We first transform the audio mixture $x$ into a magnitude spectrogram $X$ and pass the spectrogram through an audio U-Net (Ronneberger et al., 2015; Jansson et al., 2017) to produce $k$ ($\geq n$) intermediate masks $\tilde{M}_1, \ldots, \tilde{M}_k$. On the other stream, each image is encoded by the pretrained CLIP model (Radford et al., 2021) into an embedding $e_i \in \mathbb{R}^{512}$. The CLIP embedding $e_i$ will further be projected to a query vector $q_i \in \mathbb{R}^k$ by a projection layer, which is expected to extract only audio-relevant information from $e_i$.2 Finally, the query vector $q_i$ will be used to mix the intermediate masks into the final predicted masks $\hat{M}_i = \sum_{j=1}^{k} \sigma\left(w_{ij} q_{ij} \tilde{M}_j + b_i\right)$, where $w_i \in \mathbb{R}^k$ is a learnable scale vector, $b_i \in \mathbb{R}$ a learnable bias, and $\sigma(\cdot)$ the sigmoid function. Now, suppose $M_i$ is the ground truth mask for source $s_i$. The training objective of the model is the sum of the weighted binary cross entropy losses for each source:

$$\mathcal{L}_{\text{CLIPSep}} = \sum_{i=1}^{n} \mathrm{WBCE}(M_i, \hat{M}_i) = \sum_{i=1}^{n} X \odot \left( -M_i \log \hat{M}_i - (1 - M_i) \log \left( 1 - \hat{M}_i \right) \right). \quad (1)$$
At test time, thanks to the joint image-text embedding offered by the CLIP model, we feed a text query instead of an image to the query model to obtain the query vector and separate the target sounds accordingly (see Appendix A for an illustration). As suggested by Radford et al. (2021), we prefix the text query into the form of “a photo of [user input query]” to reduce the generalization gap.3
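To make the mask mixing and training objective above concrete, the following is a minimal PyTorch-style sketch of the CLIPSep loss in Equation (1). The module names (`audio_unet`, `clip_image_encoder`, `proj`), the tensor shapes, and the placement of the sigmoid over the weighted combination are illustrative assumptions rather than the authors' exact released implementation.

```python
import torch
import torch.nn.functional as F

def clipsep_loss(mix_spec, images, gt_masks, audio_unet, clip_image_encoder, proj, w, b):
    """Sketch of the CLIPSep objective in Eq. (1).

    mix_spec : (B, F, T) magnitude spectrogram X of the mixture
    images   : list of n image batches, each (B, 3, 224, 224)
    gt_masks : list of n ground-truth masks M_i, each (B, F, T) in [0, 1]
    w, b     : learnable scales (n, k) and biases (n,) for mixing the k masks
    """
    k_masks = audio_unet(mix_spec.unsqueeze(1))        # (B, k, F, T) intermediate masks
    loss = 0.0
    for i, (img, M) in enumerate(zip(images, gt_masks)):
        with torch.no_grad():                           # CLIP stays frozen
            e = clip_image_encoder(img)                 # (B, 512) image embedding
        q = proj(e)                                     # (B, k) query vector q_i
        # mix the k intermediate masks with the query vector (sigmoid applied to the
        # weighted combination -- one reading of the paper's mixing equation)
        mixed = torch.einsum('bk,bkft->bft', w[i] * q, k_masks) + b[i]
        M_hat = torch.sigmoid(mixed)                    # final predicted mask
        # weighted BCE: the binary cross entropy is weighted by the mixture spectrogram X
        wbce = mix_spec * F.binary_cross_entropy(M_hat, M, reduction='none')
        loss = loss + wbce.mean()
    return loss
```

At test time the image embedding would simply be replaced by the CLIP text embedding of the query, leaving the rest of the computation unchanged.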
3.2 NOISE INVARIANT TRAINING—HANDLING NOISY DATA IN THE WILD
While the CLIPSep model can separate sounds given image or text queries, it assumes that the sources are clean and contain few query-irrelevant sounds. However, this assumption does not hold for videos in the wild as many of them contain out-of-screen sounds and various background noises. Inspired by the mixture invariant training (MixIT) proposed by Wisdom et al. (2020), we further propose the noise invariant training (NIT) to tackle the challenge of training with noisy data. As illustrated in Figure 3, we introduce $n$ additional permutation invariant heads called noise heads to the CLIPSep model, where the masks predicted by these heads are interchangeable during loss computation. Specifically, we introduce $n$ additional projection layers, and each of them takes as input the sum of all query vectors produced by the query heads (i.e., $\sum_{i=1}^{n} q_i$) and produces a vector that is later used to mix the intermediate masks into the predicted noise mask. In principle, the query masks produced by the query vectors are expected to extract query-relevant sounds due to their stronger correlations to their corresponding queries, while the interchangeable noise masks should ‘soak up’ other sounds.
2We extract three frames with 1-sec intervals and compute their mean CLIP embedding as the input to the projection layer to reduce the negative effects when the selected frame does not contain the objects of interest.
3Similar to how we prepare the image queries, we create four queries from the input text query using four query templates (see Appendix B) and take their mean CLIP embedding as the input to the projection layer.
Mathematically, let $M^Q_1, \ldots, M^Q_n$ be the predicted query masks and $M^N_1, \ldots, M^N_n$ be the predicted noise masks. Then, the noise invariant loss is defined as:

$$\mathcal{L}_{\text{NIT}} = \min_{(j_1, \ldots, j_n) \in \Sigma_n} \sum_{i=1}^{n} \mathrm{WBCE}\left( M_i, \min\left( 1, \hat{M}^Q_i + \hat{M}^N_{j_i} \right) \right), \quad (2)$$

where $\Sigma_n$ denotes the set of all permutations of $\{1, \ldots, n\}$.4 Take $n = 2$ for example.5 We consider the two possible ways for combining the query heads and the noise heads:

$$\text{(Arrangement 1)} \quad \hat{M}_1 = \min\left( 1, \hat{M}^Q_1 + \hat{M}^N_1 \right), \quad \hat{M}_2 = \min\left( 1, \hat{M}^Q_2 + \hat{M}^N_2 \right), \quad (3)$$

$$\text{(Arrangement 2)} \quad \hat{M}'_1 = \min\left( 1, \hat{M}^Q_1 + \hat{M}^N_2 \right), \quad \hat{M}'_2 = \min\left( 1, \hat{M}^Q_2 + \hat{M}^N_1 \right). \quad (4)$$

Then, the noise invariant loss is defined as the smallest loss achievable:

$$\mathcal{L}^{(2)}_{\text{NIT}} = \min\left( \mathrm{WBCE}\left( M_1, \hat{M}_1 \right) + \mathrm{WBCE}\left( M_2, \hat{M}_2 \right),\; \mathrm{WBCE}\left( M_1, \hat{M}'_1 \right) + \mathrm{WBCE}\left( M_2, \hat{M}'_2 \right) \right). \quad (5)$$
Once the model is trained, we discard the noise heads and use only the query heads for inference (see Appendix A for an illustration). Unlike the MixIT model (Wisdom et al., 2020), our proposed noise invariant training still allows us to specify the target sound by an input query, and it does not require any post-selection process as we only use the query heads during inference.
In practice, we find that the model tends to assign part of the target sounds to the noise heads as these heads can freely enjoy the optimal permutation to minimize the loss. Hence, we further introduce a regularization term to penalize producing high activations on the noise masks:
$$\mathcal{L}_{\text{REG}} = \max\left( 0, \sum_{i=1}^{n} \operatorname{mean}\left( \hat{M}^N_i \right) - \gamma \right), \quad (6)$$

where $\gamma \in [0, n]$ is a hyperparameter that we will refer to as the noise regularization level. The proposed regularization has no effect when the sum of the means of all the noise masks is lower than a predefined threshold $\gamma$, while having a linearly growing penalty when the sum is higher than $\gamma$. Finally, the training objective of the CLIPSep-NIT model is a weighted sum of the noise invariant loss and the regularization term: $\mathcal{L}_{\text{CLIPSep-NIT}} = \mathcal{L}_{\text{NIT}} + \lambda \mathcal{L}_{\text{REG}}$, where $\lambda \in \mathbb{R}$ is a weight hyperparameter. We set $\lambda = 0.1$ for all experiments, which we find works well across different settings.
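A compact sketch of the noise invariant loss and its regularizer (Equations 2–6) is given below; the helper names and the clamping used to keep the binary cross entropy numerically stable are illustrative assumptions, not the authors' exact code.

```python
import itertools
import torch

def noise_invariant_loss(gt_masks, query_masks, noise_masks, mix_spec, gamma=0.25, lam=0.1):
    """Sketch of the CLIPSep-NIT objective for n sources (Eqs. 2-6).

    gt_masks, query_masks, noise_masks : lists of n tensors, each (B, F, T) in [0, 1]
    mix_spec : (B, F, T) mixture magnitude spectrogram used as the WBCE weight
    """
    def wbce(target, pred):
        pred = pred.clamp(1e-7, 1.0 - 1e-7)             # keep the logs finite
        return (mix_spec * (-target * pred.log()
                            - (1.0 - target) * (1.0 - pred).log())).mean()

    n = len(gt_masks)
    # Eq. (2): search over all n! assignments of noise heads to query heads
    best = None
    for perm in itertools.permutations(range(n)):
        total = sum(
            wbce(gt_masks[i],
                 torch.clamp(query_masks[i] + noise_masks[perm[i]], max=1.0))
            for i in range(n)
        )
        best = total if best is None else torch.minimum(best, total)

    # Eq. (6): penalize total mean noise-head activation above the level gamma
    noise_act = sum(m.mean() for m in noise_masks)
    reg = torch.clamp(noise_act - gamma, min=0.0)
    return best + lam * reg                              # L_NIT + lambda * L_REG
```

For $n = 2$ the loop reduces exactly to the two arrangements in Equations (3)–(5); at inference the noise masks are simply dropped.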
4We note that CLIPSep-NIT considers $2n$ sources in total as the model has $n$ query heads and $n$ noise heads. While PIT (Yu et al., 2017) and MixIT (Wisdom et al., 2020) respectively require $O((2n)!)$ and $O(2^{2n})$ search to consider $2n$ sources, the proposed NIT only requires $O(n!)$ permutations in the loss computation.
5Since our goal is not to further separate the noise into individual sources but to separate the sounds that correspond to the query, n may not need to be large. In practice, we find that the CLIPSep-NIT model with n = 2 already learns to handle the noise properly and can successfully transfer to the text-queried mode. Thus, we use n = 2 throughout this paper and leave the testing on larger n as future work.
4 EXPERIMENTS
We base our implementations on the code provided by Zhao et al. (2018) (https://github.com/hangzhaomit/Sound-of-Pixels). Implementation details can be found in Appendix C.
4.1 EXPERIMENTS ON CLEAN DATA
We first evaluate the proposed CLIPSep model without the noise invariant training on the musical instrument sound separation task using the MUSIC dataset, as done in (Zhao et al., 2018). This experiment is designed to focus on evaluating the quality of the learned query vectors and the zero-shot modality transferability of the CLIPSep model on a small, clean dataset rather than showing its ability to separate arbitrary sounds. The MUSIC dataset is a collection of 536 video recordings of people playing a musical instrument out of 11 instrument classes. Since no existing work has trained a text-queried sound separation model using only unlabeled data to our knowledge, we compare the proposed CLIPSep model with two baselines that serve as upper bounds—the PIT model (Yu et al., 2017, see Appendix D for an illustration) and a version of the CLIPSep model where the query model is replaced by learnable embeddings for the labels, which we will refer to as the LabelSep model. In addition, we also include the SOP model (Zhao et al., 2018) to investigate the quality of the query vectors as the CLIPSep and SOP models share the same network architecture except for the query model.
We report the results in Table 2. Our proposed CLIPSep model achieves a mean signal-to-distortion ratio (SDR) (Vincent et al., 2006) of 5.49 dB and a median SDR of 4.97 dB using text queries in a zero-shot modality transfer setting. When using image queries, the performance of the CLIPSep model is comparable to that of the SOP model. This indicates that the CLIP embeddings are as informative as those produced by the SOP model. The performance difference between the CLIPSep model using text and image queries at test time indicates the zero-shot modality transfer gap. We observe 1.54 dB and 0.88 dB differences on the mean and median SDRs, respectively. Moreover,
we also report in Table 2 and Figure 4 the performance of the CLIPSep models trained on different modalities to investigate their modality transferability in different settings. We notice that when we train the CLIPSep model using text queries, dubbed as CLIPSep-Text, the mean SDR using text queries increases to 7.91 dB. However, when we test this model using image queries, we observe a 1.66 dB difference on the mean SDR as compared to that using text queries, which is close to
the mean SDR difference we observe for the model trained with image queries. Finally, we train a CLIPSep model using both text and image queries in alternation, dubbed as CLIPSep-Hybrid. We see that it leads to the best test performance for both text and image modalities, and there is only a mean SDR difference of 0.30 dB between using text and image queries. As a reference, the LabelSep model trained with labeled data performs worse than the CLIPSep-Hybrid model using text queries. Further, the PIT model achieves a mean SDR of 8.68 dB and a median SDR of 7.67 dB, but it requires post-processing to figure out the correct assignments.
4.2 EXPERIMENTS ON NOISY DATA
Next, we evaluate the proposed method on a large-scale dataset aiming at universal sound separation. We use the VGGSound dataset (Chen et al., 2020), a large-scale audio-visual dataset containing more than 190,000 10-second videos in the wild out of more than 300 classes. We find that the audio in the VGGSound dataset is often noisy and contains off-screen sounds and background noise. Although we train the models on such noisy data, it is not suitable to use the noisy data as targets for evaluation because it fails to provide reliable results. For example, if the target sound labeled as “dog barking” also contains human speech, separating only the dog barking sound provides a lower SDR value than separating the mixture of dog barking sound and human speech even though the text query is “dog barking”. (Note that we use the labels only for evaluation but not for training.) To avoid this issue, we consider the following two evaluation settings:
• MUSIC+: Samples in the MUSIC dataset are used as clean targets and mixed with a sample in the VGGSound dataset as an interference. The separation quality is evaluated on the clean target from the MUSIC dataset. As we do not use the MUSIC dataset for training, this can be considered as zero-shot transfer to a new data domain containing unseen sounds (Radford et al., 2019; Brown et al., 2020). To avoid the unexpected overlap of the target sound types in the MUSIC and VGGSound datasets caused by the label mismatch, we exclude all the musical instrument playing videos from the VGGSound dataset in this setting.
• VGGSound-Clean+: We manually collect 100 clean samples that contain distinct target sounds from the VGGSound test set, which we will refer to as VGGSound-Clean. We mix an audio sample in VGGSound-Clean with another in the test set of VGGSound. Similarly, we consider the VGGSound audio as an interference sound added to the relatively cleaner VGGSound-Clean audio and evaluate the separation quality on the VGGSound-Clean stem.
Table 3 shows the evaluation results. First, CLIPSep successfully learns text-queried sound separation even with noisy unlabeled data, achieving 5.22 dB and 3.53 dB SDR improvements over the mixture on MUSIC+ and VGGSound-Clean+, respectively. By comparing CLIPSep and CLIPSep-NIT, we observe that NIT improves the mean SDRs in both settings. Moreover, on MUSIC+, CLIPSep-NIT’s performance matches that of CLIPSep-Text, which utilizes labels for training, achieving only a 0.46 dB lower mean SDR and even a 0.05 dB higher median SDR. This result suggests that the proposed self-supervised text-queried sound separation method can learn separation capability competitive with the fully supervised model for some target sounds. In contrast, there is still a gap between them on VGGSound-Clean+, possibly because the videos of non-music-instrument objects are noisier in both the audio and visual domains, thus resulting in a more challenging zero-shot modality transfer. This hypothesis is also supported by the higher zero-shot modality transfer gap (mean SDR difference of image- and text-queried mode) of 1.79 dB on VGGSound-Clean+ than that of 1.01 dB on MUSIC+ for CLIPSep-NIT. In addition, we consider another baseline model that replaces the CLIP model in CLIPSep with a BERT encoder (Devlin et al., 2019), which we call BERTSep. Interestingly, although BERTSep performs similarly to CLIPSep-Text on VGGSound-Clean+, the performance of BERTSep is significantly lower than that of CLIPSep-Text on MUSIC+, indicating that BERTSep fails to generalize to unseen text queries. We hypothesize that the CLIP text embedding captures the timbral similarity of musical instruments better than the BERT embedding does, because the CLIP model is aware of the visual similarity between musical instruments during training. Moreover, it is interesting to see that CLIPSep outperforms CLIPSep-NIT when an image query is used at test time (domain-matched condition), possibly because images contain richer context information than labels, such as nearby objects and backgrounds, and the models can use such information to better separate the target sound. While CLIPSep has to fully utilize such information, CLIPSep-NIT can use the noise heads to model sounds that are less relevant to the image query. Since we remove the noise heads from CLIPSep-NIT during the evaluation, it can rely less on such information from the image, thus improving the zero-shot modality transferability. Figure 5 shows an example of the separation results on MUSIC+ (see Figures 12 to 15 for more examples). We observe that the two noise heads contain mostly background noise. Audio samples can be found on our demo website.1
4.3 EXAMINING THE EFFECTS OF THE NOISE REGULARIZATION LEVEL γ
In this experiment, we examine the effects of the noise regularization level γ in Equation (6) by changing the value from 0 to 1. As we can see from Figure 6 (a) and (b), CLIPSep-NIT with γ = 0.25 achieves the highest SDR on both evaluation settings. This suggests that the optimal γ value is not sensitive to the evaluation dataset. Further, we also report in Figure 6 (c) the total mean noise head activation, $\sum_{i=1}^{n} \operatorname{mean}(\hat{M}^N_i)$, on the validation set. As $\hat{M}^N_i$ is the mask estimate for the noise, the total mean noise head activation value indicates to what extent signals are assigned to the noise head. We observe that the proposed regularizer successfully keeps the total mean noise head activation close to the desired level, γ, for γ ≤ 0.5. Interestingly, the total mean noise head activation is still around 0.5 when γ = 1.0, suggesting that the model inherently tries to use both the query heads and the noise heads to predict the noisy target sounds. Moreover, while we discard the noise heads during evaluation in our experiments, keeping the noise heads can lead to a higher SDR as shown in
Figure 6 (a) and (b), which can be helpful in certain use cases where a post-processing procedure similar to the PIT model (Yu et al., 2017) is acceptable.
5 DISCUSSIONS
For the experiments presented in this paper, we work on labeled datasets so that we can evaluate the performance of the proposed models. However, our proposed models do not require any labeled data for training, and can thus be trained on larger unlabeled video collections in the wild. Moreover, we observe that the proposed model shows the capability of combining multiple queries, e.g., “a photo of [query A] and [query B],” to extract multiple target sounds, and we report the results on the demo website. This offers a more natural user interface compared to having to separate each target sound and mix them via an additional post-processing step. We also show in Appendix G that our proposed model is robust to different text queries and can extract the desired sounds.
In our experiments, we often observe a modality transfer gap greater than 1 dB difference of SDR. A future research direction is to explore different approaches to reduce the modality transfer gap. For example, the CLIP model is pretrained on a different dataset, and thus finetuning the CLIP model on the target dataset can help improve the underlying modality transferability within the CLIP model. Further, while the proposed noise invariant training is shown to improve the training on noisy data and reduce the modality transfer gap, it still requires a sufficient audio-visual correspondence for training video. In other words, if the audio and images are irrelevant in most videos, the model will struggle to learn the correspondence between the query and target sound. In practice, we find that the data in the VGGSound dataset often contains off-screen sounds and the labels sometimes correspond to only part of the video content. Hence, filtering on the training data to enhance its audio-visual correspondence can also help reduce the modality transfer gap. This can be achieved by self-supervised audio-visual correspondence prediction (Arandjelović & Zisserman, 2017a;b) or temporal synchronization (Korbar et al., 2018; Owens & Efros, 2018).
Another future direction is to explore the semi-supervised setting where a small subset of labeled data can be used to improve the modality transferability. We can also consider the proposed method as a pretraining on unlabeled data for other separation tasks in the low-resource regime. We include in Appendix H a preliminary experiment in this aspect using the ESC-50 dataset (Piczak, 2015).
6 CONCLUSION
In this work, we have presented a novel text-queried universal sound separation model that can be trained on noisy unlabeled videos. To this end, we have proposed to use contrastive image-language pretraining to bridge the audio and text modalities, and proposed the noise invariant training for training a query-based sound separation model on noisy data. We have shown that the proposed models can learn to separate an arbitrary sound specified by a text query out of a mixture, even achieving competitive performance against a fully supervised model in some settings. We believe our proposed approach closes the gap between the ways humans and machines learn to focus on a sound in a mixture, namely, the multi-modal self-supervised learning paradigm of humans versus the supervised learning paradigm adopted by existing label-based machine learning approaches.
ACKNOWLEDGEMENTS
We would like to thank Stefan Uhlich, Giorgio Fabbro and Woosung Choi for their helpful comments during the preparation of this manuscript. We also thank Mayank Kumar Singh for supporting the setup of the subjective test in Appendix F. Hao-Wen thanks the J. Yang and Family Foundation and the Taiwan Ministry of Education for supporting his PhD study.
B QUERY ENSEMBLING
Radford et al. (2021) suggest that using a prompt template in the form of “a photo of [user input query]” helps bridge the distribution gap between text queries used for zero-shot image classification and text in the training dataset for the CLIP model. They further show that an ensemble of various prompt templates improves the generalizability. Motivated by this observation, we adopt a similar idea and use several query templates at test time (see Table 4). These query templates are heuristically chosen to handle the noisy images extracted from videos.
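As a rough illustration of this query ensembling step, the snippet below averages the CLIP text embeddings of several prompt templates into a single query embedding. The template strings are placeholders (the actual set is listed in Table 4), and the use of the open-source `clip` package is an assumption made for illustration.

```python
import torch
import clip  # OpenAI CLIP package, assumed available as in the SOP/CLIPSep code base

# Placeholder templates; the exact templates used by the paper are given in Table 4.
TEMPLATES = [
    "a photo of {}",
    "a photo of the {}",
    "a picture of {}",
    "an image of {}",
]

@torch.no_grad()
def text_query_embedding(query: str, model, device: str = "cpu") -> torch.Tensor:
    """Average the CLIP text embeddings over several prompt templates."""
    prompts = [t.format(query) for t in TEMPLATES]
    tokens = clip.tokenize(prompts).to(device)           # (num_templates, 77)
    emb = model.encode_text(tokens).float()              # (num_templates, 512)
    return emb.mean(dim=0)                                # single 512-d query embedding

# Usage sketch:
# model, _ = clip.load("ViT-B/32", device="cpu")
# q = text_query_embedding("acoustic guitar", model)
```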
C IMPLEMENTATION DETAILS
We implement the audio model as a 7-layer U-Net (Ronneberger et al., 2015). We use $k = 32$. We use binary masks as the ground truth masks during training while using the raw, real-valued masks for evaluation. We train all the models for 200,000 steps with a batch size of 32. We use the Adam optimizer (Kingma & Ba, 2015) with $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 10^{-8}$. In addition, we clip the norm of the gradients to 1.0 (Zhang et al., 2020). We adopt the following learning rate schedule with a warm-up—the learning rate starts from 0 and grows to 0.001 after 5,000 steps, and then it linearly drops to 0.0001 at 100,000 steps and keeps this value thereafter. We validate the model every 10,000 steps using image queries as we do not assume labeled data is available for the validation set. We use a sampling rate of 16,000 Hz and work on audio clips of length 65,535 samples (≈ 4 seconds). During training, we randomly sample a center frame from a video and extract three frames (images) with 1-sec intervals and 4-sec audio around the center frame. During inference, for image-queried models, we extract three frames with 1-sec intervals around the center of the test clip. For the spectrogram computation, we use a filter length of 1024, a hop length of 256 and a window size of 1024 in the short-time Fourier transform (STFT). We resize images extracted from video to a size of 224-by-224 pixels. For the CLIPSep-Hybrid model, we alternately train the model with text and image queries, i.e., one batch with all image queries and the next with all text queries, and so on. We implement all the models using the PyTorch library (Paszke et al., 2019). We compute the signal-to-distortion ratio (SDR) using museval (Stöter et al., 2018).
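The warm-up and linear-decay schedule described above can be written as a small helper, sketched below; the step thresholds are taken from the text, while the function itself is only an illustrative reconstruction.

```python
def learning_rate(step: int) -> float:
    """Warm-up then linear-decay learning rate schedule described in Appendix C (sketch)."""
    if step < 5_000:                               # linear warm-up from 0 to 1e-3
        return 1e-3 * step / 5_000
    if step < 100_000:                             # linear decay from 1e-3 to 1e-4
        frac = (step - 5_000) / (100_000 - 5_000)
        return 1e-3 + frac * (1e-4 - 1e-3)
    return 1e-4                                    # constant thereafter
```

In PyTorch this could be applied, for instance, through `torch.optim.lr_scheduler.LambdaLR` by dividing the returned value by the base learning rate.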
In our preliminary experiments, we also tried directly predicting the final mask by conditioning the audio model on the query vector. We applied this modification for both SOP and CLIPSep models, however, we observe that this architecture is prone to overfitting. We hypothesize that this is because the audio model is powerful enough to remember the subtle clues in the query vector, which hinder the generalization to a new sound and query. In contrast, the proposed architecture first predicts over-determined masks and then combines them on the basis of the query vector, which avoids the overfitting problem due to the simple fusion step.
D PERMUTATION INVARIANT TRAINING
Figure 8 illustrates the permutation invariant training (PIT) model (Yu et al., 2017). The permutation invariant loss is defined as follows for $n = 2$:

$$\mathcal{L}_{\text{PIT}} = \min\left( \mathrm{WBCE}(M_1, \hat{M}_1) + \mathrm{WBCE}(M_2, \hat{M}_2),\; \mathrm{WBCE}(M_1, \hat{M}_2) + \mathrm{WBCE}(M_2, \hat{M}_1) \right), \quad (7)$$

where $\hat{M}_1$ and $\hat{M}_2$ are the predicted masks. Note that the PIT model requires an additional post-selection step to obtain the target sound.
E QUALITATIVE EXAMPLE RESULTS
We show in Figures 12 to 15 some example results. More results and audio samples can be found at https://sony.github.io/CLIPSep/.
F SUBJECTIVE EVALUATION
We conduct a subjective test to evaluate whether the SDR results align with perceptual quality. As done in Sound of Pixels (Zhao et al., 2018), separated audio samples are randomly presented to evaluators, and the following question is asked: “Which sound do you hear? 1. A, 2. B, 3. Both, or 4. None of them”, where A and B are replaced by the labels of the mixture sources, e.g., A = accordion, B = engine accelerating. Ten samples (including naturally occurring mixtures) are evaluated for each model, and 16 evaluators participated in the evaluation. Table 5 shows the percentage of samples for which the target sound class is correctly identified (Correct), for which the wrong sound source is identified (Wrong), for which both sounds are judged audible (Both), and for which neither sound is judged audible (None). The results indicate that the evaluators more often choose the correct sound source for CLIPSep-NIT (83.8%) than for CLIPSep (66.3%) with text queries. Notably, CLIPSep-NIT with text queries obtained a higher correct score than with image queries, which match the training mode. This is probably because image queries often contain information about backgrounds and environments; hence, some noise and off-screen sounds are also suggested by the image queries and leak into the query head. In contrast, text queries purely contain information about the target sounds, so the query head extracts the target sounds more aggressively.
G ROBUSTNESS TO DIFFERENT QUERIES
To examine the model’s robustness to different queries, we take the same input mixture and query the model with different text queries. We use the CLIPSep-NIT model on the MUSIC+ dataset and
report in Figure 16 the results. We see that the model is robust to different text queries and can extract the desired sounds. Audio samples can be found at https://sony.github.io/CLIPSep/.
H FINETUNING EXPERIMENTS ON THE ESC-50 DATASET
In this experiment, we aim to examine the possibilities of having a clean dataset for further finetuning. We consider the ESC-50 dataset (Piczak, 2015), a collection of 2,000 high-quality environmental audio recordings, as the clean dataset here.6 We report the experimental results in Table 6. We can see that the model pretrained on VGGSound does not generalize well to the ESC-50 dataset as the ESC-50 contains much cleaner sounds, i.e., without query-irrelevant sounds and background noise. Further, if we train the CLIPSep model from scratch on the ESC-50 dataset, it can only achieve a mean SDR of 5.18 dB and a median SDR of 5.09 dB. However, if we take the model pretrained on the VGGSound dataset and finetune it on the ESC-50 dataset, it can achieve a mean SDR of 6.73 dB and a median SDR of 4.89 dB, resulting in an improvement of 1.55 dB on the mean SDR.
I TRAINING BEHAVIORS
We present in Figure 9 the training and validation losses along the training progress. Please note that we only show the results obtained using text queries for reference but do not use them for choosing the best model. We also evaluate the intermediate checkpoints every 10,000 steps and present in Figure 10 the test SDR along the training progress. In addition, for the CLIPSep-NIT model, we visualize in Figure 11 the total mean noise head activation, $\sum_{i=1}^{n} \operatorname{mean}(\hat{M}^N_i)$, along the training progress. We can see that the total mean noise head activation stays around the desired level for γ = 0.1, 0.25. For γ = 0.5 and the unregularized version, the total mean noise head activation converges to a similar value around 0.55.
6https://github.com/karolpiczak/ESC-50 | 1. What is the focus and contribution of the paper regarding semantic correspondence?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the questions raised by the reviewer regarding the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Summary
This paper proposes a text-queried universal sound separation model that can be trained on noisy in-the-wild videos (i.e. videos that contain both on-screen and off-screen sounds). Two versions are proposed: CLIPSep and CLIPSep-NIT (CLIPSep with noise invariant training).
CLIPSep: during training, mix audio from two videos. Extract the CLIP embedding of an image frame; from the spectrogram of the audio mixture, predict k masks; predict a k-dim query vector q_i from the CLIP embedding; predict overall mask for source i using query vector q_i to combine across the k masks, with an additional k-dimensional scaling weight w_i and scalar bias b_i; audio is reconstructed using inverse STFT on masked STFT. Training loss is weighted binary cross-entropy between estimated mask and ground-truth mask (so training requires isolated source audio from on-screen-only video). During inference, CLIP embedding is computed from text (assuming this will be close to CLIP embedding of image), and just one mask is predicted for the source described by the text.
CLIPSep-NIT: same as CLIPSep, except that for each of the n sources during training, an additional "noise" mask is predicted, which is an additional query vector that combines the k predicted masks with a noise query vector. Then during training, all permutations of the noise masks added to the source masks are considered, and the permutation with the minimum error is used. It seems the purpose of the noise masks is to "soak up" sounds not related to the CLIP embedding. At test time, the noise masks are discarded.
Contributions
First text-driven separation model (to my knowledge) that can be trained on noisy videos, enabled by the NIT trick.
NIT is a contribution, though I feel its novelty is relatively minor, since it's just a constrained version of permutation invariant training (PIT).
Strengths And Weaknesses
Strengths
To my knowledge, this is the first method to train text-queried separation on noisy mixtures.
The evaluation is done on both MUSIC+ and VGGSound-Clean+, measuring performance on both music separation and universal separation, and these results are convincing.
Paper includes link to anonymized demo page, which is convincing.
Weaknesses
I think the paper makes the post-selection step required for a MixIT model seem harder than it actually is. For a MixIT-trained model with N outputs, it's pretty easy to pick a source, e.g. with a sound classification network. This setup was actually proposed with a classification-regularized loss in: Wisdom, Scott, Aren Jansen, Ron J. Weiss, Hakan Erdogan, and John R. Hershey. "Sparse, efficient, and semantic mixture invariant training: Taming in-the-wild unsupervised sound separation." In 2021 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pp. 51-55. IEEE, 2021. (https://arxiv.org/pdf/2106.00847.pdf) Another advantage of MixIT is that the outputs are more interpretable, compared to models that rely on conditioning, such as the one described in this paper. Thus, I think it may be good to discuss the pros and cons of separate-then-select versus conditional separation in the paper.
This statement is a bit incorrect:
"However, AudioScope still requires a post-selection process if there is more than one predicted on-screen channel."
The goal of AudioScope is to recover all on-screen sounds in a single channel, which is what the model does: it uses on-screen probabilities as mixing weights across the sources.
"where s1, . . . , sn are the n audio sources,": in practice, these are mixtures, right? The model is just assuming that they are single sources. it might be good to refine the terminology here a bit.
Some explanation of why k masks are predicted, then combined, would be good. I think this is kind of analogous to the multiple output sources in MixIT, which can be combined for a particular user interface or output goal, e.g. AudioScope combines with on-screen probabilities to get an estimate of on-screen sound.
The equation for computing the overall source mask from the k masks is confusing. What does the \odot versus the \cdot mean? If w_i is k-dimensional, I don't see a sum over k, since it's \odot'ed with scalar q_{ij} times \tilde{M}_j. Should this actually be w_{i,j}? Please specify how this is done.
The model uses mask-based losses, which, in my own experience, are often suboptimal compared to signal-based losses (i.e., computing the loss in the time domain and backpropagating through the iSTFT applied to the masked mixture STFT). Also, in the NIT loss, adding masks together and applying a ceiling of 1 does not exactly correspond to adding signals in the time domain, because of STFT consistency. It would be interesting to try time-domain losses for this network and see if that provides any improvement. Also, the architecture in the MixIT paper used mixture consistency, so that output sources sum up to the original input mixture. This might also be a useful constraint on the architecture here.
I think best practice for reporting units in decibels is to use only one decimal place. Humans can often not even hear 0.1 dB of difference. Thanks, by the way, for reporting std dev from the mean and median.
More explanation of the motivation of NIT would be very welcome. My intuition is that it helps "soak up" extra noise by providing additional output sources, but this might not be right. Please add some explicit discussion of the motivation.
Typos and minor comments
a. "For eaxmple," -> "For example,"
Clarity, Quality, Novelty And Reproducibility
Clarity: the paper is very clear. I only have minor suggestions for improvement (see weaknesses)
Quality: high quality. Evaluation is solid and compares to relevant baselines. Some nice additional information is provided in the appendices.
Novelty: paper is novel, in that it proposes a text-driven separation method that can be trained on noisy data, and minor novelty in the noise invariant training.
Reproducibility: the code and models are made available. |
ICLR | Title
CLIPSep: Learning Text-queried Sound Separation with Noisy Unlabeled Videos
Abstract
Recent years have seen progress beyond domain-specific sound separation for speech or music towards universal sound separation for arbitrary sounds. Prior work on universal sound separation has investigated separating a target sound out of an audio mixture given a text query. Such text-queried sound separation systems provide a natural and scalable interface for specifying arbitrary target sounds. However, supervised text-queried sound separation systems require costly labeled audio-text pairs for training. Moreover, the audio provided in existing datasets is often recorded in a controlled environment, causing a considerable generalization gap to noisy audio in the wild. In this work, we aim to approach text-queried universal sound separation by using only unlabeled data. We propose to leverage the visual modality as a bridge to learn the desired audio-textual correspondence. The proposed CLIPSep model first encodes the input query into a query vector using the contrastive language-image pretraining (CLIP) model, and the query vector is then used to condition an audio separation model to separate out the target sound. While the model is trained on image-audio pairs extracted from unlabeled videos, at test time we can instead query the model with text inputs in a zero-shot setting, thanks to the joint language-image embedding learned by the CLIP model. Further, videos in the wild often contain off-screen sounds and background noise that may hinder the model from learning the desired audio-textual correspondence. To address this problem, we further propose an approach called noise invariant training for training a query-based sound separation model on noisy data. Experimental results show that the proposed models successfully learn text-queried universal sound separation using only noisy unlabeled videos, even achieving competitive performance against a supervised model in some settings.
1 INTRODUCTION
Humans can focus on a specific sound in the environment and describe it using language. Such abilities are learned using multiple modalities—auditory for selective listening, vision for learning the concepts of sounding objects, and language for describing the objects or scenes for communication. In machine listening, selective listening is often cast as the problem of sound separation, which aims to separate sound sources from an audio mixture (Cherry, 1953; Bach & Jordan, 2005). While text queries offer a natural interface for humans to specify the target sound to separate from a mixture (Liu et al., 2022; Kilgour et al., 2022), training a text-queried sound separation model in a supervised manner requires labeled audio-text paired data of single-source recordings of a vast number of sound types, which can be costly to acquire. Moreover, such isolated sounds are often recorded in controlled environments and have a considerable domain gap to recordings in the wild, which usually contain arbitrary noise and reverberations. In contrast, humans often leverage the visual modality to assist learning the sounds of various objects (Baillargeon, 2002). For instance, by observing a dog barking, a human can associate the sound with the dog, and can separately learn that the animal is called a “dog.” Further, such learning is possible even if the sound is observed in a noisy environment, e.g.,
∗Work done during an internship at Sony Group Corporation †Corresponding author
when a car is passing by or someone is talking nearby, where humans can still associate the barking sound solely with the dog. Prior work in psychophysics also suggests the intertwined cognition of vision and hearing (Sekuler et al., 1997; Shimojo & Shams, 2001; Rahne et al., 2007).
Motivated by this observation, we aim to tackle text-queried sound separation using only unlabeled videos in the wild. We propose a text-queried sound separation model called CLIPSep that leverages abundant unlabeled video data resources by utilizing the contrastive image-language pretraining (CLIP) (Radford et al., 2021) model to bridge the audio and text modalities. As illustrated in Figure 1, during training, the image feature extracted from a video frame by the CLIP-image encoder is used to condition a sound separation model, and the model is trained to separate the sound that corresponds to the image query in a self-supervised setting. Thanks to the properties of the CLIP model, which projects corresponding text and images to close embeddings, at test time we instead use the text feature obtained by the CLIP-text encoder from a text query in a zero-shot setting.
However, such zero-shot modality transfer can be challenging when we use videos in the wild for training as they often contain off-screen sounds and voice overs that can lead to undesired audiovisual associations. To address this problem, we propose the noise invariant training (NIT), where query-based separation heads and permutation invariant separation heads jointly estimate the noisy target sounds. We validate in our experiments that the proposed noise invariant training reduces
the zero-shot modality transfer gap when the model is trained on a noisy dataset, sometimes achieving competitive results against a fully supervised text-queried sound separation system.
Our contributions can be summarized as follows: 1) We propose the first text-queried universal sound separation model that can be trained on unlabeled videos. 2) We propose a new approach called noise invariant training for training a query-based sound separation model on noisy data in the wild. Audio samples can be found on our demo website.1 For reproducibility, all source code, hyperparameters and pretrained models are available at: https://github.com/sony/CLIPSep.
2 RELATED WORK
Universal sound separation Much prior work on sound separation focuses on separating sounds for a specific domain such as speech (Wang & Chen, 2018) or music (Takahashi & Mitsufuji, 2021; Mitsufuji et al., 2021). Recent advances in domain-specific sound separation have led to several attempts to generalize to arbitrary sound classes. Kavalerov et al. (2019) reported successful results on separating arbitrary sounds with a fixed number of sources by adopting the permutation invariant training (PIT) (Yu et al., 2017), which was originally proposed for speech separation. While this approach does not require labeled data for training, a post-selection process is required as we cannot tell what sounds are included in each separated result. Follow-up work (Ochiai et al., 2020; Kong et al., 2020) addressed this issue by conditioning the separation model with a class label to specify the target sound in a supervised setting. However, these approaches still require labeled data for training, and the interface for selecting the target class becomes cumbersome when we need a large number of classes to handle open-domain data. Wisdom et al. (2020) later proposed an unsupervised method called mixture invariant training (MixIT) for learning sound separation on noisy data. MixIT is designed to separate all sources at once and also requires a post-selection process such as using a pre-trained sound classifier (Scott et al., 2021), which requires labeled data for training, to identify the target sounds. We summarize and compare related work in Table 1.
Query-based sound separation Visual information has been used for selecting the target sound in speech (Ephrat et al., 2019; Afouras et al., 2020), music (Zhao et al., 2018; 2019; Tian et al., 2021) and universal sounds (Owens & Efros, 2018; Gao et al., 2018; Rouditchenko et al., 2019). While many image-queried sound separation approaches require clean video data that contains isolated sources, Tzinis et al. (2021) introduced an unsupervised method called AudioScope for separating on-screen sounds using noisy videos based on the MixIT model. While image queries can serve as a
1https://sony.github.io/CLIPSep/
natural interface for specifying the target sound in certain use cases, images of target sounds become unavailable in low-light conditions and for sounds from out-of-screen objects.
Another line of research uses the audio modality to query acoustically similar sounds. Chen et al. (2022) showed that such an approach can generalize to unseen sounds. Later, Gfeller et al. (2021) cropped two disjoint segments from a single recording and used them as a query-target pair to train a sound separation model, assuming both segments contain the same sound source. However, in many cases, it is impractical to prepare a reference audio sample for the desired sound as the query.
Most recently, text-queried sound separation has been studied as it provides a natural and scalable interface for specifying arbitrary target sounds as compared to systems that use a fixed set of class labels. Liu et al. (2022) employed a pretrained language model to encode the text query, and condition the model to separate the corresponding sounds. Kilgour et al. (2022) proposed a model that accepts audio or text queries in a hybrid manner. These approaches, however, require labeled text-audio paired data for training. Different from prior work, our goal is to learn text-queried sound separation for arbitrary sound without labeled data, specifically using unlabeled noisy videos in the wild.
Contrastive language-image-audio pretraining The CLIP model (Radford et al., 2021) has been used as a pretraining of joint embedding spaces among text, image and audio modalities for downstream tasks such as audio classification (Wu et al., 2022; Guzhov et al., 2022) and sound guided image manipulation (Lee et al., 2022). Pretraining is done either in a supervised manner using labels (Guzhov et al., 2022; Lee et al., 2022) or in a self-supervised manner by training an additional audio encoder to map input audio to the pretrained CLIP embedding space (Wu et al., 2022). In contrast, we explore the zero-shot modality transfer capability of the CLIP model by freezing the pre-trained CLIP model and directly optimizing the rest of the model for the target sound separation task.
3 METHOD
3.1 CLIPSEP—LEARNING TEXT-QUERIED SOUND SEPARATION WITHOUT LABELED DATA
In this section, we propose the CLIPSep model for text-queried sound separation without using labeled data. We base the CLIPSep model on Sound-of-Pixels (SOP) (Zhao et al., 2018) and replace the video analysis network of the SOP model. As illustrated in Figure 2, during training, the model takes as inputs an audio mixture x = ∑n i=1 si, where s1, . . . , sn are the n audio tracks, along with their corresponding images y1, . . . ,yn extracted from the videos. We first transform the audio mixture x into a magnitude spectrogram X and pass the spectrogram through an audio U-Net (Ronneberger et al., 2015; Jansson et al., 2017) to produce k (≥ n) intermediate masks M̃1, . . . , M̃k. On the other stream, each image is encoded by the pretrained CLIP model (Radford et al., 2021) into an embedding ei ∈ R512. The CLIP embedding ei will further be projected to a query vector
qi ∈ Rk by a projection layer, which is expected to extract only audio-relevant information from ei.2 Finally, the query vector qi will be used to mix the intermediate masks into the final predicted masks M̂i = ∑k j=1 σ ( wijqijM̃j + bi ) , where wi ∈ Rk is a learnable scale vector, bi ∈ R a learnable bias, and σ(·) the sigmoid function. Now, suppose Mi is the ground truth mask for source si. The training objective of the model is the sum of the weighted binary cross entropy losses for each source:
$$\mathcal{L}_{\mathrm{CLIPSep}} = \sum_{i=1}^{n} \mathrm{WBCE}\big(M_i, \hat{M}_i\big) = \sum_{i=1}^{n} X \odot \Big( -M_i \log \hat{M}_i - (1 - M_i) \log\big(1 - \hat{M}_i\big) \Big) . \quad (1)$$
At test time, thanks to the joint image-text embedding offered by the CLIP model, we feed a text query instead of an image to the query model to obtain the query vector and separate the target sounds accordingly (see Appendix A for an illustration). As suggested by Radford et al. (2021), we prefix the text query into the form of “a photo of [user input query]” to reduce the generalization gap.3
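To make the query-conditioned mask fusion and the loss in Equation (1) concrete, below is a minimal PyTorch sketch. It is our own illustration rather than the released implementation: the module and function names are invented, the shapes follow the description above (CLIP embedding dimension 512, $k$ intermediate masks), the sigmoid placement follows the equation as printed, and the reduction over time-frequency bins in the loss is our choice.

```python
import torch
import torch.nn as nn

class QueryMaskFusion(nn.Module):
    """Project a CLIP embedding to a query vector and mix the k intermediate masks."""
    def __init__(self, clip_dim: int = 512, k: int = 32):
        super().__init__()
        self.proj = nn.Linear(clip_dim, k)     # projection layer e_i -> q_i
        self.w = nn.Parameter(torch.ones(k))   # learnable scale vector w_i
        self.b = nn.Parameter(torch.zeros(1))  # learnable bias b_i

    def forward(self, clip_emb: torch.Tensor, inter_masks: torch.Tensor) -> torch.Tensor:
        # clip_emb: (B, 512) mean CLIP embedding of the query (image frames or text)
        # inter_masks: (B, k, F, T) intermediate masks from the audio U-Net
        q = self.proj(clip_emb)                   # (B, k)
        scaled = (self.w * q)[:, :, None, None]   # broadcast over the (F, T) dimensions
        # Per-mask sigmoid followed by a sum over the k masks, as written in the text.
        return torch.sigmoid(scaled * inter_masks + self.b).sum(dim=1)

def weighted_bce(mix_mag: torch.Tensor, gt_mask: torch.Tensor,
                 pred_mask: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """WBCE of Eq. (1): binary cross entropy weighted by the mixture magnitude X."""
    pred = pred_mask.clamp(eps, 1.0 - eps)  # clamp for numerical safety
    bce = -(gt_mask * torch.log(pred) + (1.0 - gt_mask) * torch.log(1.0 - pred))
    return (mix_mag * bce).mean()           # averaged over time-frequency bins (our choice)
```

The CLIPSep objective is then the sum of such `weighted_bce` terms over the $n$ sources.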
3.2 NOISE INVARIANT TRAINING—HANDLING NOISY DATA IN THE WILD
While the CLIPSep model can separate sounds given image or text queries, it assumes that the sources are clean and contain few query-irrelevant sounds. However, this assumption does not hold for videos in the wild as many of them contain out-of-screen sounds and various background noises. Inspired by the mixture invariant training (MixIT) proposed by Wisdom et al. (2020), we further propose the noise invariant training (NIT) to tackle the challenge of training with noisy data. As illustrated in Figure 3, we introduce $n$ additional permutation invariant heads called noise heads to the CLIPSep model, where the masks predicted by these heads are interchangeable during loss computation. Specifically, we introduce $n$ additional projection layers, each of which takes as input the sum of all query vectors produced by the query heads (i.e., $\sum_{i=1}^{n} q_i$) and produces a vector that is later used to mix the intermediate masks into the predicted noise mask. In principle, the query masks produced by the query vectors are expected to extract query-relevant sounds due to their stronger correlations to their corresponding queries, while the interchangeable noise masks should ‘soak up’ other sounds.
2We extract three frames with 1-sec intervals and compute their mean CLIP embedding as the input to the projection layer to reduce the negative effects when the selected frame does not contain the objects of interest.
3Similar to how we prepare the image queries, we create four queries from the input text query using four query templates (see Appendix B) and take their mean CLIP embedding as the input to the projection layer.
Mathematically, let $\hat{M}^{Q}_{1}, \ldots, \hat{M}^{Q}_{n}$ be the predicted query masks and $\hat{M}^{N}_{1}, \ldots, \hat{M}^{N}_{n}$ be the predicted noise masks. Then, the noise invariant loss is defined as:

$$\mathcal{L}_{\mathrm{NIT}} = \min_{(j_1, \ldots, j_n) \in \Sigma_n} \sum_{i=1}^{n} \mathrm{WBCE}\Big(M_i, \min\big(1, \hat{M}^{Q}_{i} + \hat{M}^{N}_{j_i}\big)\Big) , \quad (2)$$
where $\Sigma_n$ denotes the set of all permutations of $\{1, \ldots, n\}$.4 Take $n = 2$ for example.5 We consider the two possible ways for combining the query heads and the noise heads:

$$\text{(Arrangement 1)} \quad \hat{M}_1 = \min\big(1, \hat{M}^{Q}_{1} + \hat{M}^{N}_{1}\big) , \quad \hat{M}_2 = \min\big(1, \hat{M}^{Q}_{2} + \hat{M}^{N}_{2}\big) , \quad (3)$$

$$\text{(Arrangement 2)} \quad \hat{M}'_1 = \min\big(1, \hat{M}^{Q}_{1} + \hat{M}^{N}_{2}\big) , \quad \hat{M}'_2 = \min\big(1, \hat{M}^{Q}_{2} + \hat{M}^{N}_{1}\big) . \quad (4)$$

Then, the noise invariant loss is defined as the smallest loss achievable:

$$\mathcal{L}^{(2)}_{\mathrm{NIT}} = \min\Big( \mathrm{WBCE}\big(M_1, \hat{M}_1\big) + \mathrm{WBCE}\big(M_2, \hat{M}_2\big) ,\; \mathrm{WBCE}\big(M_1, \hat{M}'_1\big) + \mathrm{WBCE}\big(M_2, \hat{M}'_2\big) \Big) . \quad (5)$$
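A minimal sketch of the noise invariant loss in Equation (2) for a general $n$ (the paper uses $n = 2$) is given below, reusing the `weighted_bce` helper sketched in Section 3.1; the function and variable names are our own.

```python
from itertools import permutations
import torch

def noise_invariant_loss(mix_mag, gt_masks, query_masks, noise_masks):
    """gt_masks, query_masks, noise_masks: lists of n tensors of shape (B, F, T)."""
    n = len(gt_masks)
    best = None
    for perm in permutations(range(n)):  # only O(n!) arrangements of the noise heads
        loss = sum(
            weighted_bce(mix_mag, gt_masks[i],
                         torch.clamp(query_masks[i] + noise_masks[j], max=1.0))
            for i, j in enumerate(perm)
        )
        best = loss if best is None else torch.minimum(best, loss)
    return best
```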
Once the model is trained, we discard the noise heads and use only the query heads for inference (see Appendix A for an illustration). Unlike the MixIT model (Wisdom et al., 2020), our proposed noise invariant training still allows us to specify the target sound by an input query, and it does not require any post-selection process as we only use the query heads during inference.
In practice, we find that the model tends to assign part of the target sounds to the noise heads as these heads can freely enjoy the optimal permutation to minimize the loss. Hence, we further introduce a regularization term to penalize producing high activations on the noise masks:
$$\mathcal{L}_{\mathrm{REG}} = \max\Big(0,\; \sum_{i=1}^{n} \mathrm{mean}\big(\hat{M}^{N}_{i}\big) - \gamma\Big) , \quad (6)$$
where $\gamma \in [0, n]$ is a hyperparameter that we will refer to as the noise regularization level. The proposed regularization has no effect when the sum of the means of all the noise masks is lower than the predefined threshold $\gamma$, while the penalty grows linearly when the sum is higher than $\gamma$. Finally, the training objective of the CLIPSep-NIT model is a weighted sum of the noise invariant loss and the regularization term: $\mathcal{L}_{\text{CLIPSep-NIT}} = \mathcal{L}_{\mathrm{NIT}} + \lambda \mathcal{L}_{\mathrm{REG}}$, where $\lambda \in \mathbb{R}$ is a weight hyperparameter. We set $\lambda = 0.1$ for all experiments, which we find works well across different settings.
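The regularizer in Equation (6) and the full CLIPSep-NIT objective can be sketched on top of the function above; γ = 0.25 and λ = 0.1 follow the values used in the paper, and the rest is again our own simplified illustration.

```python
import torch

def noise_regularizer(noise_masks, gamma: float = 0.25) -> torch.Tensor:
    """Penalize the total mean noise-head activation only above the threshold gamma."""
    total = sum(m.mean() for m in noise_masks)
    return torch.clamp(total - gamma, min=0.0)

def clipsep_nit_loss(mix_mag, gt_masks, query_masks, noise_masks,
                     gamma: float = 0.25, lam: float = 0.1) -> torch.Tensor:
    """L_CLIPSep-NIT = L_NIT + lambda * L_REG."""
    return (noise_invariant_loss(mix_mag, gt_masks, query_masks, noise_masks)
            + lam * noise_regularizer(noise_masks, gamma))
```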
4We note that CLIPSep-NIT considers $2n$ sources in total as the model has $n$ query heads and $n$ noise heads. While PIT (Yu et al., 2017) and MixIT (Wisdom et al., 2020) respectively require $O((2n)!)$ and $O(2^{2n})$ searches to consider $2n$ sources, the proposed NIT only requires $O(n!)$ permutations in the loss computation.
5Since our goal is not to further separate the noise into individual sources but to separate the sounds that correspond to the query, n may not need to be large. In practice, we find that the CLIPSep-NIT model with n = 2 already learns to handle the noise properly and can successfully transfer to the text-queried mode. Thus, we use n = 2 throughout this paper and leave the testing on larger n as future work.
4 EXPERIMENTS
We base our implementations on the code provided by Zhao et al. (2018) (https://github.com/hangzhaomit/Sound-of-Pixels). Implementation details can be found in Appendix C.
4.1 EXPERIMENTS ON CLEAN DATA
We first evaluate the proposed CLIPSep model without the noise invariant training on the musical instrument sound separation task using the MUSIC dataset, as done in Zhao et al. (2018). This experiment is designed to focus on evaluating the quality of the learned query vectors and the zero-shot modality transferability of the CLIPSep model on a small, clean dataset rather than showing its ability to separate arbitrary sounds. The MUSIC dataset is a collection of 536 video recordings of people playing a musical instrument out of 11 instrument classes. Since no existing work has trained a text-queried sound separation model using only unlabeled data to our knowledge, we compare the proposed CLIPSep model with two baselines that serve as upper bounds—the PIT model (Yu et al., 2017, see Appendix D for an illustration) and a version of the CLIPSep model where the query model is replaced by learnable embeddings for the labels, which we will refer to as the LabelSep model. In addition, we also include the SOP model (Zhao et al., 2018) to investigate the quality of the query vectors, as the CLIPSep and SOP models share the same network architecture except for the query model.
We report the results in Table 2. Our proposed CLIPSep model achieves a mean signal-to-distortion ratio (SDR) (Vincent et al., 2006) of 5.49 dB and a median SDR of 4.97 dB using text queries in a zero-shot modality transfer setting. When using image queries, the performance of the CLIPSep model is comparable to that of the SOP model. This indicates that the CLIP embeddings are as informative as those produced by the SOP model. The performance difference between the CLIPSep model using text and image queries at test time indicates the zero-shot modality transfer gap. We observe 1.54 dB and 0.88 dB differences on the mean and median SDRs, respectively. Moreover,
we also report in Table 2 and Figure 4 the performance of the CLIPSep models trained on different modalities to investigate their modality transferability in different settings. We notice that when we train the CLIPSep model using text queries, dubbed as CLIPSep-Text, the mean SDR using text queries increases to 7.91 dB. However, when we test this model using image queries, we observe a 1.66 dB difference on the mean SDR as compared to that using text queries, which is close to
the mean SDR difference we observe for the model trained with image queries. Finally, we train a CLIPSep model using both text and image queries in alternation, dubbed as CLIPSep-Hybrid. We see that it leads to the best test performance for both text and image modalities, and there is only a mean SDR difference of 0.30 dB between using text and image queries. As a reference, the LabelSep model trained with labeled data performs worse than the CLIPSep-Hybrid model using text queries. Further, the PIT model achieves a mean SDR of 8.68 dB and a median SDR of 7.67 dB, but it requires post-processing to figure out the correct assignments.
4.2 EXPERIMENTS ON NOISY DATA
Next, we evaluate the proposed method on a large-scale dataset aiming at universal sound separation. We use the VGGSound dataset (Chen et al., 2020), a large-scale audio-visual dataset containing more than 190,000 10-second videos in the wild out of more than 300 classes. We find that the audio in the VGGSound dataset is often noisy and contains off-screen sounds and background noise. Although we train the models on such noisy data, it is not suitable to use the noisy data as targets for evaluation because it fails to provide reliable results. For example, if the target sound labeled as “dog barking” also contains human speech, separating only the dog barking sound provides a lower SDR value than separating the mixture of dog barking sound and human speech even though the text query is “dog barking”. (Note that we use the labels only for evaluation but not for training.) To avoid this issue, we consider the following two evaluation settings:
• MUSIC+: Samples in the MUSIC dataset are used as clean targets and mixed with a sample in the VGGSound dataset as an interference. The separation quality is evaluated on the clean target from the MUSIC dataset. As we do not use the MUSIC dataset for training, this can be considered as zero-shot transfer to a new data domain containing unseen sounds (Radford et al., 2019; Brown et al., 2020). To avoid the unexpected overlap of the target sound types in the MUSIC and VGGSound datasets caused by the label mismatch, we exclude all the musical instrument playing videos from the VGGSound dataset in this setting.
• VGGSound-Clean+: We manually collect 100 clean samples that contain distinct target sounds from the VGGSound test set, which we will refer to as VGGSound-Clean. We mix an audio sample in VGGSound-Clean with another in the test set of VGGSound. Similarly, we consider the VGGSound audio as an interference sound added to the relatively cleaner VGGSound-Clean audio and evaluate the separation quality on the VGGSound-Clean stem.
Table 3 shows the evaluation results. First, CLIPSep successfully learns text-queried sound separation even with noisy unlabeled data, achieving 5.22 dB and 3.53 dB SDR improvements over the mixture on MUSIC+ and VGGSound-Clean+, respectively. By comparing CLIPSep and CLIPSep-NIT, we observe that NIT improves the mean SDRs in both settings. Moreover, on MUSIC+, CLIPSep-NIT's performance matches that of CLIPSep-Text, which utilizes labels for training, achieving only a 0.46 dB lower mean SDR and even a 0.05 dB higher median SDR. This result suggests that the proposed self-supervised text-queried sound separation method can learn separation capability competitive with the fully supervised model in some target sounds. In contrast, there is still a gap between them on VGGSound-Clean+, possibly because the videos of non-music-instrument objects are more noisy in both audio and visual domains, thus resulting in a more challenging zero-shot modality transfer. This hypothesis is also supported by the higher zero-shot modality transfer gap (mean SDR difference of image- and text-queried mode) of 1.79 dB on VGGSound-Clean+ than that of 1.01 dB on MUSIC+ for CLIPSep-NIT. In addition, we consider another baseline model that replaces the CLIP model in CLIPSep with a BERT encoder (Devlin et al., 2019), which we call BERTSep. Interestingly, although BERTSep performs similarly to CLIPSep-Text on VGGSound-Clean+, the performance of BERTSep is significantly lower than that of CLIPSep-Text on MUSIC+, indicating that BERTSep fails to generalize to unseen text queries. We hypothesize that the CLIP text embedding captures the timbral similarity of musical instruments better than the BERT embedding does, because the CLIP model is aware of the visual similarity between musical instruments during training. Moreover, it is interesting to see that CLIPSep outperforms CLIPSep-NIT when an image query is used at test time (domain-matched condition), possibly because images contain richer context information, such as nearby objects and backgrounds, than labels, and the models can use such information to better separate the target sound. While CLIPSep has to fully utilize such information, CLIPSep-NIT can use the noise heads to model sounds that are less relevant to the image query. Since we remove the noise heads from CLIPSep-NIT during the evaluation, it can rely less on such information from the image, thus improving the zero-shot modality transferability. Figure 5 shows an example of the separation results on MUSIC+ (see Figures 12 to 15 for more examples). We observe that the two noise heads contain mostly background noise. Audio samples can be found on our demo website.1
4.3 EXAMINING THE EFFECTS OF THE NOISE REGULARIZATION LEVEL γ
In this experiment, we examine the effects of the noise regularization level $\gamma$ in Equation (6) by changing the value from 0 to 1. As we can see from Figure 6 (a) and (b), CLIPSep-NIT with $\gamma = 0.25$ achieves the highest SDR on both evaluation settings. This suggests that the optimal $\gamma$ value is not sensitive to the evaluation dataset. Further, we also report in Figure 6 (c) the total mean noise head activation, $\sum_{i=1}^{n} \mathrm{mean}\big(\hat{M}^{N}_{i}\big)$, on the validation set. As $\hat{M}^{N}_{i}$ is the mask estimate for the noise, the total mean noise head activation value indicates to what extent signals are assigned to the noise head. We observe that the proposed regularizer successfully keeps the total mean noise head activation close to the desired level, $\gamma$, for $\gamma \leq 0.5$. Interestingly, the total mean noise head activation is still around 0.5 when $\gamma = 1.0$, suggesting that the model inherently tries to use both the query-heads and the noise heads to predict the noisy target sounds. Moreover, while we discard the noise heads during evaluation in our experiments, keeping the noise heads can lead to a higher SDR as shown in
Figure 6 (a) and (b), which can be helpful in certain use cases where a post-processing procedure similar to the PIT model (Yu et al., 2017) is acceptable.
5 DISCUSSIONS
For the experiments presented in this paper, we work on labeled datasets so that we can evaluate the performance of the proposed models. However, our proposed models do not require any labeled data for training, and can thus be trained on larger unlabeled video collections in the wild. Moreover, we observe that the proposed model shows the capability of combining multiple queries, e.g., “a photo of [query A] and [query B],” to extract multiple target sounds, and we report the results on the demo website. This offers a more natural user interface compared to having to separate each target sound and mix them via an additional post-processing step. We also show in Appendix G that our proposed model is robust to different text queries and can extract the desired sounds.
In our experiments, we often observe a modality transfer gap greater than 1 dB difference of SDR. A future research direction is to explore different approaches to reduce the modality transfer gap. For example, the CLIP model is pretrained on a different dataset, and thus finetuning the CLIP model on the target dataset can help improve the underlying modality transferability within the CLIP model. Further, while the proposed noise invariant training is shown to improve the training on noisy data and reduce the modality transfer gap, it still requires sufficient audio-visual correspondence in the training videos. In other words, if the audio and images are irrelevant in most videos, the model will struggle to learn the correspondence between the query and target sound. In practice, we find that the data in the VGGSound dataset often contains off-screen sounds and the labels sometimes correspond to only part of the video content. Hence, filtering the training data to enhance its audio-visual correspondence can also help reduce the modality transfer gap. This can be achieved by self-supervised audio-visual correspondence prediction (Arandjelović & Zisserman, 2017a;b) or temporal synchronization (Korbar et al., 2018; Owens & Efros, 2018).
Another future direction is to explore the semi-supervised setting where a small subset of labeled data can be used to improve the modality transferability. We can also consider the proposed method as a pretraining on unlabeled data for other separation tasks in the low-resource regime. We include in Appendix H a preliminary experiment in this aspect using the ESC-50 dataset (Piczak, 2015).
6 CONCLUSION
In this work, we have presented a novel text-queried universal sound separation model that can be trained on noisy unlabeled videos. To this end, we have proposed to use the contrastive image-language pretraining to bridge the audio and text modalities, and proposed the noise invariant training for training a query-based sound separation model on noisy data. We have shown that the proposed models can learn to separate an arbitrary sound specified by a text query out of a mixture, even achieving competitive performance against a fully supervised model in some settings. We believe our proposed approach closes the gap between the ways humans and machines learn to focus on a sound in a mixture, namely, the multi-modal self-supervised learning paradigm of humans against the supervised learning paradigm adopted by existing label-based machine learning approaches.
ACKNOWLEDGEMENTS
We would like to thank Stefan Uhlich, Giorgio Fabbro and Woosung Choi for their helpful comments during the preparation of this manuscript. We also thank Mayank Kumar Singh for supporting the setup of the subjective test in Appendix F. Hao-Wen thanks the J. Yang and Family Foundation and the Taiwan Ministry of Education for supporting his PhD study.
B QUERY ENSEMBLING
Radford et al. (2021) suggest that using a prompt template in the form of “a photo of [user input query]” helps bridge the distribution gap between text queries used for zero-shot image classification and text in the training dataset for the CLIP model. They further show that an ensemble of various prompt templates improves generalizability. Motivated by this observation, we adopt a similar idea and use several query templates at test time (see Table 4). These query templates are heuristically chosen to handle the noisy images extracted from videos.
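For illustration, the sketch below builds the ensembled text-query embedding by averaging the CLIP text embeddings of several templates. The template strings are examples in the spirit of Table 4 (the exact templates used in the paper may differ), and the snippet assumes OpenAI's reference `clip` package.

```python
import torch
import clip  # OpenAI CLIP reference implementation (assumed installed)

# Example templates in the spirit of Table 4 (illustrative, not the exact list).
TEMPLATES = [
    "a photo of {}",
    "a photo of the {}",
    "an image of {}",
    "{}",
]

def text_query_embedding(query: str, model, device: str = "cpu") -> torch.Tensor:
    texts = [t.format(query) for t in TEMPLATES]
    tokens = clip.tokenize(texts).to(device)
    with torch.no_grad():
        embs = model.encode_text(tokens)  # (len(TEMPLATES), 512)
    return embs.float().mean(dim=0)       # mean embedding fed to the projection layer

# Usage (hypothetical):
# model, _ = clip.load("ViT-B/32", device="cpu")
# q = text_query_embedding("violin", model)
```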
C IMPLEMENTATION DETAILS
We implement the audio model as a 7-layer U-Net (Ronneberger et al., 2015). We use k = 32. We use binary masks as the ground truth masks during training while using the raw, real-valued masks for evaluation. We train all the models for 200,000 steps with a batch size of 32. We use the Adam optimizer (Kingma & Ba, 2015) with β1 = 0.9, β2 = 0.999 and ϵ = 10−8. In addition, we clip the norm of the gradients to 1.0 (Zhang et al., 2020). We adopt the following learning rate schedule with a warm-up—the learning rate starts from 0 and grows to 0.001 after 5,000 steps, and then it linearly drops to 0.0001 at 100,000 steps and keeps this value thereafter. We validate the model every 10,000 steps using image queries as we do not assume labeled data is available for the validation set. We use a sampling rate of 16,000 Hz and work on audio clips of length 65,535 samples (≈ 4 seconds). During training, we randomly sample a center frame from a video and extract three frames (images) with 1-sec intervals and 4-sec audio around the center frame. During inference, for image-queried models, we extract three frames with 1-sec intervals around the center of the test clip. For the spectrogram computation, we use a filter length of 1024, a hop length of 256 and a window size of 1024 in the short-time Fourier transform (STFT). We resize images extracted from video to a size of 224-by-224 pixels. For the CLIPSep-Hybrid model, we alternatively train the model with text and image queries, i.e., one batch with all image queries and next with all text queries, and so on. We implement all the models using the PyTorch library (Paszke et al., 2019). We compute the signal-to-distortion ratio (SDR) using museval (Stöter et al., 2018).
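The warm-up learning-rate schedule described above can be written as a simple piecewise function of the training step; the numbers mirror the text and the function itself is our own sketch.

```python
def learning_rate(step: int) -> float:
    """Warm-up to 1e-3 over 5k steps, linear decay to 1e-4 at 100k steps, then constant."""
    if step < 5_000:
        return 1e-3 * step / 5_000
    if step < 100_000:
        frac = (step - 5_000) / 95_000
        return 1e-3 + frac * (1e-4 - 1e-3)
    return 1e-4
```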
In our preliminary experiments, we also tried directly predicting the final mask by conditioning the audio model on the query vector. We applied this modification to both the SOP and CLIPSep models; however, we observed that this architecture is prone to overfitting. We hypothesize that this is because the audio model is powerful enough to remember the subtle clues in the query vector, which hinders generalization to new sounds and queries. In contrast, the proposed architecture first predicts over-determined masks and then combines them on the basis of the query vector, which avoids the overfitting problem due to the simple fusion step.
D PERMUTATION INVARIANT TRAINING
Figure 8 illustrates the permutation invariant training (PIT) model (Yu et al., 2017). The permutation invariant loss is defined as follows for $n = 2$:
$$\mathcal{L}_{\mathrm{PIT}} = \min\Big( \mathrm{WBCE}\big(M_1, \hat{M}_1\big) + \mathrm{WBCE}\big(M_2, \hat{M}_2\big) ,\; \mathrm{WBCE}\big(M_1, \hat{M}_2\big) + \mathrm{WBCE}\big(M_2, \hat{M}_1\big) \Big) , \quad (7)$$
where $\hat{M}_1$ and $\hat{M}_2$ are the predicted masks. Note that the PIT model requires an additional post-selection step to obtain the target sound.
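For reference, the two-source permutation invariant loss of Equation (7) can be sketched as follows, again reusing the `weighted_bce` helper from Section 3.1.

```python
import torch

def pit_loss_two_sources(mix_mag, gt1, gt2, pred1, pred2):
    """Permutation invariant loss for n = 2 (Eq. 7): take the better of the two assignments."""
    loss_a = weighted_bce(mix_mag, gt1, pred1) + weighted_bce(mix_mag, gt2, pred2)
    loss_b = weighted_bce(mix_mag, gt1, pred2) + weighted_bce(mix_mag, gt2, pred1)
    return torch.minimum(loss_a, loss_b)
```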
E QUALITATIVE EXAMPLE RESULTS
We show in Figures 12 to 15 some example results. More results and audio samples can be found at https://sony.github.io/CLIPSep/.
F SUBJECTIVE EVALUATION
We conduct a subjective test to evaluate whether the SDR results align with perceptual quality. As done in Sound of Pixels (Zhao et al., 2018), separated audio samples are randomly presented to evaluators, and the following question is asked: “Which sound do you hear? 1. A, 2. B, 3. Both, or 4. None of them”. Here A and B are replaced by the labels of the mixture sources, e.g., A=accordion, B=engine accelerating. Ten samples (including naturally occurring mixtures) are evaluated for each model, and 16 evaluators participated in the evaluation. Table 5 shows the percentages of samples for which the target sound class is correctly identified (Correct), for which the target sound source is incorrectly identified (Wrong), for which both sounds are judged audible (Both), and for which neither sound is judged audible (None). The results indicate that the evaluators more often choose the correct sound source for CLIPSep-NIT (83.8%) than for CLIPSep (66.3%) with text queries. Notably, CLIPSep-NIT with text queries obtained a higher correct score than with image queries, even though the latter match the training modality. This is probably because image queries often contain information about backgrounds and environments; hence, some noise and off-screen sounds are also suggested by the image queries and leak into the query head. In contrast, text queries contain only the information about the target sounds, so the query head extracts the target sounds more aggressively.
G ROBUSTNESS TO DIFFERENT QUERIES
To examine the model’s robustness to different queries, we take the same input mixture and query the model with different text queries. We use the CLIPSep-NIT model on the MUSIC+ dataset and
report in Figure 16 the results. We see that the model is robust to different text queries and can extract the desired sounds. Audio samples can be found at https://sony.github.io/CLIPSep/.
H FINETUNING EXPERIMENTS ON THE ESC-50 DATASET
In this experiment, we aim to examine the possibilities of having a clean dataset for further finetuning. We consider the ESC-50 dataset (Piczak, 2015), a collection of 2,000 high-quality environmental audio recordings, as the clean dataset here.6 We report the experimental results in Table 6. We can see that the model pretrained on VGGSound does not generalize well to the ESC-50 dataset as the ESC-50 contains much cleaner sounds, i.e., without query-irrelevant sounds and background noise. Further, if we train the CLIPSep model from scratch on the ESC-50 dataset, it can only achieve a mean SDR of 5.18 dB and a median SDR of 5.09 dB. However, if we take the model pretrained on the VGGSound dataset and finetune it on the ESC-50 dataset, it can achieve a mean SDR of 6.73 dB and a median SDR of 4.89 dB, resulting in an improvement of 1.55 dB on the mean SDR.
I TRAINING BEHAVIORS
We present in Figure 9 the training and validation losses along the training progress. Please note that we only show the results obtained using text queries for reference but do not use them for choosing the best model. We also evaluate the intermediate checkpoints every 10,000 steps and present in Figure 10 the test SDR along the training progress. In addition, for the CLIPSep-NIT model, we visualize in Figure 11 the total mean noise head activation, $\sum_{i=1}^{n} \mathrm{mean}\big(\hat{M}^{N}_{i}\big)$, along the training progress. We can see that the total mean noise head activation stays around the desired level for $\gamma = 0.1, 0.25$. For $\gamma = 0.5$ and the unregularized version, the total mean noise head activation converges to a similar value around 0.55.
6https://github.com/karolpiczak/ESC-50 | 1. What is the main contribution of the paper regarding audio source separation using text queries?
2. What are the strengths and weaknesses of the proposed approach, particularly in its comparison to prior works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any suggestions for improvements or additional experiments to enhance the paper's findings? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper under review proposes a method of selecting a single sound source from a mixture of sounds in a video via a text description of the visual component of the video. The system can be trained on unlabeled data, aka unsupervised training. This is a novel configuration of using a pre-trained audio-visual correspondence model to allow text queries to select the single audio source to separate from a mixture in the video. Contrary to what is claimed in the paper in Section 4.1, work was published this year on querying by text to separate a source from an audio mixture (this is understandable given timing). There is also a contribution of a form of noise invariant training that allows the model to account for sounds in the mixture that have no correspondence in the video. The experiments are conducted on test sets, MUSIC and VGGSound-Clean, whose audio is collected from the wild (YouTube); however, the samples have been artificially mixed to yield multiple sound sources. The results are competitive with PIT, although PIT has a "post-processing" requirement.
Strengths And Weaknesses
Strengths:
A new configuration of querying by text to separate out an audio source in a video with sources that have corresponding audio and visual signals.
Shows performance competitive with state-of-the-art in sound separation
Weaknesses
Tests are made by artificially combining samples of YouTube videos. Can you conduct tests on naturally occurring mixtures?
Results report an automatically computed quantitative metric, i.e., SDR. It is unclear how this corresponds to actual user preferences. Since the results are close, could a qualitative survey be conducted comparing the results of PIT with CLIPSep, similar to how it was done in Sound of Pixels using Mechanical Turk?
Clarity, Quality, Novelty And Reproducibility
The clarity and quality are good and the paper is generally well written. It lacks a certain level of final polish to make 1. how it differs from previous, comparable work and 2. the findings absolutely clear. Most of the details can be found in the text, but summaries and figures could make them more obvious. For example, Figure 4 shows mean SDR for image and text inputs at test time, for models trained with different modalities. This would be clearer in a table, i.e.,

| Train modality \ Test modality | Image | Text | Both |
| --- | --- | --- | --- |
| ClipSep (Image) | 7.5 | 5.5 | ? |
| ClipSep (Text) | 6.2 | 8.1 | ? |
| ClipSep (Both) | 8.1 | 8.2 | ? |
#s are approximately estimated from figure 4.
Here one can see how good the model is if the train/test modalities are matched. There's more lost when trained on image and tested on text (unfortunately the main goal of the paper). Using both in train helps significantly. Could you test with both? Would be an interesting result.
The paper is novel in a narrow sense, since the field has a lot of work in audio separation via query and addressing unsupervised separation of audio sources. The unsupervised separation of audio by query is similar to the work in:
Liu et al., Separate What You Describe: Language-Queried Audio Source Separation, Proc Interspeech 2022
text queries are used to select a source to separate in audio-only samples
the paper under review has the addition of a visual modality to improve the correspondence between text and the input modes.
Zhao et al. The Sound of Pixels. ECCV 2018 (cited by paper and base implementation)
unsupervised audio-visual source separation in videos with musicians playing music, selection/query by image
the paper under review adds a text query component to select the source to separate out, and a Noise Invariant Training scheme to cope with (audio) noise sources that have no correspondence in the video. it also focuses on unconstrained sound vs only music in Zhao.
Wisdom et al. Unsupervised Sound Separation Using Mixture Invariant Training
unsupervised audio separation, mixture of mixtures invariant training
doesn't provide a means to select a single source to extract (separates all sources)
The paper uses publicly presented data sources and published github repositories. The paper should be relatively easy to reproduce.
Minor comments
are the masks used in the paper binary or ratio? Zhao mentions that both are possible.
4th line in Conclusion has a typo "language pretraining". |
ICLR | Title
CLIPSep: Learning Text-queried Sound Separation with Noisy Unlabeled Videos
Abstract
Recent years have seen progress beyond domain-specific sound separation for speech or music towards universal sound separation for arbitrary sounds. Prior work on universal sound separation has investigated separating a target sound out of an audio mixture given a text query. Such text-queried sound separation systems provide a natural and scalable interface for specifying arbitrary target sounds. However, supervised text-queried sound separation systems require costly labeled audio-text pairs for training. Moreover, the audio provided in existing datasets is often recorded in a controlled environment, causing a considerable generalization gap to noisy audio in the wild. In this work, we aim to approach text-queried universal sound separation by using only unlabeled data. We propose to leverage the visual modality as a bridge to learn the desired audio-textual correspondence. The proposed CLIPSep model first encodes the input query into a query vector using the contrastive language-image pretraining (CLIP) model, and the query vector is then used to condition an audio separation model to separate out the target sound. While the model is trained on image-audio pairs extracted from unlabeled videos, at test time we can instead query the model with text inputs in a zero-shot setting, thanks to the joint language-image embedding learned by the CLIP model. Further, videos in the wild often contain off-screen sounds and background noise that may hinder the model from learning the desired audio-textual correspondence. To address this problem, we further propose an approach called noise invariant training for training a query-based sound separation model on noisy data. Experimental results show that the proposed models successfully learn text-queried universal sound separation using only noisy unlabeled videos, even achieving competitive performance against a supervised model in some settings.
1 INTRODUCTION
Humans can focus on a specific sound in the environment and describe it using language. Such abilities are learned using multiple modalities—auditory for selective listening, vision for learning the concepts of sounding objects, and language for describing the objects or scenes for communication. In machine listening, selective listening is often cast as the problem of sound separation, which aims to separate sound sources from an audio mixture (Cherry, 1953; Bach & Jordan, 2005). While text queries offer a natural interface for humans to specify the target sound to separate from a mixture (Liu et al., 2022; Kilgour et al., 2022), training a text-queried sound separation model in a supervised manner requires labeled audio-text paired data of single-source recordings of a vast number of sound types, which can be costly to acquire. Moreover, such isolated sounds are often recorded in controlled environments and have a considerable domain gap to recordings in the wild, which usually contain arbitrary noise and reverberations. In contrast, humans often leverage the visual modality to assist learning the sounds of various objects (Baillargeon, 2002). For instance, by observing a dog barking, a human can associate the sound with the dog, and can separately learn that the animal is called a “dog.” Further, such learning is possible even if the sound is observed in a noisy environment, e.g.,
∗Work done during an internship at Sony Group Corporation †Corresponding author
when a car is passing by or someone is talking nearby, where humans can still associate the barking sound solely with the dog. Prior work in psychophysics also suggests the intertwined cognition of vision and hearing (Sekuler et al., 1997; Shimojo & Shams, 2001; Rahne et al., 2007).
Motivated by this observation, we aim to tackle text-queried sound separation using only unlabeled videos in the wild. We propose a text-queried sound separation model called CLIPSep that leverages abundant unlabeled video data resources by utilizing the contrastive image-language pretraining (CLIP) (Radford et al., 2021) model to bridge the audio and text modalities. As illustrated in Figure 1, during training, the image feature extracted from a video frame by the CLIP-image encoder is used to condition a sound separation model, and the model is trained to separate the sound that corresponds to the image query in a self-supervised setting. Thanks to the properties of the CLIP model, which projects corresponding text and images to close embeddings, at test time we instead use the text feature obtained by the CLIP-text encoder from a text query in a zero-shot setting.
However, such zero-shot modality transfer can be challenging when we use videos in the wild for training as they often contain off-screen sounds and voice overs that can lead to undesired audiovisual associations. To address this problem, we propose the noise invariant training (NIT), where query-based separation heads and permutation invariant separation heads jointly estimate the noisy target sounds. We validate in our experiments that the proposed noise invariant training reduces
the zero-shot modality transfer gap when the model is trained on a noisy dataset, sometimes achieving competitive results against a fully supervised text-queried sound separation system.
Our contributions can be summarized as follows: 1) We propose the first text-queried universal sound separation model that can be trained on unlabeled videos. 2) We propose a new approach called noise invariant training for training a query-based sound separation model on noisy data in the wild. Audio samples can be found on our demo website.1 For reproducibility, all source code, hyperparameters and pretrained models are available at: https://github.com/sony/CLIPSep.
2 RELATED WORK
Universal sound separation Much prior work on sound separation focuses on separating sounds for a specific domain such as speech (Wang & Chen, 2018) or music (Takahashi & Mitsufuji, 2021; Mitsufuji et al., 2021). Recent advances in domain-specific sound separation have led to several attempts to generalize to arbitrary sound classes. Kavalerov et al. (2019) reported successful results on separating arbitrary sounds with a fixed number of sources by adopting the permutation invariant training (PIT) (Yu et al., 2017), which was originally proposed for speech separation. While this approach does not require labeled data for training, a post-selection process is required as we cannot tell what sounds are included in each separated result. Follow-up work (Ochiai et al., 2020; Kong et al., 2020) addressed this issue by conditioning the separation model with a class label to specify the target sound in a supervised setting. However, these approaches still require labeled data for training, and the interface for selecting the target class becomes cumbersome when we need a large number of classes to handle open-domain data. Wisdom et al. (2020) later proposed an unsupervised method called mixture invariant training (MixIT) for learning sound separation on noisy data. MixIT is designed to separate all sources at a time and also requires a post-selection process such as using a pre-trained sound classifier (Scott et al., 2021), which requires labeled data for training, to identify the target sounds. We summarize and compare related work in Table 1.
Query-based sound separation Visual information has been used for selecting the target sound in speech (Ephrat et al., 2019; Afouras et al., 2020), music (Zhao et al., 2018; 2019; Tian et al., 2021) and universal sounds (Owens & Efros, 2018; Gao et al., 2018; Rouditchenko et al., 2019). While many image-queried sound separation approaches require clean video data that contains isolated sources, Tzinis et al. (2021) introduced an unsupervised method called AudioScope for separating on-screen sounds using noisy videos based on the MixIT model. While image queries can serve as a
1https://sony.github.io/CLIPSep/
natural interface for specifying the target sound in certain use cases, images of target sounds become unavailable in low-light conditions and for sounds from out-of-screen objects.
Another line of research uses the audio modality to query acoustically similar sounds. Chen et al. (2022) showed that such an approach can generalize to unseen sounds. Later, Gfeller et al. (2021) cropped two disjoint segments from a single recording and used them as a query-target pair to train a sound separation model, assuming both segments contain the same sound source. However, in many cases, it is impractical to prepare a reference audio sample for the desired sound as the query.
Most recently, text-queried sound separation has been studied as it provides a natural and scalable interface for specifying arbitrary target sounds as compared to systems that use a fixed set of class labels. Liu et al. (2022) employed a pretrained language model to encode the text query, and condition the model to separate the corresponding sounds. Kilgour et al. (2022) proposed a model that accepts audio or text queries in a hybrid manner. These approaches, however, require labeled text-audio paired data for training. Different from prior work, our goal is to learn text-queried sound separation for arbitrary sound without labeled data, specifically using unlabeled noisy videos in the wild.
Contrastive language-image-audio pretraining The CLIP model (Radford et al., 2021) has been used as a pretraining of joint embedding spaces among text, image and audio modalities for downstream tasks such as audio classification (Wu et al., 2022; Guzhov et al., 2022) and sound guided image manipulation (Lee et al., 2022). Pretraining is done either in a supervised manner using labels (Guzhov et al., 2022; Lee et al., 2022) or in a self-supervised manner by training an additional audio encoder to map input audio to the pretrained CLIP embedding space (Wu et al., 2022). In contrast, we explore the zero-shot modality transfer capability of the CLIP model by freezing the pre-trained CLIP model and directly optimizing the rest of the model for the target sound separation task.
3 METHOD
3.1 CLIPSEP—LEARNING TEXT-QUERIED SOUND SEPARATION WITHOUT LABELED DATA
In this section, we propose the CLIPSep model for text-queried sound separation without using labeled data. We base the CLIPSep model on Sound-of-Pixels (SOP) (Zhao et al., 2018) and replace the video analysis network of the SOP model. As illustrated in Figure 2, during training, the model takes as inputs an audio mixture $x = \sum_{i=1}^{n} s_i$, where $s_1, \ldots, s_n$ are the $n$ audio tracks, along with their corresponding images $y_1, \ldots, y_n$ extracted from the videos. We first transform the audio mixture $x$ into a magnitude spectrogram $X$ and pass the spectrogram through an audio U-Net (Ronneberger et al., 2015; Jansson et al., 2017) to produce $k$ ($\geq n$) intermediate masks $\tilde{M}_1, \ldots, \tilde{M}_k$. On the other stream, each image is encoded by the pretrained CLIP model (Radford et al., 2021) into an embedding $e_i \in \mathbb{R}^{512}$. The CLIP embedding $e_i$ will further be projected to a query vector $q_i \in \mathbb{R}^{k}$ by a projection layer, which is expected to extract only audio-relevant information from $e_i$.2 Finally, the query vector $q_i$ will be used to mix the intermediate masks into the final predicted masks $\hat{M}_i = \sum_{j=1}^{k} \sigma\big(w_{ij} q_{ij} \tilde{M}_j + b_i\big)$, where $w_i \in \mathbb{R}^{k}$ is a learnable scale vector, $b_i \in \mathbb{R}$ a learnable bias, and $\sigma(\cdot)$ the sigmoid function. Now, suppose $M_i$ is the ground truth mask for source $s_i$. The training objective of the model is the sum of the weighted binary cross entropy losses for each source:
$$\mathcal{L}_{\mathrm{CLIPSep}} = \sum_{i=1}^{n} \mathrm{WBCE}\big(M_i, \hat{M}_i\big) = \sum_{i=1}^{n} X \odot \Big( -M_i \log \hat{M}_i - (1 - M_i) \log\big(1 - \hat{M}_i\big) \Big) . \quad (1)$$
At test time, thanks to the joint image-text embedding offered by the CLIP model, we feed a text query instead of an image to the query model to obtain the query vector and separate the target sounds accordingly (see Appendix A for an illustration). As suggested by Radford et al. (2021), we prefix the text query into the form of “a photo of [user input query]” to reduce the generalization gap.3
3.2 NOISE INVARIANT TRAINING—HANDLING NOISY DATA IN THE WILD
While the CLIPSep model can separate sounds given image or text queries, it assumes that the sources are clean and contain few query-irrelevant sounds. However, this assumption does not hold for videos in the wild as many of them contain out-of-screen sounds and various background noises. Inspired by the mixture invariant training (MixIT) proposed by Wisdom et al. (2020), we further propose the noise invariant training (NIT) to tackle the challenge of training with noisy data. As illustrated in Figure 3, we introduce $n$ additional permutation invariant heads called noise heads to the CLIPSep model, where the masks predicted by these heads are interchangeable during loss computation. Specifically, we introduce $n$ additional projection layers, each of which takes as input the sum of all query vectors produced by the query heads (i.e., $\sum_{i=1}^{n} q_i$) and produces a vector that is later used to mix the intermediate masks into the predicted noise mask. In principle, the query masks produced by the query vectors are expected to extract query-relevant sounds due to their stronger correlations to their corresponding queries, while the interchangeable noise masks should ‘soak up’ other sounds.
2We extract three frames with 1-sec intervals and compute their mean CLIP embedding as the input to the projection layer to reduce the negative effects when the selected frame does not contain the objects of interest.
3Similar to how we prepare the image queries, we create four queries from the input text query using four query templates (see Appendix B) and take their mean CLIP embedding as the input to the projection layer.
Mathematically, let $\hat{M}^{Q}_{1}, \ldots, \hat{M}^{Q}_{n}$ be the predicted query masks and $\hat{M}^{N}_{1}, \ldots, \hat{M}^{N}_{n}$ be the predicted noise masks. Then, the noise invariant loss is defined as:

$$\mathcal{L}_{\mathrm{NIT}} = \min_{(j_1, \ldots, j_n) \in \Sigma_n} \sum_{i=1}^{n} \mathrm{WBCE}\Big(M_i, \min\big(1, \hat{M}^{Q}_{i} + \hat{M}^{N}_{j_i}\big)\Big) , \quad (2)$$
where $\Sigma_n$ denotes the set of all permutations of $\{1, \ldots, n\}$.4 Take $n = 2$ for example.5 We consider the two possible ways for combining the query heads and the noise heads:

$$\text{(Arrangement 1)} \quad \hat{M}_1 = \min\big(1, \hat{M}^{Q}_{1} + \hat{M}^{N}_{1}\big) , \quad \hat{M}_2 = \min\big(1, \hat{M}^{Q}_{2} + \hat{M}^{N}_{2}\big) , \quad (3)$$

$$\text{(Arrangement 2)} \quad \hat{M}'_1 = \min\big(1, \hat{M}^{Q}_{1} + \hat{M}^{N}_{2}\big) , \quad \hat{M}'_2 = \min\big(1, \hat{M}^{Q}_{2} + \hat{M}^{N}_{1}\big) . \quad (4)$$

Then, the noise invariant loss is defined as the smallest loss achievable:

$$\mathcal{L}^{(2)}_{\mathrm{NIT}} = \min\Big( \mathrm{WBCE}\big(M_1, \hat{M}_1\big) + \mathrm{WBCE}\big(M_2, \hat{M}_2\big) ,\; \mathrm{WBCE}\big(M_1, \hat{M}'_1\big) + \mathrm{WBCE}\big(M_2, \hat{M}'_2\big) \Big) . \quad (5)$$
Once the model is trained, we discard the noise heads and use only the query heads for inference (see Appendix A for an illustration). Unlike the MixIT model (Wisdom et al., 2020), our proposed noise invariant training still allows us to specify the target sound by an input query, and it does not require any post-selection process as we only use the query heads during inference.
In practice, we find that the model tends to assign part of the target sounds to the noise heads as these heads can freely enjoy the optimal permutation to minimize the loss. Hence, we further introduce a regularization term to penalize producing high activations on the noise masks:
$$\mathcal{L}_{\mathrm{REG}} = \max\Big(0,\; \sum_{i=1}^{n} \mathrm{mean}\big(\hat{M}^{N}_{i}\big) - \gamma\Big) , \quad (6)$$
where $\gamma \in [0, n]$ is a hyperparameter that we will refer to as the noise regularization level. The proposed regularization has no effect when the sum of the means of all the noise masks is lower than the predefined threshold $\gamma$, while the penalty grows linearly when the sum is higher than $\gamma$. Finally, the training objective of the CLIPSep-NIT model is a weighted sum of the noise invariant loss and the regularization term: $\mathcal{L}_{\text{CLIPSep-NIT}} = \mathcal{L}_{\mathrm{NIT}} + \lambda \mathcal{L}_{\mathrm{REG}}$, where $\lambda \in \mathbb{R}$ is a weight hyperparameter. We set $\lambda = 0.1$ for all experiments, which we find works well across different settings.
4We note that CLIPSep-NIT considers $2n$ sources in total as the model has $n$ query heads and $n$ noise heads. While PIT (Yu et al., 2017) and MixIT (Wisdom et al., 2020) respectively require $O((2n)!)$ and $O(2^{2n})$ searches to consider $2n$ sources, the proposed NIT only requires $O(n!)$ permutations in the loss computation.
5Since our goal is not to further separate the noise into individual sources but to separate the sounds that correspond to the query, n may not need to be large. In practice, we find that the CLIPSep-NIT model with n = 2 already learns to handle the noise properly and can successfully transfer to the text-queried mode. Thus, we use n = 2 throughout this paper and leave the testing on larger n as future work.
4 EXPERIMENTS
We base our implementations on the code provided by Zhao et al. (2018) (https://github.com/hangzhaomit/Sound-of-Pixels). Implementation details can be found in Appendix C.
4.1 EXPERIMENTS ON CLEAN DATA
We first evaluate the proposed CLIPSep model without the noise invariant training on the musical instrument sound separation task using the MUSIC dataset, as done in Zhao et al. (2018). This experiment is designed to focus on evaluating the quality of the learned query vectors and the zero-shot modality transferability of the CLIPSep model on a small, clean dataset rather than showing its ability to separate arbitrary sounds. The MUSIC dataset is a collection of 536 video recordings of people playing a musical instrument out of 11 instrument classes. Since no existing work has trained a text-queried sound separation model using only unlabeled data to our knowledge, we compare the proposed CLIPSep model with two baselines that serve as upper bounds—the PIT model (Yu et al., 2017, see Appendix D for an illustration) and a version of the CLIPSep model where the query model is replaced by learnable embeddings for the labels, which we will refer to as the LabelSep model. In addition, we also include the SOP model (Zhao et al., 2018) to investigate the quality of the query vectors, as the CLIPSep and SOP models share the same network architecture except for the query model.
We report the results in Table 2. Our proposed CLIPSep model achieves a mean signal-to-distortion ratio (SDR) (Vincent et al., 2006) of 5.49 dB and a median SDR of 4.97 dB using text queries in a zero-shot modality transfer setting. When using image queries, the performance of the CLIPSep model is comparable to that of the SOP model. This indicates that the CLIP embeddings are as informative as those produced by the SOP model. The performance difference between the CLIPSep model using text and image queries at test time indicates the zero-shot modality transfer gap. We observe 1.54 dB and 0.88 dB differences on the mean and median SDRs, respectively. Moreover,
we also report in Table 2 and Figure 4 the performance of the CLIPSep models trained on different modalities to investigate their modality transferability in different settings. We notice that when we train the CLIPSep model using text queries, dubbed as CLIPSep-Text, the mean SDR using text queries increases to 7.91 dB. However, when we test this model using image queries, we observe a 1.66 dB difference on the mean SDR as compared to that using text queries, which is close to
the mean SDR difference we observe for the model trained with image queries. Finally, we train a CLIPSep model using both text and image queries in alternation, dubbed as CLIPSep-Hybrid. We see that it leads to the best test performance for both text and image modalities, and there is only a mean SDR difference of 0.30 dB between using text and image queries. As a reference, the LabelSep model trained with labeled data performs worse than the CLIPSep-Hybrid model using text queries. Further, the PIT model achieves a mean SDR of 8.68 dB and a median SDR of 7.67 dB, but it requires post-processing to figure out the correct assignments.
4.2 EXPERIMENTS ON NOISY DATA
Next, we evaluate the proposed method on a large-scale dataset aiming at universal sound separation. We use the VGGSound dataset (Chen et al., 2020), a large-scale audio-visual dataset containing more than 190,000 10-second videos in the wild out of more than 300 classes. We find that the audio in the VGGSound dataset is often noisy and contains off-screen sounds and background noise. Although we train the models on such noisy data, it is not suitable to use the noisy data as targets for evaluation because it fails to provide reliable results. For example, if the target sound labeled as “dog barking” also contains human speech, separating only the dog barking sound provides a lower SDR value than separating the mixture of dog barking sound and human speech even though the text query is “dog barking”. (Note that we use the labels only for evaluation but not for training.) To avoid this issue, we consider the following two evaluation settings:
• MUSIC+: Samples in the MUSIC dataset are used as clean targets and mixed with a sample in the VGGSound dataset as an interference. The separation quality is evaluated on the clean target from the MUSIC dataset. As we do not use the MUSIC dataset for training, this can be considered as zero-shot transfer to a new data domain containing unseen sounds (Radford et al., 2019; Brown et al., 2020). To avoid the unexpected overlap of the target sound types in the MUSIC and VGGSound datasets caused by the label mismatch, we exclude all the musical instrument playing videos from the VGGSound dataset in this setting.
• VGGSound-Clean+: We manually collect 100 clean samples that contain distinct target sounds from the VGGSound test set, which we will refer to as VGGSound-Clean. We mix an audio sample in VGGSound-Clean with another in the test set of VGGSound. Similarly, we consider the VGGSound audio as an interference sound added to the relatively cleaner VGGSound-Clean audio and evaluate the separation quality on the VGGSound-Clean stem.
Table 3 shows the evaluation results. First, CLIPSep successfully learns text-queried sound separation even with noisy unlabeled data, achieving 5.22 dB and 3.53 dB SDR improvements over the mixture on MUSIC+ and VGGSound-Clean+, respectively. By comparing CLIPSep and CLIPSep-NIT, we observe that NIT improves the mean SDRs in both settings. Moreover, on MUSIC+, CLIPSep-NIT's performance matches that of CLIPSep-Text, which utilizes labels for training, achieving only a 0.46 dB lower mean SDR and even a 0.05 dB higher median SDR. This result suggests that the proposed self-supervised text-queried sound separation method can learn separation capability competitive with the fully supervised model in some target sounds. In contrast, there is still a gap between them on VGGSound-Clean+, possibly because the videos of non-music-instrument objects are more noisy in both audio and visual domains, thus resulting in a more challenging zero-shot modality transfer. This hypothesis is also supported by the higher zero-shot modality transfer gap (mean SDR difference of image- and text-queried mode) of 1.79 dB on VGGSound-Clean+ than that of 1.01 dB on MUSIC+ for CLIPSep-NIT. In addition, we consider another baseline model that replaces the CLIP model in CLIPSep with a BERT encoder (Devlin et al., 2019), which we call BERTSep. Interestingly, although BERTSep performs similarly to CLIPSep-Text on VGGSound-Clean+, the performance of BERTSep is significantly lower than that of CLIPSep-Text on MUSIC+, indicating that BERTSep fails to generalize to unseen text queries. We hypothesize that the CLIP text embedding captures the timbral similarity of musical instruments better than the BERT embedding does, because the CLIP model is aware of the visual similarity between musical instruments during training. Moreover, it is interesting to see that CLIPSep outperforms CLIPSep-NIT when an image query is used at test time (domain-matched condition), possibly because images contain richer context information, such as nearby objects and backgrounds, than labels, and the models can use such information to better separate the target sound. While CLIPSep has to fully utilize such information, CLIPSep-NIT can use the noise heads to model sounds that are less relevant to the image query. Since we remove the noise heads from CLIPSep-NIT during the evaluation, it can rely less on such information from the image, thus improving the zero-shot modality transferability. Figure 5 shows an example of the separation results on MUSIC+ (see Figures 12 to 15 for more examples). We observe that the two noise heads contain mostly background noise. Audio samples can be found on our demo website.1
4.3 EXAMINING THE EFFECTS OF THE NOISE REGULARIZATION LEVEL γ
In this experiment, we examine the effects of the noise regularization level $\gamma$ in Equation (6) by changing the value from 0 to 1. As we can see from Figure 6 (a) and (b), CLIPSep-NIT with $\gamma = 0.25$ achieves the highest SDR on both evaluation settings. This suggests that the optimal $\gamma$ value is not sensitive to the evaluation dataset. Further, we also report in Figure 6 (c) the total mean noise head activation, $\sum_{i=1}^{n} \mathrm{mean}\big(\hat{M}^{N}_{i}\big)$, on the validation set. As $\hat{M}^{N}_{i}$ is the mask estimate for the noise, the total mean noise head activation value indicates to what extent signals are assigned to the noise head. We observe that the proposed regularizer successfully keeps the total mean noise head activation close to the desired level, $\gamma$, for $\gamma \leq 0.5$. Interestingly, the total mean noise head activation is still around 0.5 when $\gamma = 1.0$, suggesting that the model inherently tries to use both the query-heads and the noise heads to predict the noisy target sounds. Moreover, while we discard the noise heads during evaluation in our experiments, keeping the noise heads can lead to a higher SDR as shown in
Figure 6 (a) and (b), which can be helpful in certain use cases where a post-processing procedure similar to the PIT model (Yu et al., 2017) is acceptable.
5 DISCUSSIONS
For the experiments presented in this paper, we work on labeled datasets so that we can evaluate the performance of the proposed models. However, our proposed models do not require any labeled data for training, and can thus be trained on larger unlabeled video collections in the wild. Moreover, we observe that the proposed model shows the capability of combining multiple queries, e.g., “a photo of [query A] and [query B],” to extract multiple target sounds, and we report the results on the demo website. This offers a more natural user interface compared to having to separate each target sound and mix them via an additional post-processing step. We also show in Appendix G that our proposed model is robust to different text queries and can extract the desired sounds.
In our experiments, we often observe a modality transfer gap greater than 1 dB difference of SDR. A future research direction is to explore different approaches to reduce the modality transfer gap. For example, the CLIP model is pretrained on a different dataset, and thus finetuning the CLIP model on the target dataset can help improve the underlying modality transferability within the CLIP model. Further, while the proposed noise invariant training is shown to improve the training on noisy data and reduce the modality transfer gap, it still requires sufficient audio-visual correspondence in the training videos. In other words, if the audio and images are irrelevant in most videos, the model will struggle to learn the correspondence between the query and target sound. In practice, we find that the data in the VGGSound dataset often contains off-screen sounds and the labels sometimes correspond to only part of the video content. Hence, filtering the training data to enhance its audio-visual correspondence can also help reduce the modality transfer gap. This can be achieved by self-supervised audio-visual correspondence prediction (Arandjelović & Zisserman, 2017a;b) or temporal synchronization (Korbar et al., 2018; Owens & Efros, 2018).
Another future direction is to explore the semi-supervised setting where a small subset of labeled data can be used to improve the modality transferability. We can also consider the proposed method as a pretraining on unlabeled data for other separation tasks in the low-resource regime. We include in Appendix H a preliminary experiment in this direction using the ESC-50 dataset (Piczak, 2015).
6 CONCLUSION
In this work, we have presented a novel text-queried universal sound separation model that can be trained on noisy unlabeled videos. To this end, we have proposed to use contrastive image-language pretraining to bridge the audio and text modalities, and proposed noise invariant training for training a query-based sound separation model on noisy data. We have shown that the proposed models can learn to separate an arbitrary sound specified by a text query out of a mixture, even achieving competitive performance against a fully supervised model in some settings. We believe our proposed approach closes the gap between the ways humans and machines learn to focus on a sound in a mixture, namely, the multi-modal self-supervised learning paradigm of humans versus the supervised learning paradigm adopted by existing label-based machine learning approaches.
ACKNOWLEDGEMENTS
We would like to thank Stefan Uhlich, Giorgio Fabbro and Woosung Choi for their helpful comments during the preparation of this manuscript. We also thank Mayank Kumar Singh for supporting the setup of the subjective test in Appendix F. Hao-Wen thanks the J. Yang and Family Foundation and the Taiwan Ministry of Education for supporting his PhD study.
B QUERY ENSEMBLING
Radford et al. (2021) suggest that using a prompt template in the form of “a photo of [user input query]” helps bridge the distribution gap between text queries used for zero-shot image classification and text in the training dataset for the CLIP model. They further show that an ensemble of various prompt templates improves generalizability. Motivated by this observation, we adopt a similar idea and use several query templates at test time (see Table 4). These query templates are heuristically chosen to handle the noisy images extracted from videos.
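As a concrete illustration of this ensembling step, the sketch below averages CLIP text embeddings over several prompt templates using the open-source CLIP package; the template strings shown here are placeholders rather than the exact list in Table 4.

```python
import clip
import torch

# Illustrative templates only; the actual templates used in this work are listed in Table 4.
TEMPLATES = [
    "a photo of {}",
    "a photo of the {}",
    "a low resolution photo of {}",
    "a blurry photo of {}",
]

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def ensembled_text_query(query: str) -> torch.Tensor:
    """Encode a text query with several prompt templates and average the CLIP embeddings."""
    tokens = clip.tokenize([t.format(query) for t in TEMPLATES]).to(device)
    with torch.no_grad():
        embeddings = model.encode_text(tokens)  # (num_templates, 512) for ViT-B/32
    return embeddings.mean(dim=0)               # a single 512-dimensional query embedding
```

The averaged embedding is then used in place of the image embedding as the input to the projection layer.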
C IMPLEMENTATION DETAILS
We implement the audio model as a 7-layer U-Net (Ronneberger et al., 2015). We use k = 32. We use binary masks as the ground truth masks during training while using the raw, real-valued masks for evaluation. We train all the models for 200,000 steps with a batch size of 32. We use the Adam optimizer (Kingma & Ba, 2015) with β1 = 0.9, β2 = 0.999 and ϵ = 10^−8. In addition, we clip the norm of the gradients to 1.0 (Zhang et al., 2020). We adopt the following learning rate schedule with a warm-up: the learning rate starts from 0 and grows to 0.001 after 5,000 steps, and then it linearly drops to 0.0001 at 100,000 steps and keeps this value thereafter. We validate the model every 10,000 steps using image queries as we do not assume labeled data is available for the validation set. We use a sampling rate of 16,000 Hz and work on audio clips of length 65,535 samples (≈ 4 seconds). During training, we randomly sample a center frame from a video and extract three frames (images) with 1-sec intervals and 4-sec audio around the center frame. During inference, for image-queried models, we extract three frames with 1-sec intervals around the center of the test clip. For the spectrogram computation, we use a filter length of 1024, a hop length of 256 and a window size of 1024 in the short-time Fourier transform (STFT). We resize images extracted from videos to a size of 224-by-224 pixels. For the CLIPSep-Hybrid model, we alternately train the model with text and image queries, i.e., one batch with all image queries and the next with all text queries, and so on. We implement all the models using the PyTorch library (Paszke et al., 2019). We compute the signal-to-distortion ratio (SDR) using museval (Stöter et al., 2018).
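The warm-up schedule described above can be expressed as a simple step-to-learning-rate function; the following sketch restates the numbers from the text and is not taken from the released training code.

```python
def learning_rate(step: int) -> float:
    """Warm up to 1e-3 over the first 5k steps, decay linearly to 1e-4 at 100k steps, then hold."""
    if step <= 5_000:
        return 1e-3 * step / 5_000
    if step <= 100_000:
        progress = (step - 5_000) / 95_000
        return 1e-3 + progress * (1e-4 - 1e-3)
    return 1e-4
```

In PyTorch, this can be applied, for example, by setting `param_group["lr"]` of the Adam optimizer to `learning_rate(step)` at every step.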
In our preliminary experiments, we also tried directly predicting the final mask by conditioning the audio model on the query vector. We applied this modification to both the SOP and CLIPSep models; however, we observed that this architecture is prone to overfitting. We hypothesize that this is because the audio model is powerful enough to remember subtle clues in the query vector, which hinders generalization to a new sound and query. In contrast, the proposed architecture first predicts over-determined masks and then combines them on the basis of the query vector, which avoids the overfitting problem thanks to the simple fusion step.
D PERMUTATION INVARIANT TRAINING
Figure 8 illustrates the permutation invariant training (PIT) model (Yu et al., 2017). The permutation invariant loss is defined as follows for n = 2.
$$\mathcal{L}_\mathrm{PIT} = \min\big(\mathrm{WBCE}(M_1, \hat{M}_1) + \mathrm{WBCE}(M_2, \hat{M}_2),\; \mathrm{WBCE}(M_1, \hat{M}_2) + \mathrm{WBCE}(M_2, \hat{M}_1)\big), \quad (7)$$
where $\hat{M}_1$ and $\hat{M}_2$ are the predicted masks. Note that the PIT model requires an additional post-selection step to obtain the target sound.
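A minimal PyTorch sketch of this loss for n = 2 is given below, using the weighted binary cross entropy of Equation (1); the reduction of the weighted cross entropy map to a scalar (here a mean over time-frequency bins) and the variable names are assumptions.

```python
import torch

def wbce(mixture_mag, target_mask, pred_mask):
    """Weighted binary cross entropy of Equation (1), weighted by the mixture magnitude X."""
    eps = 1e-7  # numerical stability for the logarithms
    pred = pred_mask.clamp(eps, 1.0 - eps)
    bce = -target_mask * torch.log(pred) - (1.0 - target_mask) * torch.log(1.0 - pred)
    return (mixture_mag * bce).mean()

def pit_loss(x_mag, m1, m2, m1_hat, m2_hat):
    """Permutation invariant loss for two sources, Equation (7)."""
    loss_a = wbce(x_mag, m1, m1_hat) + wbce(x_mag, m2, m2_hat)
    loss_b = wbce(x_mag, m1, m2_hat) + wbce(x_mag, m2, m1_hat)
    return torch.minimum(loss_a, loss_b)
```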
E QUALITATIVE EXAMPLE RESULTS
We show in Figures 12 to 15 some example results. More results and audio samples can be found at https://sony.github.io/CLIPSep/.
F SUBJECTIVE EVALUATION
We conduct a subjective test to evaluate whether the SDR results align with perceptual quality. As done in Sound of Pixels (Zhao et al., 2018), separated audio samples are randomly presented to evaluators, and the following question is asked: “Which sound do you hear? 1. A, 2. B, 3. Both, or 4. None of them”. Here A and B are replaced by the labels of the mixture sources, e.g., A = accordion, B = engine accelerating. Ten samples (including naturally occurring mixtures) are evaluated for each model, and 16 evaluators participated in the evaluation. Table 5 shows the percentage of samples for which the target sound class is correctly identified (Correct), for which the target sound sources are incorrectly identified (Wrong), for which both sounds are judged audible (Both), and for which neither sound is judged audible (None). The results indicate that the evaluators more often choose the correct sound source for CLIPSep-NIT (83.8%) than for CLIPSep (66.3%) with text queries. Notably, CLIPSep-NIT with text queries obtained a higher correct score than with image queries, even though the latter matches the training mode. This is probably because image queries often contain information about backgrounds and environments; hence, some noise and off-screen sounds are also suggested by the image queries and leak into the query head. In contrast, text queries purely contain information about the target sounds, so the query head extracts the target sounds more aggressively.
G ROBUSTNESS TO DIFFERENT QUERIES
To examine the model’s robustness to different queries, we take the same input mixture and query the model with different text queries. We use the CLIPSep-NIT model on the MUSIC+ dataset and
report in Figure 16 the results. We see that the model is robust to different text queries and can extract the desired sounds. Audio samples can be found at https://sony.github.io/CLIPSep/.
H FINETUNING EXPERIMENTS ON THE ESC-50 DATASET
In this experiment, we aim to examine the possibility of using a clean dataset for further finetuning. We consider the ESC-50 dataset (Piczak, 2015), a collection of 2,000 high-quality environmental audio recordings, as the clean dataset here.6 We report the experimental results in Table 6. We can see that the model pretrained on VGGSound does not generalize well to the ESC-50 dataset, as ESC-50 contains much cleaner sounds, i.e., without query-irrelevant sounds and background noise. Further, if we train the CLIPSep model from scratch on the ESC-50 dataset, it can only achieve a mean SDR of 5.18 dB and a median SDR of 5.09 dB. However, if we take the model pretrained on the VGGSound dataset and finetune it on the ESC-50 dataset, it can achieve a mean SDR of 6.73 dB and a median SDR of 4.89 dB, resulting in an improvement of 1.55 dB on the mean SDR.
I TRAINING BEHAVIORS
We present in Figure 9 the training and validation losses along the training progress. Please note that we only show the results obtained using text queries for reference but do not use them for choosing the best model. We also evaluate the intermediate checkpoints every 10,000 steps and present in Figure 10 the test SDR along the training progress. In addition, for the CLIPSep-NIT model, we visualize in Figure 11 the total mean noise head activation, $\sum_{i=1}^{n} \mathrm{mean}(\hat{M}^N_i)$, along the training progress. We can see that the total mean noise head activation stays around the desired level for γ = 0.1, 0.25. For γ = 0.5 and the unregularized version, the total mean noise head activation converges to a similar value around 0.55.
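The quantity plotted in Figures 6 (c) and 11 can be computed from the predicted noise masks as in the following sketch; the data-loader keys and the model's return signature are assumptions for illustration.

```python
import torch

@torch.no_grad()
def total_mean_noise_activation(model, loader, device="cuda"):
    """Average of sum_i mean(M_i^N) over a validation set (the quantity in Figures 6 (c) and 11)."""
    values = []
    for batch in loader:
        mixture = batch["mixture"].to(device)
        queries = batch["query_embeddings"].to(device)
        # Assumed interface: the model returns (query_masks, noise_masks), each a list of tensors.
        _, noise_masks = model(mixture, queries)
        values.append(sum(mask.mean() for mask in noise_masks).item())
    return sum(values) / len(values)
```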
6https://github.com/karolpiczak/ESC-50

1. What is the main contribution of the paper regarding sound separation using a frozen pre-trained CLIP?
2. What are the strengths and weaknesses of the proposed method, particularly in its explanation and clarity?
3. Do you have any questions or concerns regarding the paper's novelty and reproducibility?

Summary Of The Paper
The paper describes a self-supervised way to do sound separation using a frozen pre-trained CLIP, along with video data (assumed to also have audio).
The core method of CLIPSep is shown in Fig 2. During training, they run the frames of two different videos through CLIP in order to independently get embeddings, which are then projected into another space by a learnable projection mapping. In parallel, they add together the audio streams of both videos, encode this as a spectrogram, and then run that through an Audio UNet. They then independently combine the output of the UNet with each video's projections in order to predict an audio mask. That audio mask is compared against the true audio mask for the video in order to get a loss.
Figure 3 expands on CLIPSep and introduces CLIPSep-NIT in order to better account for noisy streams of audio. It's more complicated, but the gist is to create audio masks that account for the noise found in in-the-wild videos. This is patterned after the MixIT approach from Wisdom et al.
They then show that this self-supervised approach can be comparable to supervised models on two different tasks, which mix test VGGSound and eval MUSIC+ with VGGSound.
Strengths And Weaknesses
Strengths:
The main strength is that the method is novel. I like this idea a lot and think there's something materially interesting if you ramp up the dataset size.
The comparisons are also clear. The tables show the delineations between the models that you compare and I don't have trouble understanding what's going on wrt numbers.
Weaknesses:
The explanation of the model feels like some info is left out, notably from where the images are extracted with respect to the audio. As I understand, there is a singular image per video (2 total to be exact), but it's unclear how the audio is determined around that. It can't be instantaneous. Is it 10 seconds around it? Maybe I'm missing it, but this seems important for reproduction.
There should be audio samples here. It's hard to truly evaluate what's going on without audio samples. I don't see any such links in the paper.
I don't understand at all what section 4.1 is. What is the task? I read through it a few times and it's unclear to me what you're actually doing there.
Clarity, Quality, Novelty And Reproducibility
Clarity
What's up with the Figure 3 graphic? The clarity of this paper would be helped a lot if you made the 2nd half of this better because it's hard to grok what's going on in the text itself. As an example, why is part of it greyed out? If that's supposed to be inference, then it doesn't match w the blue text that describes inference before. Another example: whether the greyed out dotted lines from projection --> predicted noise mask are using the black line is very unclear. Then in the dark blue directional arrows from predicted noise mask to the noise invariant training we have a similar issue. Add some text to make this clear, it's unfortunately harming what is an interesting section.
Please clarify what's going on in 4.1.
Quality
I get that the authors tested all the models on their hybrid approach in 4.2 and it came back w at least the order I'd expect. That was cool. However, it does seem strange that they did this mixing of datasets. Is that what other papers are doing? I'm not as familiar w this field as I'd like to be to question that, but it does seem kind of strange.
Otherwise, the Quality was good imo.
Novelty
This is where the paper shines. I like the idea a lot and think there is merit in pushing this further. It's an interesting way to create an original interface at test time.
Reproducibility:
There should be more details about the image + audio pairings. I see in the Appendix that they use 4 second audio clips, but where is the image drawn from?
Also see the comment above under Clarity about CLIPSep-NIT.
1. What is the focus and contribution of the paper regarding text-queried source separation?
2. What are the strengths of the proposed approach, particularly in its practical applications and results?
3. What are the weaknesses of the paper, especially regarding comparisons with other works and the effectiveness of the proposed mixit layer?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper proposes a system for text-queried source separation. The authors propose to train the system with a picture query at training time; however, at inference time they use text for the query. In addition to the basic system, they also propose to add a mixit layer at the end of the pipeline to increase the noise robustness of the system.
Strengths And Weaknesses
Strengths:
The proposed problem is definitely interesting, and I can see the practical applications of this system.
The results (shared in the link https://dezimynona.github.io/separation/) seem to suggest that the system is doing what is intended.
Weaknesses:
I think it would have been nice to also compare with a baseline system which uses sentence embeddings as a guide. This paper could be a nice point of comparison: https://arxiv.org/pdf/2203.15147.pdf. You could have done this comparison in two ways. 1) In your experiments, you could directly train this model and compare. 2) You could take pretrained systems for both your approach and the baseline and compare them in a zero-shot manner. The VGGSound+None experiment that you have on your demo page is a nice option for this.
There is little difference between the separation quality of Clipsep and Clipsep+NIT. In some of the examples on your demo page the two methods sound very similar.
Clarity, Quality, Novelty And Reproducibility
The paper reads well in general. In terms of novelty, due to the fact that this paper proposes a new training methodology which enables training with audio-video pairs, it seems to differentiate itself from the existing papers. |
ICLR | Title
CLIPSep: Learning Text-queried Sound Separation with Noisy Unlabeled Videos
Abstract
Recent years have seen progress beyond domain-specific sound separation for speech or music towards universal sound separation for arbitrary sounds. Prior work on universal sound separation has investigated separating a target sound out of an audio mixture given a text query. Such text-queried sound separation systems provide a natural and scalable interface for specifying arbitrary target sounds. However, supervised text-queried sound separation systems require costly labeled audio-text pairs for training. Moreover, the audio provided in existing datasets is often recorded in a controlled environment, causing a considerable generalization gap to noisy audio in the wild. In this work, we aim to approach text-queried universal sound separation by using only unlabeled data. We propose to leverage the visual modality as a bridge to learn the desired audio-textual correspondence. The proposed CLIPSep model first encodes the input query into a query vector using the contrastive language-image pretraining (CLIP) model, and the query vector is then used to condition an audio separation model to separate out the target sound. While the model is trained on image-audio pairs extracted from unlabeled videos, at test time we can instead query the model with text inputs in a zero-shot setting, thanks to the joint language-image embedding learned by the CLIP model. Further, videos in the wild often contain off-screen sounds and background noise that may hinder the model from learning the desired audio-textual correspondence. To address this problem, we further propose an approach called noise invariant training for training a query-based sound separation model on noisy data. Experimental results show that the proposed models successfully learn text-queried universal sound separation using only noisy unlabeled videos, even achieving competitive performance against a supervised model in some settings.
1 INTRODUCTION
Humans can focus on to a specific sound in the environment and describe it using language. Such abilities are learned using multiple modalities—auditory for selective listening, vision for learning the concepts of sounding objects, and language for describing the objects or scenes for communication. In machine listening, selective listening is often cast as the problem of sound separation, which aims to separate sound sources from an audio mixture (Cherry, 1953; Bach & Jordan, 2005). While text queries offer a natural interface for humans to specify the target sound to separate from a mixture (Liu et al., 2022; Kilgour et al., 2022), training a text-queried sound separation model in a supervised manner requires labeled audio-text paired data of single-source recordings of a vast number of sound types, which can be costly to acquire. Moreover, such isolated sounds are often recorded in controlled environments and have a considerable domain gap to recordings in the wild, which usually contain arbitrary noise and reverberations. In contrast, humans often leverage the visual modality to assist learning the sounds of various objects (Baillargeon, 2002). For instance, by observing a dog barking, a human can associate the sound with the dog, and can separately learn that the animal is called a “dog.” Further, such learning is possible even if the sound is observed in a noisy environment, e.g.,
∗Work done during an internship at Sony Group Corporation †Corresponding author
when a car is passing by or someone is talking nearby, where humans can still associate the barking sound solely with the dog. Prior work in psychophysics also suggests the intertwined cognition of vision and hearing (Sekuler et al., 1997; Shimojo & Shams, 2001; Rahne et al., 2007).
Motivated by this observation, we aim to tackle text-queried sound separation using only unlabeled videos in the wild. We propose a text-queried sound separation model called CLIPSep that leverages abundant unlabeled video data resources by utilizing the contrastive image-language pretraining (CLIP) (Radford et al., 2021) model to bridge the audio and text modalities. As illustrated in Figure 1, during training, the image feature extracted from a video frame by the CLIP-image encoder is used to condition a sound separation model, and the model is trained to separate the sound that corresponds to the image query in a self-supervised setting. Thanks to the properties of the CLIP model, which projects corresponding text and images to close embeddings, at test time we instead use the text feature obtained by the CLIP-text encoder from a text query in a zero-shot setting.
However, such zero-shot modality transfer can be challenging when we use videos in the wild for training as they often contain off-screen sounds and voice overs that can lead to undesired audiovisual associations. To address this problem, we propose the noise invariant training (NIT), where query-based separation heads and permutation invariant separation heads jointly estimate the noisy target sounds. We validate in our experiments that the proposed noise invariant training reduces
the zero-shot modality transfer gap when the model is trained on a noisy dataset, sometimes achieving competitive results against a fully supervised text-queried sound separation system.
Our contributions can be summarized as follows: 1) We propose the first text-queried universal sound separation model that can be trained on unlabeled videos. 2) We propose a new approach called noise invariant training for training a query-based sound separation model on noisy data in the wild. Audio samples can be found on our demo website (https://sony.github.io/CLIPSep/). For reproducibility, all source code, hyperparameters and pretrained models are available at: https://github.com/sony/CLIPSep.
2 RELATED WORK
Universal sound separation Much prior work on sound separation focuses on separating sounds for a specific domain such as speech (Wang & Chen, 2018) or music (Takahashi & Mitsufuji, 2021; Mitsufuji et al., 2021). Recent advances in domain-specific sound separation have led to several attempts to generalize to arbitrary sound classes. Kavalerov et al. (2019) reported successful results on separating arbitrary sounds with a fixed number of sources by adopting the permutation invariant training (PIT) (Yu et al., 2017), which was originally proposed for speech separation. While this approach does not require labeled data for training, a post-selection process is required as we cannot tell what sounds are included in each separated result. Follow-up work (Ochiai et al., 2020; Kong et al., 2020) addressed this issue by conditioning the separation model with a class label to specify the target sound in a supervised setting. However, these approaches still require labeled data for training, and the interface for selecting the target class becomes cumbersome when we need a large number of classes to handle open-domain data. Wisdom et al. (2020) later proposed an unsupervised method called mixture invariant training (MixIT) for learning sound separation on noisy data. MixIT is designed to separate all sources at once and also requires a post-selection process, such as using a pre-trained sound classifier (Scott et al., 2021), which requires labeled data for training, to identify the target sounds. We summarize and compare related work in Table 1.
Query-based sound separation Visual information has been used for selecting the target sound in speech (Ephrat et al., 2019; Afouras et al., 2020), music (Zhao et al., 2018; 2019; Tian et al., 2021) and universal sounds (Owens & Efros, 2018; Gao et al., 2018; Rouditchenko et al., 2019). While many image-queried sound separation approaches require clean video data that contains isolated sources, Tzinis et al. (2021) introduced an unsupervised method called AudioScope for separating on-screen sounds using noisy videos based on the MixIT model. While image queries can serve as a
natural interface for specifying the target sound in certain use cases, images of target sounds become unavailable in low-light conditions and for sounds from out-of-screen objects.
Another line of research uses the audio modality to query acoustically similar sounds. Chen et al. (2022) showed that such an approach can generalize to unseen sounds. Later, Gfeller et al. (2021) cropped two disjoint segments from a single recording and used them as a query-target pair to train a sound separation model, assuming both segments contain the same sound source. However, in many cases, it is impractical to prepare a reference audio sample for the desired sound as the query.
Most recently, text-queried sound separation has been studied as it provides a natural and scalable interface for specifying arbitrary target sounds as compared to systems that use a fixed set of class labels. Liu et al. (2022) employed a pretrained language model to encode the text query and condition the model to separate the corresponding sounds. Kilgour et al. (2022) proposed a model that accepts audio or text queries in a hybrid manner. These approaches, however, require labeled text-audio paired data for training. Different from prior work, our goal is to learn text-queried sound separation for arbitrary sounds without labeled data, specifically using unlabeled noisy videos in the wild.
Contrastive language-image-audio pretraining The CLIP model (Radford et al., 2021) has been used to pretrain joint embedding spaces across the text, image and audio modalities for downstream tasks such as audio classification (Wu et al., 2022; Guzhov et al., 2022) and sound-guided image manipulation (Lee et al., 2022). Pretraining is done either in a supervised manner using labels (Guzhov et al., 2022; Lee et al., 2022) or in a self-supervised manner by training an additional audio encoder to map input audio to the pretrained CLIP embedding space (Wu et al., 2022). In contrast, we explore the zero-shot modality transfer capability of the CLIP model by freezing the pre-trained CLIP model and directly optimizing the rest of the model for the target sound separation task.
3 METHOD
3.1 CLIPSEP—LEARNING TEXT-QUERIED SOUND SEPARATION WITHOUT LABELED DATA
In this section, we propose the CLIPSep model for text-queried sound separation without using labeled data. We base the CLIPSep model on Sound-of-Pixels (SOP) (Zhao et al., 2018) and replace the video analysis network of the SOP model. As illustrated in Figure 2, during training, the model takes as inputs an audio mixture $x = \sum_{i=1}^{n} s_i$, where $s_1, \ldots, s_n$ are the $n$ audio tracks, along with their corresponding images $y_1, \ldots, y_n$ extracted from the videos. We first transform the audio mixture $x$ into a magnitude spectrogram $X$ and pass the spectrogram through an audio U-Net (Ronneberger et al., 2015; Jansson et al., 2017) to produce $k$ ($\geq n$) intermediate masks $\tilde{M}_1, \ldots, \tilde{M}_k$. On the other stream, each image is encoded by the pretrained CLIP model (Radford et al., 2021) into an embedding $e_i \in \mathbb{R}^{512}$. The CLIP embedding $e_i$ will further be projected to a query vector $q_i \in \mathbb{R}^{k}$ by a projection layer, which is expected to extract only audio-relevant information from $e_i$.2 Finally, the query vector $q_i$ will be used to mix the intermediate masks into the final predicted masks $\hat{M}_i = \sum_{j=1}^{k} \sigma\big( w_{ij} q_{ij} \tilde{M}_j + b_i \big)$, where $w_i \in \mathbb{R}^{k}$ is a learnable scale vector, $b_i \in \mathbb{R}$ a learnable bias, and $\sigma(\cdot)$ the sigmoid function. Now, suppose $M_i$ is the ground truth mask for source $s_i$. The training objective of the model is the sum of the weighted binary cross entropy losses for each source:
$\mathcal{L}_{\text{CLIPSep}} = \sum_{i=1}^{n} \mathrm{WBCE}\big(M_i, \hat{M}_i\big) = \sum_{i=1}^{n} X \odot \Big( -M_i \log \hat{M}_i - (1 - M_i) \log \big(1 - \hat{M}_i\big) \Big) . \quad (1)$
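To make the objective concrete, the weighted binary cross entropy of Equation (1) can be sketched in a few lines of PyTorch. This is a minimal illustration written for this text, not the released implementation; tensor names and shapes are our own assumptions.

import torch

def weighted_bce(mix_spec, gt_masks, pred_masks, eps=1e-7):
    # mix_spec:   (batch, freq, time) magnitude spectrogram X of the mixture
    # gt_masks:   (batch, n_src, freq, time) ground-truth masks M_i
    # pred_masks: (batch, n_src, freq, time) predicted masks in (0, 1)
    pred = pred_masks.clamp(eps, 1 - eps)
    bce = -gt_masks * torch.log(pred) - (1 - gt_masks) * torch.log(1 - pred)
    # Weight every time-frequency bin by the mixture magnitude, sum over sources, average the rest.
    return (mix_spec.unsqueeze(1) * bce).sum(dim=1).mean()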
At test time, thanks to the joint image-text embedding offered by the CLIP model, we feed a text query instead of an image to the query model to obtain the query vector and separate the target sounds accordingly (see Appendix A for an illustration). As suggested by Radford et al. (2021), we convert the text query into the form “a photo of [user input query]” to reduce the generalization gap.3
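The zero-shot modality swap relies only on the fact that CLIP encodes images and text into the same space. A minimal sketch with the open-source CLIP package follows; the ViT-B/32 backbone and the file path are illustrative assumptions, not details taken from the paper.

import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # backbone choice is an assumption

# Training time: the query vector comes from a video frame (hypothetical file path).
frame = Image.open("frame.jpg")
with torch.no_grad():
    image_query = model.encode_image(preprocess(frame).unsqueeze(0).to(device))

# Test time: zero-shot swap to a text query in the same embedding space.
with torch.no_grad():
    text_query = model.encode_text(clip.tokenize(["a photo of a dog barking"]).to(device))

# Either embedding is then passed to the trained projection layer of CLIPSep.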
3.2 NOISE INVARIANT TRAINING—HANDLING NOISY DATA IN THE WILD
While the CLIPSep model can separate sounds given image or text queries, it assumes that the sources are clean and contain few query-irrelevant sounds. However, this assumption does not hold for videos in the wild as many of them contain out-of-screen sounds and various background noises. Inspired by the mixture invariant training (MixIT) proposed by Wisdom et al. (2020), we further propose the noise invariant training (NIT) to tackle the challenge of training with noisy data. As illustrated in Figure 3, we introduce $n$ additional permutation invariant heads called noise heads to the CLIPSep model, where the masks predicted by these heads are interchangeable during loss computation. Specifically, we introduce $n$ additional projection layers, and each of them takes as input the sum of all query vectors produced by the query heads (i.e., $\sum_{i=1}^{n} q_i$) and produces a vector that is later used to mix the intermediate masks into the predicted noise mask. In principle, the query masks produced by the query vectors are expected to extract query-relevant sounds due to their stronger correlations to their corresponding queries, while the interchangeable noise masks should ‘soak up’ other sounds.
2We extract three frames with 1-sec intervals and compute their mean CLIP embedding as the input to the projection layer to reduce the negative effects when the selected frame does not contain the objects of interest.
3Similar to how we prepare the image queries, we create four queries from the input text query using four query templates (see Appendix B) and take their mean CLIP embedding as the input to the projection layer.
Mathematically, let $M^Q_1, \ldots, M^Q_n$ be the predicted query masks and $M^N_1, \ldots, M^N_n$ be the predicted noise masks. Then, the noise invariant loss is defined as:
$\mathcal{L}_{\text{NIT}} = \min_{(j_1, \ldots, j_n) \in \Sigma_n} \sum_{i=1}^{n} \mathrm{WBCE}\Big( M_i, \min\big(1, \hat{M}^Q_i + \hat{M}^N_{j_i}\big) \Big) , \quad (2)$
where $\Sigma_n$ denotes the set of all permutations of $\{1, \ldots, n\}$.4 Take $n = 2$ for example.5 We consider the two possible ways for combining the query heads and the noise heads:
(Arrangement 1) $\hat{M}_1 = \min\big(1, \hat{M}^Q_1 + \hat{M}^N_1\big), \quad \hat{M}_2 = \min\big(1, \hat{M}^Q_2 + \hat{M}^N_2\big), \quad (3)$
(Arrangement 2) $\hat{M}'_1 = \min\big(1, \hat{M}^Q_1 + \hat{M}^N_2\big), \quad \hat{M}'_2 = \min\big(1, \hat{M}^Q_2 + \hat{M}^N_1\big). \quad (4)$
Then, the noise invariant loss is defined as the smallest loss achievable: $\mathcal{L}^{(2)}_{\text{NIT}} = \min\big( \mathrm{WBCE}(M_1, \hat{M}_1) + \mathrm{WBCE}(M_2, \hat{M}_2),\ \mathrm{WBCE}(M_1, \hat{M}'_1) + \mathrm{WBCE}(M_2, \hat{M}'_2) \big) . \quad (5)$
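For the $n = 2$ case used in this paper, the noise invariant loss amounts to trying both pairings of noise heads and keeping the cheaper one. The following PyTorch sketch is our own illustration under assumed tensor shapes; it reuses the weighted binary cross entropy idea from the earlier sketch.

import torch

def nit_loss_n2(mix_spec, gt_masks, query_masks, noise_masks, eps=1e-7):
    # mix_spec: (batch, freq, time); the other tensors: (batch, 2, freq, time)
    def wbce(target, pred):
        pred = pred.clamp(eps, 1 - eps)
        bce = -target * torch.log(pred) - (1 - target) * torch.log(1 - pred)
        return (mix_spec * bce).mean()

    q1, q2 = query_masks[:, 0], query_masks[:, 1]
    n1, n2 = noise_masks[:, 0], noise_masks[:, 1]
    m1, m2 = gt_masks[:, 0], gt_masks[:, 1]
    # Arrangement 1: query head i is paired with noise head i (Eq. 3).
    loss_a = wbce(m1, (q1 + n1).clamp(max=1.0)) + wbce(m2, (q2 + n2).clamp(max=1.0))
    # Arrangement 2: the noise heads are swapped (Eq. 4).
    loss_b = wbce(m1, (q1 + n2).clamp(max=1.0)) + wbce(m2, (q2 + n1).clamp(max=1.0))
    return torch.minimum(loss_a, loss_b)  # Eq. (5)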
Once the model is trained, we discard the noise heads and use only the query heads for inference (see Appendix A for an illustration). Unlike the MixIT model (Wisdom et al., 2020), our proposed noise invariant training still allows us to specify the target sound by an input query, and it does not require any post-selection process as we only use the query heads during inference.
In practice, we find that the model tends to assign part of the target sounds to the noise heads as these heads can freely enjoy the optimal permutation to minimize the loss. Hence, we further introduce a regularization term to penalize producing high activations on the noise masks:
$\mathcal{L}_{\text{REG}} = \max\Big( 0, \sum_{i=1}^{n} \mathrm{mean}\big(\hat{M}^N_i\big) - \gamma \Big) , \quad (6)$
where $\gamma \in [0, n]$ is a hyperparameter that we will refer to as the noise regularization level. The proposed regularization has no effect when the sum of the means of all the noise masks is lower than a predefined threshold $\gamma$, while having a linearly growing penalty when the sum is higher than $\gamma$. Finally, the training objective of the CLIPSep-NIT model is a weighted sum of the noise invariant loss and the regularization term: $\mathcal{L}_{\text{CLIPSep-NIT}} = \mathcal{L}_{\text{NIT}} + \lambda \mathcal{L}_{\text{REG}}$, where $\lambda \in \mathbb{R}$ is a weight hyperparameter. We set $\lambda = 0.1$ for all experiments, which we find works well across different settings.
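Read literally, the regularizer and the total objective can be sketched as follows (again an illustrative sketch under the same assumed shapes, not the released code):

import torch

def noise_regularizer(noise_masks, gamma):
    # noise_masks: (batch, n, freq, time); sum over heads of the mean mask activation
    total = noise_masks.mean(dim=(0, 2, 3)).sum()
    return torch.clamp(total - gamma, min=0.0)  # Eq. (6)

# Total objective with the setting used in the paper (lambda = 0.1):
# loss = nit_loss_n2(mix_spec, gt_masks, query_masks, noise_masks) \
#        + 0.1 * noise_regularizer(noise_masks, gamma=0.25)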
4We note that CLIPSep-NIT considers $2n$ sources in total as the model has $n$ query heads and $n$ noise heads. While PIT (Yu et al., 2017) and MixIT (Wisdom et al., 2020) respectively require $O((2n)!)$ and $O(2^{2n})$ search to consider $2n$ sources, the proposed NIT only requires $O(n!)$ permutations in the loss computation.
5Since our goal is not to further separate the noise into individual sources but to separate the sounds that correspond to the query, n may not need to be large. In practice, we find that the CLIPSep-NIT model with n = 2 already learns to handle the noise properly and can successfully transfer to the text-queried mode. Thus, we use n = 2 throughout this paper and leave the testing on larger n as future work.
4 EXPERIMENTS
We base our implementations on the code provided by Zhao et al. (2018) (https://github.com/hangzhaomit/Sound-of-Pixels). Implementation details can be found in Appendix C.
4.1 EXPERIMENTS ON CLEAN DATA
We first evaluate the proposed CLIPSep model without the noise invariant training on musical instrument sound separation task using the MUSIC dataset, as done in (Zhao et al., 2018). This experiment is designed to focus on evaluating the quality of the learned query vectors and the zeroshot modality transferability of the CLIPSep model on a small, clean dataset rather than showing its ability to separate arbitrary sounds. The MUSIC dataset is a collection of 536 video recordings of people playing a musical instrument out of 11 instrument classes. Since no existing work has trained a text-queried sound separation model using only unlabeled data to our knowledge, we compare the proposed CLIPSep model with two baselines that serve as upper bounds—the PIT model (Yu et al., 2017, see Appendix D for an illustration) and a version of the CLIPSep model where the query model is replaced by learnable embeddings for the labels, which we will refer to as the LabelSep model. In addition, we also include the SOP model (Zhao et al., 2018) to investigate the quality of the query vectors as the CLIPSep and SOP models share the same network architecture except the query model.
We report the results in Table 2. Our proposed CLIPSep model achieves a mean signal-to-distortion ratio (SDR) (Vincent et al., 2006) of 5.49 dB and a median SDR of 4.97 dB using text queries in a zero-shot modality transfer setting. When using image queries, the performance of the CLIPSep model is comparable to that of the SOP model. This indicates that the CLIP embeddings are as informative as those produced by the SOP model. The performance difference between the CLIPSep model using text and image queries at test time indicates the zero-shot modality transfer gap. We observe 1.54 dB and 0.88 dB differences on the mean and median SDRs, respectively. Moreover,
we also report in Table 2 and Figure 4 the performance of the CLIPSep models trained on different modalities to investigate their modality transferability in different settings. We notice that when we train the CLIPSep model using text queries, dubbed as CLIPSep-Text, the mean SDR using text queries increases to 7.91 dB. However, when we test this model using image queries, we observe a 1.66 dB difference on the mean SDR as compared to that using text queries, which is close to
the mean SDR difference we observe for the model trained with image queries. Finally, we train a CLIPSep model using both text and image queries in alternation, dubbed as CLIPSep-Hybrid. We see that it leads to the best test performance for both text and image modalities, and there is only a mean SDR difference of 0.30 dB between using text and image queries. As a reference, the LabelSep model trained with labeled data performs worse than the CLIPSep-Hybrid model using text queries. Further, the PIT model achieves a mean SDR of 8.68 dB and a median SDR of 7.67 dB, but it requires post-processing to figure out the correct assignments.
4.2 EXPERIMENTS ON NOISY DATA
Next, we evaluate the proposed method on a large-scale dataset aiming at universal sound separation. We use the VGGSound dataset (Chen et al., 2020), a large-scale audio-visual dataset containing more than 190,000 10-second videos in the wild out of more than 300 classes. We find that the audio in the VGGSound dataset is often noisy and contains off-screen sounds and background noise. Although we train the models on such noisy data, it is not suitable to use the noisy data as targets for evaluation because it fails to provide reliable results. For example, if the target sound labeled as “dog barking” also contains human speech, separating only the dog barking sound provides a lower SDR value than separating the mixture of dog barking sound and human speech even though the text query is “dog barking”. (Note that we use the labels only for evaluation but not for training.) To avoid this issue, we consider the following two evaluation settings:
• MUSIC+: Samples in the MUSIC dataset are used as clean targets and mixed with a sample in the VGGSound dataset as an interference. The separation quality is evaluated on the clean target from the MUSIC dataset. As we do not use the MUSIC dataset for training, this can be considered as zero-shot transfer to a new data domain containing unseen sounds (Radford et al., 2019; Brown et al., 2020). To avoid the unexpected overlap of the target sound types in the MUSIC and VGGSound datasets caused by the label mismatch, we exclude all the musical instrument playing videos from the VGGSound dataset in this setting.
• VGGSound-Clean+: We manually collect 100 clean samples that contain distinct target sounds from the VGGSound test set, which we will refer to as VGGSound-Clean. We mix an audio sample in VGGSound-Clean with another in the test set of VGGSound. Similarly, we consider the VGGSound audio as an interference sound added to the relatively cleaner VGGSound-Clean audio and evaluate the separation quality on the VGGSound-Clean stem.
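For both settings, the protocol boils down to adding an interference clip to a clean target and then measuring how much the separation improves the SDR of the target stem. The sketch below uses the basic energy-ratio definition of SDR for intuition only; the reported numbers are computed with the museval toolkit, which differs in details such as framewise evaluation.

import numpy as np

def sdr(reference, estimate, eps=1e-9):
    # Signal-to-distortion ratio in dB for a single stem (1-D waveforms of equal length).
    num = np.sum(reference ** 2)
    den = np.sum((reference - estimate) ** 2)
    return 10.0 * np.log10((num + eps) / (den + eps))

# target: clean MUSIC (or VGGSound-Clean) waveform; interference: a VGGSound clip of the same length
# mixture = target + interference
# sdr_improvement = sdr(target, separated) - sdr(target, mixture)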
Table 3 shows the evaluation results. First, CLIPSep successfully learns text-queried sound separation even with noisy unlabeled data, achieving 5.22 dB and 3.53 dB SDR improvements over the mixture on MUSIC+ and VGGSound-Clean+, respectively. By comparing CLIPSep and CLIPSep-NIT, we observe that NIT improves the mean SDRs in both settings. Moreover, on MUSIC+, CLIPSep-NIT’s performance matches that of CLIPSep-Text, which utilizes labels for training, achieving only a 0.46 dB lower mean SDR and even a 0.05 dB higher median SDR. This result suggests that the proposed self-supervised text-queried sound separation method can learn separation capability competitive with the fully supervised model in some target sounds. In contrast, there is still a gap between them on VGGSound-Clean+, possibly because the videos of non-music-instrument objects are noisier in both the audio and visual domains, thus resulting in a more challenging zero-shot modality transfer. This hypothesis is also supported by the higher zero-shot modality transfer gap (the mean SDR difference between the image- and text-queried modes) of 1.79 dB on VGGSound-Clean+ than that of 1.01 dB on MUSIC+ for CLIPSep-NIT. In addition, we consider another baseline model that replaces the CLIP model in CLIPSep with a BERT encoder (Devlin et al., 2019), which we call BERTSep. Interestingly, although BERTSep performs similarly to CLIPSep-Text on VGGSound-Clean+, the performance of BERTSep is significantly lower than that of CLIPSep-Text on MUSIC+, indicating that BERTSep fails to generalize to unseen text queries. We hypothesize that the CLIP text embedding captures the timbral similarity of musical instruments better than the BERT embedding does, because the CLIP model is aware of the visual similarity between musical instruments during training. Moreover, it is interesting to see that CLIPSep outperforms CLIPSep-NIT when an image query is used at test time (domain-matched condition), possibly because images contain richer context information, such as objects nearby and backgrounds, than labels, and the models can use such information to better separate the target sound. While CLIPSep has to fully utilize such information, CLIPSep-NIT can use the noise heads to model sounds that are less relevant to the image query. Since we remove the noise heads from CLIPSep-NIT during the evaluation, it can rely less on such information from the image, thus improving the zero-shot modality transferability. Figure 5 shows an example of the separation results on MUSIC+ (see Figures 12 to 15 for more examples). We observe that the two noise heads contain mostly background noise. Audio samples can be found on our demo website (https://sony.github.io/CLIPSep/).
4.3 EXAMINING THE EFFECTS OF THE NOISE REGULARIZATION LEVEL γ
In this experiment, we examine the effects of the noise regularization level $\gamma$ in Equation (6) by changing the value from 0 to 1. As we can see from Figure 6 (a) and (b), CLIPSep-NIT with $\gamma = 0.25$ achieves the highest SDR on both evaluation settings. This suggests that the optimal $\gamma$ value is not sensitive to the evaluation dataset. Further, we also report in Figure 6 (c) the total mean noise head activation, $\sum_{i=1}^{n} \mathrm{mean}(\hat{M}^N_i)$, on the validation set. As $\hat{M}^N_i$ is the mask estimate for the noise, the total mean noise head activation value indicates to what extent signals are assigned to the noise head. We observe that the proposed regularizer successfully keeps the total mean noise head activation close to the desired level, $\gamma$, for $\gamma \leq 0.5$. Interestingly, the total mean noise head activation is still around 0.5 when $\gamma = 1.0$, suggesting that the model inherently tries to use both the query-heads and the noise heads to predict the noisy target sounds. Moreover, while we discard the noise heads during evaluation in our experiments, keeping the noise heads can lead to a higher SDR as shown in
Figure 6 (a) and (b), which can be helpful in certain use cases where a post-processing procedure similar to the PIT model (Yu et al., 2017) is acceptable.
5 DISCUSSIONS
For the experiments presented in this paper, we work on labeled datasets so that we can evaluate the performance of the proposed models. However, our proposed models do not require any labeled data for training, and can thus be trained on larger unlabeled video collections in the wild. Moreover, we observe that the proposed model shows the capability of combining multiple queries, e.g., “a photo of [query A] and [query B],” to extract multiple target sounds, and we report the results on the demo website. This offers a more natural user interface than separating each target sound and mixing them in an additional post-processing step. We also show in Appendix G that our proposed model is robust to different text queries and can extract the desired sounds.
In our experiments, we often observe a modality transfer gap of more than 1 dB in SDR. A future research direction is to explore different approaches to reduce the modality transfer gap. For example, the CLIP model is pretrained on a different dataset, and thus finetuning the CLIP model on the target dataset can help improve the underlying modality transferability within the CLIP model. Further, while the proposed noise invariant training is shown to improve the training on noisy data and reduce the modality transfer gap, it still requires sufficient audio-visual correspondence in the training videos. In other words, if the audio and images are irrelevant to each other in most videos, the model will struggle to learn the correspondence between the query and the target sound. In practice, we find that the data in the VGGSound dataset often contain off-screen sounds and the labels sometimes correspond to only part of the video content. Hence, filtering the training data to enhance its audio-visual correspondence can also help reduce the modality transfer gap. This can be achieved by self-supervised audio-visual correspondence prediction (Arandjelović & Zisserman, 2017a;b) or temporal synchronization (Korbar et al., 2018; Owens & Efros, 2018).
Another future direction is to explore the semi-supervised setting where a small subset of labeled data can be used to improve the modality transferability. We can also consider the proposed method as a pretraining on unlabeled data for other separation tasks in the low-resource regime. We include in Appendix H a preliminary experiment in this aspect using the ESC-50 dataset (Piczak, 2015).
6 CONCLUSION
In this work, we have presented a novel text-queried universal sound separation model that can be trained on noisy unlabeled videos. To this end, we have proposed to use contrastive language-image pretraining to bridge the audio and text modalities, and proposed the noise invariant training for training a query-based sound separation model on noisy data. We have shown that the proposed models can learn to separate an arbitrary sound specified by a text query out of a mixture, even achieving competitive performance against a fully supervised model in some settings. We believe our proposed approach closes the gap between the ways humans and machines learn to focus on a sound in a mixture, namely, the multi-modal self-supervised learning paradigm of humans against the supervised learning paradigm adopted by existing label-based machine learning approaches.
ACKNOWLEDGEMENTS
We would like to thank Stefan Uhlich, Giorgio Fabbro and Woosung Choi for their helpful comments during the preparation of this manuscript. We also thank Mayank Kumar Singh for supporting the setup of the subjective test in Appendix F. Hao-Wen thanks the J. Yang and Family Foundation and the Taiwan Ministry of Education for supporting his PhD study.
B QUERY ENSEMBLING
Radford et al. (2021) suggest that using a prompt template in the form of “a photo of [user input query]” helps bridge the distribution gap between text queries used for zero-shot image classification and text in the training dataset for the CLIP model. They further show that an ensemble of various prompt templates improves generalizability. Motivated by this observation, we adopt a similar idea and use several query templates at test time (see Table 4). These query templates are heuristically chosen to handle the noisy images extracted from videos.
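In practice, template ensembling is just an average of CLIP text embeddings. The sketch below is our own illustration; the template strings are placeholders, since the actual templates are listed in Table 4.

import clip
import torch

TEMPLATES = [                       # illustrative placeholders, not the templates from Table 4
    "a photo of {}.",
    "a photo of the small {}.",
    "a photo of the large {}.",
    "a low resolution photo of {}.",
]

def ensembled_text_query(model, query, device="cpu"):
    prompts = [t.format(query) for t in TEMPLATES]
    tokens = clip.tokenize(prompts).to(device)
    with torch.no_grad():
        emb = model.encode_text(tokens)            # (num_templates, embedding_dim)
    return emb.mean(dim=0, keepdim=True)           # mean embedding fed to the projection layer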
C IMPLEMENTATION DETAILS
We implement the audio model as a 7-layer U-Net (Ronneberger et al., 2015). We use k = 32. We use binary masks as the ground truth masks during training while using the raw, real-valued masks for evaluation. We train all the models for 200,000 steps with a batch size of 32. We use the Adam optimizer (Kingma & Ba, 2015) with $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 10^{-8}$. In addition, we clip the norm of the gradients to 1.0 (Zhang et al., 2020). We adopt the following learning rate schedule with a warm-up—the learning rate starts from 0 and grows to 0.001 after 5,000 steps, and then it linearly drops to 0.0001 at 100,000 steps and keeps this value thereafter. We validate the model every 10,000 steps using image queries as we do not assume labeled data is available for the validation set. We use a sampling rate of 16,000 Hz and work on audio clips of length 65,535 samples (≈ 4 seconds). During training, we randomly sample a center frame from a video and extract three frames (images) with 1-sec intervals and 4-sec audio around the center frame. During inference, for image-queried models, we extract three frames with 1-sec intervals around the center of the test clip. For the spectrogram computation, we use a filter length of 1024, a hop length of 256 and a window size of 1024 in the short-time Fourier transform (STFT). We resize images extracted from video to a size of 224-by-224 pixels. For the CLIPSep-Hybrid model, we alternately train the model with text and image queries, i.e., one batch with all image queries and the next with all text queries, and so on. We implement all the models using the PyTorch library (Paszke et al., 2019). We compute the signal-to-distortion ratio (SDR) using museval (Stöter et al., 2018).
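The learning rate schedule described above can be written as a small function of the training step; this is our literal reading of the text, not code from the repository.

def learning_rate(step):
    # Warm up linearly from 0 to 1e-3 over the first 5,000 steps,
    # then decay linearly to 1e-4 at step 100,000, and stay constant afterwards.
    if step <= 5_000:
        return 1e-3 * step / 5_000
    if step <= 100_000:
        frac = (step - 5_000) / (100_000 - 5_000)
        return 1e-3 + frac * (1e-4 - 1e-3)
    return 1e-4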
In our preliminary experiments, we also tried directly predicting the final mask by conditioning the audio model on the query vector. We applied this modification for both SOP and CLIPSep models, however, we observe that this architecture is prone to overfitting. We hypothesize that this is because the audio model is powerful enough to remember the subtle clues in the query vector, which hinder the generalization to a new sound and query. In contrast, the proposed architecture first predicts over-determined masks and then combines them on the basis of the query vector, which avoids the overfitting problem due to the simple fusion step.
D PERMUTATION INVARIANT TRAINING
Figure 8 illustrates the permutation invariant training (PIT) model (Yu et al., 2017). The permutation invariant loss is defined as follows for n = 2.
$\mathcal{L}_{\text{PIT}} = \min\big( \mathrm{WBCE}(M_1, \hat{M}_1) + \mathrm{WBCE}(M_2, \hat{M}_2),\ \mathrm{WBCE}(M_1, \hat{M}_2) + \mathrm{WBCE}(M_2, \hat{M}_1) \big) , \quad (7)$
where $\hat{M}_1$ and $\hat{M}_2$ are the predicted masks. Note that the PIT model requires an additional post-selection step to obtain the target sound.
E QUALITATIVE EXAMPLE RESULTS
We show in Figures 12 to 15 some example results. More results and audio samples can be found at https://sony.github.io/CLIPSep/.
F SUBJECTIVE EVALUATION
We conduct a subjective test to evaluate whether the SDR results align with perceptual quality. As done in Sound of Pixels (Zhao et al., 2018), separated audio samples are randomly presented to evaluators, and the following question is asked: “Which sound do you hear? 1. A, 2. B, 3. Both, or 4. None of them”. Here A and B are replaced by the labels of the mixture sources, e.g., A=accordion, B=engine accelerating. Ten samples (including naturally occurring mixtures) are evaluated for each model, and 16 evaluators participated in the evaluation. Table 5 shows the percentages of samples for which the target sound class is correctly identified (Correct), for which the wrong sound source is identified (Wrong), for which both sounds are judged audible (Both), and for which neither sound is judged audible (None). The results indicate that the evaluators more often choose the correct sound source for CLIPSep-NIT (83.8%) than for CLIPSep (66.3%) with text queries. Notably, CLIPSep-NIT with text queries obtained a higher correct score than with image queries, which match the training mode. This is probably because image queries often contain information about backgrounds and environments; hence, some noise and off-screen sounds are also suggested by the image queries and leak into the query head. In contrast, text queries purely contain the information of the target sounds, and thus the query head more aggressively extracts the target sounds.
G ROBUSTNESS TO DIFFERENT QUERIES
To examine the model’s robustness to different queries, we take the same input mixture and query the model with different text queries. We use the CLIPSep-NIT model on the MUSIC+ dataset and
report in Figure 16 the results. We see that the model is robust to different text queries and can extract the desired sounds. Audio samples can be found at https://sony.github.io/CLIPSep/.
H FINETUNING EXPERIMENTS ON THE ESC-50 DATASET
In this experiment, we aim to examine the possibilities of having a clean dataset for further finetuning. We consider the ESC-50 dataset (Piczak, 2015), a collection of 2,000 high-quality environmental audio recordings, as the clean dataset here.6 We report the experimental results in Table 6. We can see that the model pretrained on VGGSound does not generalize well to the ESC-50 dataset as the ESC-50 contains much cleaner sounds, i.e., without query-irrelevant sounds and background noise. Further, if we train the CLIPSep model from scratch on the ESC-50 dataset, it can only achieve a mean SDR of 5.18 dB and a median SDR of 5.09 dB. However, if we take the model pretrained on the VGGSound dataset and finetune it on the ESC-50 dataset, it can achieve a mean SDR of 6.73 dB and a median SDR of 4.89 dB, resulting in an improvement of 1.55 dB on the mean SDR.
I TRAINING BEHAVIORS
We present in Figure 9 the training and validation losses along the training progress. Please note that we only show the results obtained using text queries for reference but do not use them for choosing the best model. We also evaluate the intermediate checkpoints every 10,000 steps and present in Figure 10 the test SDR along the training progress. In addition, for the CLIPSep-NIT model, we visualize in Figure 11 the total mean noise head activation, $\sum_{i=1}^{n} \mathrm{mean}(\hat{M}^N_i)$, along the training progress. We can see that the total mean noise head activation stays around the desired level for $\gamma = 0.1, 0.25$. For $\gamma = 0.5$ and the unregularized version, the total mean noise head activation converges to a similar value around 0.55.
6https://github.com/karolpiczak/ESC-50 | 1. What is the main contribution of the paper regarding source separation using unlabeled videos?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of scalability and performance?
3. Do you have any concerns or suggestions regarding the architecture design choices and their explanations?
4. How does the reviewer assess the novelty, reproducibility, clarity, and quality of the paper's content?
5. Are there any specific questions or areas the reviewer would like the authors to address or clarify in the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
CLIPSep demonstrates how a pretrained CLIP model can be used to train a source separation model using unlabeled videos and achieve competitive results in some settings.
Strengths And Weaknesses
Strengths:
This model shows a path toward training a sound separation model that is text queryable for arbitrary sources and can be trained on unlabeled video data.
Results are competitive with labeled approaches in some settings.
Weaknesses:
The goal of this approach is to be able to scale up training on an arbitrary number of in-the-wild videos. However, the model is trained and evaluated only on relatively small and clean datasets. Even when the data is somewhat noisy (e.g., the offscreen noises in VGGSound), the model starts to exhibit difficulties using only text queries. The authors acknowledge these issues in the Discussion section and provide some ideas for improvement, but I'm concerned that we don't know how well the model will actually scale up to in-the-wild video datasets. It's possible that entirely different techniques will end up being needed to get to that level of unlabeled training.
Update: After discussion with the authors, I realized I misunderstood the scale of VGGSound and how representative it is of "in the wild" audio, so I am much less concerned with how well this technique will scale up.
Motivation for some of the architecture design choices is not fully explained and alternatives are not fully explored (details below).
Update: After discussion with the authors, they have updated the paper to explain some of these choices. I found the discussion around "early fusion" vs. "late fusion" particularly interesting.
Clarity, Quality, Novelty And Reproducibility
Novelty: This is the first work to show training a text-queryable sound separation model trained on unlabeled video data.
Reproducibility: All code and pretrained models will be made available.
Overall clarity is good, but I have a few suggestions:
Section 2.3: My understanding is that the CLIP model is used as is without any training or finetuning. I think the final sentence of this paragraph could be reworded to make it clear that the part of the model you're optimizing doesn't include CLIP.
The paper mentions a few times that the model and code is based on Sound-of-Pixels. I realize that the techniques in this paper are different than the SOP approach, but I think it would be helpful to have those differences called out explicitly because important parts are reused.
For the architecture, I'd like to hear more about the intuition behind having the U-Net output k masks without any conditioning on the separation query. Rather than having the query vectors mix the intermediate masks, why not just condition mask generation on the query?
Why are the noise heads discarded at test time? Is the intuition that you're training the U-Net to use some of its k masks to specialize in noise and then not be utilized by the query vectors? |
ICLR | Title
The shape and simplicity biases of adversarially robust ImageNet-trained CNNs
Abstract
Adversarial training has been the topic of dozens of studies and a leading method for defending against adversarial attacks. Yet, it remains largely unknown (a) how adversarially-robust ImageNet classifiers (R classifiers) generalize to out-ofdistribution examples; and (b) how their generalization capability relates to their hidden representations. In this paper, we perform a thorough, systematic study to answer these two questions across AlexNet, GoogLeNet, and ResNet-50 architectures. We found that while standard ImageNet classifiers have a strong texture bias, their R counterparts rely heavily on shapes. Remarkably, adversarial training induces three simplicity biases into hidden neurons in the process of “robustifying” the network. That is, each convolutional neuron in R networks often changes to detecting (1) pixel-wise smoother patterns i.e. a mechanism that blocks highfrequency noise from passing through the network; (2) more lower-level features i.e. textures and colors (instead of objects); and (3) fewer types of inputs. Our findings reveal the interesting mechanisms that made networks more adversarially robust and also explain some recent findings e.g. why R networks benefit from much larger capacity (Xie & Yuille, 2020) and can act as a strong image prior in image synthesis (Santurkar et al., 2019).
N/A
Adversarial training has been the topic of dozens of studies and a leading method for defending against adversarial attacks. Yet, it remains largely unknown (a) how adversarially-robust ImageNet classifiers (R classifiers) generalize to out-ofdistribution examples; and (b) how their generalization capability relates to their hidden representations. In this paper, we perform a thorough, systematic study to answer these two questions across AlexNet, GoogLeNet, and ResNet-50 architectures. We found that while standard ImageNet classifiers have a strong texture bias, their R counterparts rely heavily on shapes. Remarkably, adversarial training induces three simplicity biases into hidden neurons in the process of “robustifying” the network. That is, each convolutional neuron in R networks often changes to detecting (1) pixel-wise smoother patterns i.e. a mechanism that blocks highfrequency noise from passing through the network; (2) more lower-level features i.e. textures and colors (instead of objects); and (3) fewer types of inputs. Our findings reveal the interesting mechanisms that made networks more adversarially robust and also explain some recent findings e.g. why R networks benefit from much larger capacity (Xie & Yuille, 2020) and can act as a strong image prior in image synthesis (Santurkar et al., 2019).
1 INTRODUCTION
Given excellent test-set performance, deep neural networks often fail to generalize to out-ofdistribution (OOD) examples (Nguyen et al., 2015) including “adversarial examples”, i.e. modified inputs that are imperceptibly different from the real data but change predicted labels entirely (Szegedy et al., 2014). Importantly, adversarial examples can transfer between models and cause unseen, all machine learning (ML) models to misbehave (Papernot et al., 2017), threatening the security and reliability of ML applications (Akhtar & Mian, 2018). Adversarial training—teaching a classifier to correctly label adversarial examples (instead of real data)—has been a leading method in defending against adversarial attacks and the most effective defense in ICLR 2018 (Athalye et al., 2018). Besides improved performance on adversarial examples, test-set accuracy can also be improved, for some architectures, when real images are properly incorporated into adversarial training (Xie et al., 2020). It is therefore important to study how the standard adversarial training (by Madry et al. 2018) changes the hidden representations and generalization capabilities of neural networks.
On smaller datasets, Zhang & Zhu (2019) found that adversarially-robust networks (hereafter, R networks) rely heavily on shapes (instead of textures) to classify images. Intuitively, training on pixel-wise noisy images would encourage R networks to focus less on local statistics (e.g. textures) and instead harness global features (e.g. shapes) more. However, an important, open question is:
Q1: On ImageNet, do R networks still prefer shapes over textures?
It remains unknown whether such shape preference carries over to the large-scale ImageNet (Russakovsky et al., 2015), which often induces a large texture bias into networks (Geirhos et al., 2019) e.g. to separate ∼150 four-legged species in ImageNet. Also, this shape-bias hypothesis suggested by Zhang & Zhu (2019) seems to contradict the recent findings that R networks on ImageNet act as a strong texture prior i.e. they can be successfully used for many image translation tasks without any extra image prior (Santurkar et al., 2019). The above discussion leads to a follow-up question:
Q2: If an R network has a stronger preference for shapes than standard ImageNet networks (hereafter, S networks), will it perform better on OOD distorted images?
Networks trained to be more shape-biased can generalize better to many unseen ImageNet-C (Hendrycks & Dietterich, 2019) image corruptions than S networks, which have a strong texture bias (Brendel & Bethge, 2019). In contrast, there was also evidence that classifiers trained on one type of images often do not generalize well to others (Geirhos et al., 2018; Nguyen et al., 2015; Kang et al., 2019). Importantly, R networks often underperform S networks on original test sets (Tsipras et al., 2019) perhaps due to an inherent trade-off (Madry et al., 2018), a mismatch between real vs. adversarial distributions (Xie et al., 2020), or a limitation in architectures—AdvProp helps improving performance of EfficientNets but not ResNets (Xie et al., 2020).
Most previous work aimed at understanding the behaviors of R classifiers as a function but little is known about the internal characteristics of R networks and, furthermore, their connections to the shape bias and generalization performance. Here, we ask:
Q3: How did adversarial training change the hidden neural representations to make classifiers more shape-biased and adversarially robust?
In this paper, we harness the common benchmarks in ML interpretability and neuroscience—cueconflict (Geirhos et al., 2019), NetDissect (Bau et al., 2017), and ImageNet-C—to answer the three questions above via a systematic study across three different convolutional architectures—AlexNet (Krizhevsky et al., 2012), GoogLeNet (Szegedy et al., 2015), and ResNet-50 (He et al., 2016)— trained to perform image classification on the large-scale ImageNet dataset (Russakovsky et al., 2015). Our main findings include:1
1. R classifiers trained on ImageNet prefer shapes over textures∼67% of the time (Sec. 3.1)— a stark contrast to the S classifiers, which use shapes at only ∼25%.
2. Consistent with the strong shape bias, R classifiers interestingly outperform S counterparts on texture-less, distorted images (stylized and silhouetted images) (Sec. 3.2.2).
3. Adversarial training makes R networks more robust by (1) blocking pixel-wise input noise via smooth filters (Sec. 3.3.1); (2) narrowing the input range that highly activates neurons to simpler patterns, effectively reducing the space of adversarial inputs (Sec. 3.3.2).
4. Units that detect texture patterns (according to NetDissect) are not only useful to texturebased recognition as expected but can be also highly useful to shape-based recognition (Sec. 3.4). By aligning NetDissect and cue-conflict frameworks, we found that hidden neurons in R networks are surprisingly neither strongly shape-biased nor texture-biased, but instead generalists that detect low-level features (Sec. 3.4).
2 NETWORKS AND DATASETS
Networks To understand the effects of adversarial training across a wide range of architectures, we compare each pair of S and R models while keeping their network architectures constant. That is, we conduct all experiments on two groups of classifiers: (a) standard AlexNet, GoogLeNet, & ResNet-50 (hereafter, ResNet) models pre-trained on the 1000-class 2012 ImageNet dataset; and (b) three adversarially-robust counterparts i.e. AlexNet-R, GoogLeNet-R, & ResNet-R which were trained via adversarial training (see below) (Madry et al., 2018).
Training A standard classifier with parameters θ was trained to minimize the cross-entropy loss L over pairs of (training example x, ground-truth label y) drawn from the ImageNet training set D:
arg min θ
E(x,y)∼D [ L(θ, x, y) ] (1)
On the other hand, we trained each R classifier via Madry et al. (2018) adversarial training framework where each real example x is changed by a perturbation ∆:
arg min θ E(x,y)∼D [ max ∆∈P L(θ, x+ ∆, y) ]
(2)
1All code and data will be available on github upon publication.
where P is the perturbation range (Madry et al., 2018), here, within an L2 norm. Hyperparameters The S models were downloaded from PyTorch model zoo (PyTorch, 2019). We trained all R models using the robustness library (Engstrom et al., 2019), using the same hyperparameters in Engstrom et al. (2020); Santurkar et al. (2019); Bansal et al. (2020). That is, adverarial examples were generated using Projected Gradient Descent (PGD) (Madry et al., 2018) with an L2 norm constraint of 3, a step size of 0.5, and 7 PGD-attack steps. R models were trained using an SGD optimizer for 90 epochs with a momentum of 0.9, an initial learning rate of 0.1 (which is reduced 10 times every 30 epochs), a weight decay of 10−4, and a batch size of 256 on 4 Tesla-V100 GPU’s.
Compared to the standard counterparts, R models have substantially higher adversarial accuracy but lower ImageNet validation-set accuracy (Table 1). To compute adversarial accuracy, we perturbed validation-set images with the same PGD attack settings as used in training.
Correctly-labeled image subsets: ImageNet-CL Following Bansal et al. (2020), to compare the behaviors of two networks of identical architectures on the same inputs, we tested them on the largest ImageNet validation subset (hereafter, ImageNet-CL) where both models have 100% accuracy. The sizes of the three subsets for three architectures—AlexNet, GoogLeNet, and ResNet—are respectively: 17,693, 24,581, and 27,343. On modified ImageNet images (e.g. ImageNet-C), we only tested each pair of networks on the modified images whose original versions exist in ImageNet-CL. That is, we wish to gain deeper insights into how networks behave on correctly-classified images, and then how their behaviors change when some input feature (e.g. textures or shapes) is modified.
3 EXPERIMENT AND RESULTS
3.1 DO IMAGENET ADVERSARIALLY ROBUST NETWORKS PREFER SHAPES OR TEXTURES?
It is important to know which type of feature a classifier uses when making decisions. While standard ImageNet networks often carry a strong texture bias (Geirhos et al., 2019), it is unknown whether their adversarially-robust counterparts would be heavily texture- or shape-biased. Here, we test this hypothesis by comparing S and R models on the well-known cue-conflict dataset (Geirhos et al., 2019). That is, we feed “stylized” images provided by Geirhos et al. (2019) that contain contradicting texture and shape cues (e.g. elephant skin on a cat silhouette) and count the times a model uses textures or shapes (i.e. outputting elephant or cat) when it makes a correct prediction.
Experiment Our procedure follows Geirhos et al. (2019). First, we excluded 80 images that do not have conflicting cues (e.g. cat textures on cat shapes) from their 1,280-image dataset. Each texture or shape cue belongs to one of 16 MS COCO (Caesar et al., 2018) coarse labels (e.g. cat or elephant). Second, we ran the networks on these images and converted their 1000-class probability vector outputs into 16-class probability vectors by taking the average over the probabilities of the fine-grained classes that are under the same COCO label. Third, we took only the images that each network correctly labels (i.e. into the texture or shape class), which ranges from 669 to 877 images (out of 1,200) for 6 networks and computed the texture and shape accuracies over 16 classes.
Results On average, over three architectures, R classifiers rely on shapes ≥ 67.08% of the time i.e. ∼2.7× higher than 24.56% of the S models (Table 2). In other words, by replacing the real examples with adversarial examples, adversarial training causes the heavy texture bias of ImageNet classifiers (Geirhos et al., 2019; Brendel & Bethge, 2019) to drop substantially (∼2.7×).
3.2 DO ROBUST NETWORKS GENERALIZE TO UNSEEN TYPES OF DISTORTED IMAGES?
We have found that changing from standard training to adversarial training changes ImageNet classifiers entirely from texture-biased into shape-biased (Sec. 3.1). Furthermore, Geirhos et al. (2019)
found that some training regimes that encourage classifiers to focus more on shape can improve their performance on unseen image distortions. Therefore, it is interesting to test whether R models—a type of shape-biased classifiers— would generalize well to any OOD image types.
ImageNet-C We compare S and R networks on the ImageNet-C dataset which was designed to test model robustness on 15 common types of image corruptions (Fig. 1c), where several shape-biased classifiers were known to outperform S classifiers (Geirhos et al., 2019). Here, we tested each pair of S and R models on the ImageNet-C distorted images whose original versions were correctly labeled by both (i.e. in ImageNet-CL sets; Sec. 2).
Results R models show no generalization boost on ImageNet-C i.e. they performed on-par or worse than the S counterparts (Table 3c). This is consistent with the findings in Table 4 in Geirhos et al. (2019) that a stronger shape bias does not necessarily imply better generalizability.
To further understand the generalization capability of R models, we tested them on two controlled image types where either shape or texture cues are removed from the original, correctly-labeled ImageNet images. Note that when both shape and texture cues are present e.g. in cue-conflict images, R classifiers consistently prefer shape over texture i.e. a shape bias. However, this bias is orthogonal to the performance when only either texture or shape cues are present.
3.2.1 PERFORMANCE ON SHAPE-LESS, TEXTURE-PRESERVING IMAGES
We created shape-less images by dividing each ImageNet-CL image into a grid of p×p even patches where p ∈ {2, 4, 8} and re-combining them randomly into a new “scrambled” version (Fig. 1d). On average, over three grid types, we observed a larger accuracy drop in R models compared to S models, ranging from 1.6× to 2.04× lower accuracy (Table 3d). That is, R model performance drops substantially when object shapes are removed—another evidence for their reliance on shapes. Compare predictions of ResNet vs. ResNet-R for scrambled images in Fig. A6. Remarkably, ResNet accuracy only drops from 100% to 94.77% on the 2× 2 scrambled images (Fig. A1).
3.2.2 PERFORMANCE ON TEXTURE-LESS, SHAPE-PRESERVING IMAGES
Following Geirhos et al. (2019), we tested R models on three types of texture-less images where the texture is increasingly removed: (1) stylized ImageNet images where textures are randomly modified; (2) binary, black-and-white, i.e. B&W, images (Fig. 1f); and (3) silhouette images where the texture information is completely removed (Fig. 1e, g).
Stylized ImageNet To construct a set of stylized ImageNet images (see Fig. 1e), we took all ImageNet-CL images (Sec. 2) and changed their textures via a stylization procedure in Geirhos et al. (2019), which harnesses the style transfer technique (Gatys et al., 2016) to apply a random style to each ImageNet “content” image.
B&W images For all ImageNet-CL images, we used the same process described in Geirhos et al. (2019) to generate silhouettes, but we did not manually select and modify the images. We used the ImageMagick command-line tool (ImageMagick) to binarize ImageNet images into B&W images via the following steps:
1. convert image.jpeg image.bmp 2. potrace - -svg image.bmp -o image.svg 3. rsvg-convert image.svg > image.jpeg
Silhouette For all ImageNet-CL images, we obtained their segmentation maps via a PyTorch DeepLab-v2 model (Chen et al., 2017) pre-trained on MS COCO-Stuff. We used the ImageNet-CL images that belong to a set of 16 COCO coarse classes in Geirhos et al. (2019) (e.g. bird, bicycle, airplane, etc.). When evaluating classifiers, an image is considered correctly labeled if its ImageNet predicted label is a subclass of the correct class among the 16 COCO classes (Fig. 1f; mapping sandpiper→ bird). Results On all three texture-less sets, R models consistently outperformed their S counterparts (Table 3e–g)—a remarkable generalization capability, especially on B&W and silhouette images where all texture information is mostly removed.
3.3 HOW DOES ADVERSARIAL TRAINING MAKE NETWORKS MORE ROBUST?
What internal mechanisms help R networks become more robust? Here, we shed light into this question by analyzing R networks at the weight (Sec. 3.3.1) and neuron (Sec. 3.3.2) levels.
3.3.1 WEIGHT LEVEL: SMOOTH FILTERS TO BLOCK PIXEL-WISE NOISE
Consistent with Yin et al. (2019); Gilmer et al. (2019), we observed that AlexNet-R substantially outperforms AlexNet not only on adversarial examples but also several types of high-frequency image types (e.g. additive noise) in ImageNet-C (Table A1).
Smoother filters To explain this phenomenon, we visualized the weights of all 64 conv1 filters (11×11×3), in both AlexNet and AlexNet-R, as RGB images. We compare each AlexNet conv1 filter with its nearest conv1 filter (via Spearman rank correlation) in AlexNet-R. Remarkably, R filters appear qualitatively much smoother than their counterparts (Fig. 2a). The R filter bank is also less diverse e.g. R edge detectors are often black-and-white in contrast to the colorful AlexNet edges (Fig. 2b). A similar contrast was also seen for the GoogLeNet and ResNet models (Fig. A3).
We also quantify the smoothness, in total variation (TV), of the filters of all 6 models (Table. 4) and found that, on average, the filters in R networks are much smoother. For example, the mean TV of
AlexNet-R is about 2 times smaller than AlexNet. Also, in lower layers, the filters in R classifiers are consistently 2 to 3 times smoother (Fig. A27).
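Total variation of a filter (or activation map) can be computed directly from its definition; a small numpy sketch follows (our own, using the common anisotropic form of TV, since the exact variant is not restated here).

import numpy as np

def total_variation(m):
    # m: (H, W) filter or activation map; sum of absolute differences between neighbouring entries.
    return np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum()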
Blocking pixel-wise noise We hypothesize that the smoothness of filters makes R classifiers more robust against noisy images. To test this hypothesis, we computed the total variation (TV) (Rudin et al., 1992) of the channels across 5 conv layers when feeding ImageNet-CL images and their noisy versions (Fig. 1c; ImageNet-C Level 1 additive noise ∼ N(0, 0.08)) to S and R models. At conv1, the smoothness of R activation maps remains almost unchanged before and after noise addition (Fig. 3a; yellow circles are on the diagonal line). In contrast, the conv1 filters in standard AlexNet allow Gaussian noise to pass through, yielding larger-TV channels (Fig. 3a; blue circles are mostly above the diagonal). That is, the smooth filters in R models indeed can filter out pixel-wise Gaussian noise despite that R models were not explicitly trained on this image type! Interestingly, Ford et al. (2019) finding that the reverse engineering also works: training with Gaussian noise can improve adversarial robustness.
In higher layers, it is intuitive that the pixel-wise noise added to the input image might not necessarily cause activation maps, in both S and R networks, to be noisy because higher-layered units detect more abstract concepts. However, interestingly, we still found that R channels to have consistently less mean TV (Fig. 3b–c). Our result suggests that most of the de-noising effects take place at lower layers (which contain generic features) instead of higher layers.
3.3.2 NEURON LEVEL: ROBUST NEURONS PREFER LOWER-LEVEL AND FEWER INPUTS
Here, via NetDissect framework, we wish to characterize how adversarial training changed the hidden neurons in R networks to make R classifiers more adversarially robust.
Network Dissection (hereafter, NetDissect) is a common framework for quantifying the functions of a neuron by computing the Intersection over Union (IoU) between each activation map (i.e. channels) and the human-annotated segmentation maps for the same input images. That is, each channel is given an IoU score per human-defined concept (e.g. dog or zigzagged) indicating its accuracy in detecting images of that concept. A channel is tested for its accuracy on all ∼1,400 concepts, which span across six coarse categories: object, part, scene, texture, color, and material (Bau et al., 2017) (c.f. Fig. A11 for example NetDissect images in texture and color concepts). Following Bau et al. (2017), we assign each channel C a main functional label i.e. the concept that C has the highest IoU with. In both S and R models, we ran NetDissect on all 1152, 5808, and 3904 channels from,
respectively, 5, 12, and 5 main convolutional layers (post-ReLU) of the AlexNet, GoogLeNet, and ResNet-50 architectures (c.f. Sec. A for more details of layers used).
Shift to detecting more low-level features i.e. colors and textures We found a consistent trend— adversarial training resulted in substantially more filters that detect colors and textures (i.e. in R models) in exchange for fewer object and part detectors. For example, throughout the same GoogLeNet architecture, we observed a 102% and a 34% increase of color and texture detectors, respectively, in the R model, but a 20% and a 26% fewer object and part detectors, compared to the S model (c.f. Fig. 4a). After adversarial training,∼11%, 15%, and 10% of all hidden neurons (in the tested layers) in AlexNet, GoogLeNet, and ResNet, respectively, shift their roles to detecting lowerlevel features (i.e. textures and colors) instead of higher-level features (Fig. A12). Across three architectures, the increases in texture and color channels are often larger in higher layers. While lower-layered units often learn more generic features, higher-layered units are more task-specific (Nguyen et al., 2016a), hence the largest functional shifts in higher layers.
We also compare the shape-biased ResNet-R with ResNet-SIN, i.e. a ResNet-50 trained exclusively on stylized images (Geirhos et al., 2019), which also has a strong shape bias of 81.37% (model A in https://github.com/rgeirhos/texture-vs-shape/). Interestingly, similar to ResNet-R, ResNet-SIN also has more low-level feature detectors (colors and textures) and fewer high-level feature detectors (objects and parts) than the vanilla ResNet (Fig. A28).
Shift to detecting simpler objects Analyzing the concepts in the object category, where we observed the largest changes in channel count, we found evidence that neurons change from detecting complex to simpler objects. That is, for each NetDissect concept, we computed the difference in the number of channels between the S and R model. Within the object category, the AlexNet-R model has substantially fewer channels detecting complex concepts, e.g. −30 dog, −13 cat, and −11 person detectors (Fig. A8b; rightmost columns), compared to the standard network. In contrast, the R model has more channels detecting simpler concepts, e.g. +40 sky and +12 ceiling channels (Fig. A8b; leftmost columns). The top-49 images that most highly activate R units across five conv layers also show a strong preference for simpler backgrounds and objects (Figs. A15–A19).
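A minimal sketch of the per-concept channel-count comparison described above; `labels_s` and `labels_r` are hypothetical lists holding the main NetDissect concept assigned to each channel of the S and R model, respectively.

```python
from collections import Counter

def concept_shift(labels_s, labels_r):
    """Per-concept change in channel count (R minus S)."""
    counts_s, counts_r = Counter(labels_s), Counter(labels_r)
    return {c: counts_r[c] - counts_s[c] for c in counts_s | counts_r}

# e.g. concept_shift(["dog", "dog", "sky"], ["sky", "sky", "dog"]) -> {"dog": -1, "sky": 1}
```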
Shift to detecting fewer unique concepts The previous sections have revealed that neurons in R models often prefer images that are pixel-wise smoother (Sec. 3.3.1) and composed of lower-level features (Sec. 3.3.2), compared to S neurons. Another important property of the complexity of the function computed at each neuron is the diversity of inputs the neuron detects (Nguyen et al., 2016b; 2019). Here, we compare the diversity of NetDissect concepts detected by units in S and R networks. For each channel C, we calculated a diversity score, i.e. the number of unique concepts that C detects with an IoU score ≥ 0.01. Interestingly, on average, an R unit fires for 1.16 times fewer unique concepts than an S unit (22.43 vs. 26.07; c.f. Fig. A10a). Similar trends were observed in ResNet (Fig. A10b). Qualitatively comparing the training-set images that most highly activate the highest-IoU channels in both networks, for the same most-frequent concepts (e.g. striped), often confirms a striking difference: R units prefer a less diverse set of inputs (Fig. A12). As R hidden units fire for fewer concepts, i.e. significantly fewer inputs, the space of adversarial inputs that can cause R models to misbehave is strictly smaller.
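The diversity score can be computed directly from the per-concept IoU dictionary produced by the NetDissect-style sketch above; the helper below is a minimal illustration under that assumption.

```python
def diversity_score(concept_ious, min_iou=0.01):
    """Number of unique concepts a channel detects with IoU >= min_iou."""
    return sum(1 for v in concept_ious.values() if v >= min_iou)

# e.g. diversity_score(label_channel(acts, masks, thresh)[1]) with the sketch above
```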
3.4 WHICH NEURONS ARE IMPORTANT FOR SHAPE- OR TEXTURE-BASED RECOGNITION?
To understand how the changes in R hidden neurons (Sec. 3.3) relate to the shape bias of R classifiers (Sec. 3.1), here, we zero out every channel, one at a time, in S and R networks and measure the performance drop in recognizing shape and texture from cue-conflict images.
Shape & Texture scores For each channel, we computed a Shape score i.e. the number of images originally correctly labeled into the shape class by the network but that, after the ablation, are labeled differently (examples in Fig 5a–b). Similarly, we computed a Texture score per channel. The Shape and Texture scores quantify the importance of a channel in classification using shapes or textures.
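A minimal sketch of the ablation behind the Shape score (the Texture score is computed analogously). The forward-hook-based channel zeroing and the `baseline_shape_correct` list are our illustrative assumptions; the real study additionally maps the 1000 ImageNet classes onto the 16 COCO classes before scoring.

```python
import torch

def zero_channel(layer, idx):
    """Register a forward hook that zeroes channel `idx` of a conv layer's output."""
    def hook(module, inputs, output):
        output[:, idx] = 0
        return output
    return layer.register_forward_hook(hook)

@torch.no_grad()
def shape_score(model, layer, idx, baseline_shape_correct):
    """baseline_shape_correct: (image, shape_label) pairs the intact model labeled
    into the shape class; the returned count is the channel's Shape score."""
    model.eval()
    handle = zero_channel(layer, idx)
    flipped = 0
    for img, shape_label in baseline_shape_correct:
        pred = model(img.unsqueeze(0)).argmax(dim=1).item()
        if pred != shape_label:
            flipped += 1   # no longer labeled into its shape class after ablation
    handle.remove()
    return flipped
```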
First, we found that the channels labeled texture by NetDissect are important not only to texture- but also to shape-based recognition. That is, on average, zeroing out these channels caused non-zero Texture and Shape scores (Fig. 4b; both scores for the Texture group are above 0). See Fig. 5 for an example of a texture channel with high Shape and Texture scores.3 This result sheds light on the fact that R networks consistently have more texture units (Fig. 4a) yet are shape-biased (Sec. 3.1).
3Similar visualizations of some other neurons from both S and R networks are in Appendix Fig. A21–A26.
Second, the texture units in AlexNet are, as expected, highly texture-biased (Fig. 4b, Texture group; the Texture score is almost 2× the Shape score). Surprisingly, however, the texture units in AlexNet-R are neither strongly shape-biased nor texture-biased (Fig. 4b; their Texture and Shape scores are roughly equal). That is, across all three groups (object, color, and texture), R neurons appear mostly to be generalist, low-level feature detectors. This generalist property might be one reason why R networks are more effective in transfer learning than S networks (Salman et al., 2020).
Finally, the contrast above between the texture bias of S and R channels (Fig. 4b) reminds researchers that the single NetDissect label assigned to each neuron does not describe the full picture of what the neuron does and how it helps in downstream tasks. To the best of our knowledge, this is the first work to align the NetDissect and cue-conflict frameworks to study how individual neurons contribute to the generalizability and shape bias of the entire network.
4 DISCUSSION AND RELATED WORK
Deep neural networks tend to prioritize learning simple patterns that are common across the training set (Arpit et al., 2017). Furthermore, deep ReLU networks often prefer learning simple functions (Valle-Perez et al., 2019; De Palma et al., 2019), specifically low-frequency functions (Rahaman et al., 2019), which are more robust to random parameter perturbations. Along this direction, here, we have shown that R networks (1) have smoother weights (Sec. 3.3.1), (2) prefer even simpler and fewer inputs (Sec. 3.3.2) than standard deep networks—i.e. R networks represent even simpler functions. Such simplicity biases are consistent with the fact that gradient images of R networks are much smoother (Tsipras et al., 2019) and that R classifiers act as a strong image prior for image synthesis (Santurkar et al., 2019).
Each R neuron computing a more restricted function than an S neuron (Sec. 3.3.2) implies that R models would require more neurons to mimic a complex S network. This is consistent with recent findings that adversarial training requires a larger model capacity (Xie & Yuille, 2020).
While AdvProp did not yet show benefits on ResNet (Xie et al., 2020), it might be interesting future work to find out whether EfficientNets trained via AdvProp also have shape and simplicity biases. Furthermore, simplicity biases may be incorporated as regularizers into future training algorithms to improve model robustness. For example, encouraging filters to be smoother might improve robustness to high-frequency noise. Also aligned with our findings, Rozsa & Boult (2019) found that explicitly narrowing down the non-zero input regions of ReLUs can improve adversarial robustness.
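As one hypothetical instantiation of the smoothness regularizer suggested above, a total-variation penalty on convolutional filters could be added to the training loss; the sketch below is our illustration, not a method evaluated in this paper, and `lam` is a hypothetical weighting coefficient.

```python
import torch
import torch.nn as nn

def filter_tv_penalty(model):
    """Sum of anisotropic total variation over all spatial conv filters."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.Conv2d) and m.kernel_size[0] > 1:
            w = m.weight                      # (out_channels, in_channels, kH, kW)
            penalty = penalty + (w[..., 1:, :] - w[..., :-1, :]).abs().sum() \
                              + (w[..., :, 1:] - w[..., :, :-1]).abs().sum()
    return penalty

# Hypothetical use inside a training step:
#   loss = criterion(model(x), y) + lam * filter_tv_penalty(model)
```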
We found that R networks rely heavily on shape cues, in contrast to S networks. One may fuse an S network and an R network (two branches, one using texture and one using shape) into a single, more robust, interpretable ML model. Such a model may (1) generalize better on OOD data than either the S or R network alone and (2) offer users an explanation of which features the network uses to label a given image.
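A minimal sketch of the texture/shape fusion idea, assuming the simplest possible scheme of averaging the two networks' softmax outputs; both the class below and the mixing weight `alpha` are our illustrative assumptions rather than a method from the paper.

```python
import torch

class ShapeTextureEnsemble(torch.nn.Module):
    """Two-branch model: a texture-biased S network and a shape-biased R network."""
    def __init__(self, standard_net, robust_net, alpha=0.5):
        super().__init__()
        self.s_net, self.r_net, self.alpha = standard_net, robust_net, alpha

    def forward(self, x):
        p_texture = self.s_net(x).softmax(dim=1)   # texture-reliant branch
        p_shape = self.r_net(x).softmax(dim=1)     # shape-reliant branch
        return self.alpha * p_shape + (1 - self.alpha) * p_texture
```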
Our study of how individual hidden neurons contribute to the shape preference of R networks (Sec. 3.4) revealed that texture-detector units are equally important to texture-based and shape-based recognition. This contrasts with the common hypothesis that texture detectors should be useful only for texture-biased recognition. Our surprising finding suggests that the categories of stimuli in the well-known Network Dissection framework (Bau et al., 2017) need to be re-labeled and extended with low-frequency patterns, e.g. single lines or silhouettes, in order to more accurately quantify hidden representations.
5 CONCLUSION
A CONVOLUTIONAL LAYERS USED IN NETWORK DISSECTION ANALYSIS
For both standard and robust models, we ran NetDissect on 5 convolutional layers in AlexNet (Krizhevsky et al., 2012), 12 in GoogLeNet (Szegedy et al., 2015), and 5 in ResNet-50 architectures (He et al., 2016). For each layer, we use after-ReLU activations (if ReLU exists).
AlexNet layers: conv1, conv2, conv3, conv4, conv5. Refer to these names in Krizhevsky et al. (2012).
GoogLeNet layers: conv1, conv2, conv3, inception3a, inception3b, inception4a, inception4b, inception4c, inception4d, inception4e, inception5a, inception5b
Refer to these names in the PyTorch code: https://github.com/pytorch/vision/blob/master/torchvision/models/googlenet.py#L83-L101.
ResNet-50 layers: conv1, layer1, layer2, layer3, layer4
Refer to these names in the PyTorch code: https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py#L145-L155.
Table A1: Top-1 accuracy of 6 models (in %) on all 15 types of image corruptions in ImageNet-C (Hendrycks & Dietterich, 2019). On average over all 15 distortion types, R models underperform their standard counterparts.
Corruption            AlexNet  AlexNet-R  GoogLeNet  GoogLeNet-R  ResNet  ResNet-R
Noise:   Gaussian       11.36      21.98      33.28        18.71   29.03     24.53
         Shot           10.55      21.35      31.01        17.86   26.97     23.92
         Impulse         7.74      19.68      24.54        15.30   23.55     21.07
Blur:    Defocus        18.01      15.59      28.42        20.72   38.40     26.36
         Glass          17.37      17.91      23.91        29.02   26.78     34.29
         Motion         21.40      21.45      31.14        28.29   38.61     33.15
         Zoom           20.16      21.60      25.57        28.98   35.73     33.83
Weather: Snow           13.32      12.25      32.66        21.36   33.19     25.83
         Frost          17.34      11.00      36.80        20.31   39.08     27.83
         Fog            18.07       1.83      42.80         3.48   46.17      5.65
         Brightness     43.54      27.71      64.46        42.96   68.32     49.71
Digital: Contrast       14.68       3.28      43.66         5.90   38.86      8.78
         Elastic        35.39      32.29      42.79        41.98   46.16     44.94
         Pixelate       28.22      36.33      54.86        48.11   44.49     52.62
         JPEG           39.35      38.65      52.57        50.44   53.80     54.37
Mean accuracy           21.10      20.19      37.90        26.23   39.27     31.13
Top-1 accuracy (%) on scrambled images (data plotted in Figure A1):
Grid   AlexNet  AlexNet-R  GoogLeNet  GoogLeNet-R  ResNet  ResNet-R
1×1     100.00     100.00     100.00       100.00  100.00    100.00
2×2      61.75      30.09      91.18        66.76   94.77     73.35
4×4      36.03      15.98      51.74        22.31   68.31     25.96
8×8       5.99       4.70       6.31         4.37   11.02      4.06
Figure A1: Standard models substantially outperform R models when tested on scrambled images due to their capability of recognizing images based on textures. See Fig. A6 for examples of scrambled images and their top-5 predictions from ResNet-R and ResNet (which achieves a remarkable accuracy of 94.77%). Here, we report top-1 accuracy scores (in %) on the scrambled images whose original versions were correctly labeled by both standard and R classifiers (hence, the 100% for 1× 1 blue bars).
Figure A2: conv1 filters of AlexNet-R are smoother than the filters in standard AlexNet. In each column, we show an AlexNet conv1 filter (top) and its nearest filter (bottom) from AlexNet-R. Above each pair of filters are their Spearman rank correlation score (e.g. r: 0.36) and their total variation (TV) difference (i.e. smoothness difference). Standard AlexNet filters are mostly noisier than their nearest R filter (i.e. positive TV differences).
Figure A3: All 64 conv1 filters in each standard network (left; 11×11×3 for AlexNet, 7×7×3 for GoogLeNet and ResNet) and its R counterpart (right). The filters of R models (right) are smoother and less diverse compared to those in standard models (left). In particular, the edge filters of standard networks are noisier and often contain multiple colors.
(a) Real (b) Scrambled (c) Stylized (d) Contour (e) Silhouette
Figure A4: Applying different transformations that remove shape or texture to real images. We randomly show an example from 7 of the 16 COCO coarse classes. See Table 3 for classification accuracy scores on the different image-distortion datasets over 1000 classes (except for silhouettes, which are evaluated over the 16 COCO coarse classes).
[Figure A5, panels (a) conv1 – (e) conv5: per-channel scatter plots of total variation (TV) on clean images (x-axis) vs. TV on noisy images (y-axis), comparing AlexNet and AlexNet-R at each layer.]
Figure A5: Each point shows the Total Variation (TV) of the activation maps on clean and noisy images for an AlexNet or AlexNet-R channel. We observe a striking difference in conv1: The smoothness of R channels remains unchanged before and after noise addition, explaining their superior performance in classifying noisy images. While the channel smoothness differences (between two networks) are gradually smaller in higher layers, we still observe R channels are consistently smoother.
Figure A6: ResNet-R, on average across the three patch sizes, underperforms the standard ResNet model. Surprisingly, we observe that ResNet correctly classifies images into their ground-truth class even when the image is randomly shuffled into 16 patches, e.g., ResNet classifies the 4 × 4 case of rule, safe with ∼100% confidence. The results are consistent with the strong texture bias of ResNet and the shape bias of ResNet-R (described in Sec. 3.2.1).
Figure A7: For each network, we show the number of channels in each of the 6 NetDissect categories (color, texture, etc) in Bau et al. (2017). Across all three architectures, R models consistently have more color and texture channels while substantially fewer object detectors.
(a) Differences in texture channels between AlexNet and AlexNet-R
(b) Differences in object channels between AlexNet and AlexNet-R
Figure A8: In each bar plot, each column shows the difference in the number of channels (between AlexNet-R and AlexNet) for a given concept, e.g. striped or banded. That is, yellow bars (positive numbers) show how many more channels the R model has than the standard network for that concept; conversely, teal bars mark concepts for which the R model has fewer channels. The NetDissect concept names are given on the x-axis. Top: In the texture category, the R model has many more detectors for simple texture patterns, e.g. striped and banded (see Fig. A11 for example patterns in these concepts). Bottom: In the object category, AlexNet-R often prefers simple-object detectors, e.g. sky or ceiling (Fig. A8b; leftmost), while the standard network has more complex-object detectors, e.g. dog and cat (Fig. A8b; rightmost).
(a) Number of object detectors per AlexNet layer
(b) Number of color detectors per AlexNet layer
Figure A9: In higher layers (here, conv4 and conv5), AlexNet-R has fewer object detectors but more color-detector units compared to standard AlexNet. The differences between the two networks increase as we go from lower to higher layers. Because both networks share an identical architecture, the plots here demonstrate a substantial shift in the functionality of the neurons as a result of adversarial training—detecting more colors and textures and fewer objects. Similar trends were also observed between standard and R models of the GoogLeNet and ResNet-50 architectures.
(a) AlexNet layer-wise mean diversity
(b) ResNet layer-wise mean diversity
Figure A10: In each plot, we show the mean diversity scores across all channels in each layer. Both AlexNet-R and ResNet-R consistently have channels with lower diversity scores (i.e. detecting fewer unique concepts) than the standard counterparts.
Figure A20: AlexNet conv4 channel 19, with Shape and Texture scores of 18 and 22, respectively. It has a NetDissect label of spiralled (IoU: 0.0568) under the texture category. Although this neuron falls in the NetDissect texture category, the misclassified images suggest that it helps in both shape- and texture-based recognition. Top: the 49 training-set images that most highly activate this channel. Middle: misclassified images in the shape category (18 images). Bottom: misclassified images in the texture category (22 images).

1. What are the main findings of the paper regarding adversarially robust CNN architectures?
2. How do the authors demonstrate the reliance on shape rather than texture in image recognition?
3. What are the limitations of the paper's contributions, according to the reviewer?
4. How does the reviewer assess the quality and clarity of the paper's writing?
5. What are the pros and cons of the paper, according to the reviewer?

Review
In this paper, the authors show that adversarially robust versions of three popular CNN architectures trained for image classification on ImageNet rely on shape rather than on textures to perform recognition. They also show that adversarially robust networks do not outperform non-robust networks on corrupted data. Finally, they perform some analysis to determine whether intermediate features are more related to shape or texture, finding that these representations intertwine both types of information.
Quality: The paper is well written and the experiments are very interesting. However, the contributions may be somewhat incremental (see below).
Clarity: The paper is written very clearly.
Significance and originality: The paper is interesting, but the novelty is a bit limited. I am torn about this, because I do think that the experiments are interesting and well explained, and verifying results on different datasets (especially very large datasets like ImageNet) is important. However, given that the shape bias of adversarially robust networks was already known previously (as pointed out by the authors) and that there is no methodological contribution, I think that the contributions are somewhat incremental.
Pros: Well written, interesting experiments.
Cons: Incremental contribution.
After reading the author feedback, I would like to thank the authors and I agree with them that it is critical to test hypotheses on large-scale datasets. However, I still think that the contribution is marginally below the acceptance threshold. |
ICLR | Title
The shape and simplicity biases of adversarially robust ImageNet-trained CNNs
Abstract
Adversarial training has been the topic of dozens of studies and a leading method for defending against adversarial attacks. Yet, it remains largely unknown (a) how adversarially-robust ImageNet classifiers (R classifiers) generalize to out-ofdistribution examples; and (b) how their generalization capability relates to their hidden representations. In this paper, we perform a thorough, systematic study to answer these two questions across AlexNet, GoogLeNet, and ResNet-50 architectures. We found that while standard ImageNet classifiers have a strong texture bias, their R counterparts rely heavily on shapes. Remarkably, adversarial training induces three simplicity biases into hidden neurons in the process of “robustifying” the network. That is, each convolutional neuron in R networks often changes to detecting (1) pixel-wise smoother patterns i.e. a mechanism that blocks highfrequency noise from passing through the network; (2) more lower-level features i.e. textures and colors (instead of objects); and (3) fewer types of inputs. Our findings reveal the interesting mechanisms that made networks more adversarially robust and also explain some recent findings e.g. why R networks benefit from much larger capacity (Xie & Yuille, 2020) and can act as a strong image prior in image synthesis (Santurkar et al., 2019).
N/A
Adversarial training has been the topic of dozens of studies and a leading method for defending against adversarial attacks. Yet, it remains largely unknown (a) how adversarially-robust ImageNet classifiers (R classifiers) generalize to out-ofdistribution examples; and (b) how their generalization capability relates to their hidden representations. In this paper, we perform a thorough, systematic study to answer these two questions across AlexNet, GoogLeNet, and ResNet-50 architectures. We found that while standard ImageNet classifiers have a strong texture bias, their R counterparts rely heavily on shapes. Remarkably, adversarial training induces three simplicity biases into hidden neurons in the process of “robustifying” the network. That is, each convolutional neuron in R networks often changes to detecting (1) pixel-wise smoother patterns i.e. a mechanism that blocks highfrequency noise from passing through the network; (2) more lower-level features i.e. textures and colors (instead of objects); and (3) fewer types of inputs. Our findings reveal the interesting mechanisms that made networks more adversarially robust and also explain some recent findings e.g. why R networks benefit from much larger capacity (Xie & Yuille, 2020) and can act as a strong image prior in image synthesis (Santurkar et al., 2019).
1 INTRODUCTION
Given excellent test-set performance, deep neural networks often fail to generalize to out-ofdistribution (OOD) examples (Nguyen et al., 2015) including “adversarial examples”, i.e. modified inputs that are imperceptibly different from the real data but change predicted labels entirely (Szegedy et al., 2014). Importantly, adversarial examples can transfer between models and cause unseen, all machine learning (ML) models to misbehave (Papernot et al., 2017), threatening the security and reliability of ML applications (Akhtar & Mian, 2018). Adversarial training—teaching a classifier to correctly label adversarial examples (instead of real data)—has been a leading method in defending against adversarial attacks and the most effective defense in ICLR 2018 (Athalye et al., 2018). Besides improved performance on adversarial examples, test-set accuracy can also be improved, for some architectures, when real images are properly incorporated into adversarial training (Xie et al., 2020). It is therefore important to study how the standard adversarial training (by Madry et al. 2018) changes the hidden representations and generalization capabilities of neural networks.
On smaller datasets, Zhang & Zhu (2019) found that adversarially-robust networks (hereafter, R networks) rely heavily on shapes (instead of textures) to classify images. Intuitively, training on pixel-wise noisy images would encourage R networks to focus less on local statistics (e.g. textures) and instead harness global features (e.g. shapes) more. However, an important, open question is:
Q1: On ImageNet, do R networks still prefer shapes over textures?
It remains unknown whether such shape preference carries over to the large-scale ImageNet (Russakovsky et al., 2015), which often induces a large texture bias into networks (Geirhos et al., 2019) e.g. to separate ∼150 four-legged species in ImageNet. Also, this shape-bias hypothesis suggested by Zhang & Zhu (2019) seems to contradict the recent findings that R networks on ImageNet act as a strong texture prior i.e. they can be successfully used for many image translation tasks without any extra image prior (Santurkar et al., 2019). The above discussion leads to a follow-up question:
Q2: If an R network has a stronger preference for shapes than standard ImageNet networks (hereafter, S networks), will it perform better on OOD distorted images?
Networks trained to be more shape-biased can generalize better to many unseen ImageNet-C (Hendrycks & Dietterich, 2019) image corruptions than S networks, which have a strong texture bias (Brendel & Bethge, 2019). In contrast, there was also evidence that classifiers trained on one type of images often do not generalize well to others (Geirhos et al., 2018; Nguyen et al., 2015; Kang et al., 2019). Importantly, R networks often underperform S networks on original test sets (Tsipras et al., 2019) perhaps due to an inherent trade-off (Madry et al., 2018), a mismatch between real vs. adversarial distributions (Xie et al., 2020), or a limitation in architectures—AdvProp helps improving performance of EfficientNets but not ResNets (Xie et al., 2020).
Most previous work aimed at understanding the behaviors of R classifiers as a function but little is known about the internal characteristics of R networks and, furthermore, their connections to the shape bias and generalization performance. Here, we ask:
Q3: How did adversarial training change the hidden neural representations to make classifiers more shape-biased and adversarially robust?
In this paper, we harness the common benchmarks in ML interpretability and neuroscience—cueconflict (Geirhos et al., 2019), NetDissect (Bau et al., 2017), and ImageNet-C—to answer the three questions above via a systematic study across three different convolutional architectures—AlexNet (Krizhevsky et al., 2012), GoogLeNet (Szegedy et al., 2015), and ResNet-50 (He et al., 2016)— trained to perform image classification on the large-scale ImageNet dataset (Russakovsky et al., 2015). Our main findings include:1
1. R classifiers trained on ImageNet prefer shapes over textures∼67% of the time (Sec. 3.1)— a stark contrast to the S classifiers, which use shapes at only ∼25%.
2. Consistent with the strong shape bias, R classifiers interestingly outperform S counterparts on texture-less, distorted images (stylized and silhouetted images) (Sec. 3.2.2).
3. Adversarial training makes R networks more robust by (1) blocking pixel-wise input noise via smooth filters (Sec. 3.3.1); (2) narrowing the input range that highly activates neurons to simpler patterns, effectively reducing the space of adversarial inputs (Sec. 3.3.2).
4. Units that detect texture patterns (according to NetDissect) are not only useful to texturebased recognition as expected but can be also highly useful to shape-based recognition (Sec. 3.4). By aligning NetDissect and cue-conflict frameworks, we found that hidden neurons in R networks are surprisingly neither strongly shape-biased nor texture-biased, but instead generalists that detect low-level features (Sec. 3.4).
2 NETWORKS AND DATASETS
Networks To understand the effects of adversarial training across a wide range of architectures, we compare each pair of S and R models while keeping their network architectures constant. That is, we conduct all experiments on two groups of classifiers: (a) standard AlexNet, GoogLeNet, & ResNet-50 (hereafter, ResNet) models pre-trained on the 1000-class 2012 ImageNet dataset; and (b) three adversarially-robust counterparts i.e. AlexNet-R, GoogLeNet-R, & ResNet-R which were trained via adversarial training (see below) (Madry et al., 2018).
Training A standard classifier with parameters θ was trained to minimize the cross-entropy loss L over pairs of (training example x, ground-truth label y) drawn from the ImageNet training set D:
arg min θ
E(x,y)∼D [ L(θ, x, y) ] (1)
On the other hand, we trained each R classifier via Madry et al. (2018) adversarial training framework where each real example x is changed by a perturbation ∆:
arg min θ E(x,y)∼D [ max ∆∈P L(θ, x+ ∆, y) ]
(2)
1All code and data will be available on github upon publication.
where P is the perturbation range (Madry et al., 2018), here, within an L2 norm. Hyperparameters The S models were downloaded from PyTorch model zoo (PyTorch, 2019). We trained all R models using the robustness library (Engstrom et al., 2019), using the same hyperparameters in Engstrom et al. (2020); Santurkar et al. (2019); Bansal et al. (2020). That is, adverarial examples were generated using Projected Gradient Descent (PGD) (Madry et al., 2018) with an L2 norm constraint of 3, a step size of 0.5, and 7 PGD-attack steps. R models were trained using an SGD optimizer for 90 epochs with a momentum of 0.9, an initial learning rate of 0.1 (which is reduced 10 times every 30 epochs), a weight decay of 10−4, and a batch size of 256 on 4 Tesla-V100 GPU’s.
Compared to the standard counterparts, R models have substantially higher adversarial accuracy but lower ImageNet validation-set accuracy (Table 1). To compute adversarial accuracy, we perturbed validation-set images with the same PGD attack settings as used in training.
Correctly-labeled image subsets: ImageNet-CL Following Bansal et al. (2020), to compare the behaviors of two networks of identical architectures on the same inputs, we tested them on the largest ImageNet validation subset (hereafter, ImageNet-CL) where both models have 100% accuracy. The sizes of the three subsets for three architectures—AlexNet, GoogLeNet, and ResNet—are respectively: 17,693, 24,581, and 27,343. On modified ImageNet images (e.g. ImageNet-C), we only tested each pair of networks on the modified images whose original versions exist in ImageNet-CL. That is, we wish to gain deeper insights into how networks behave on correctly-classified images, and then how their behaviors change when some input feature (e.g. textures or shapes) is modified.
3 EXPERIMENT AND RESULTS
3.1 DO IMAGENET ADVERSARIALLY ROBUST NETWORKS PREFER SHAPES OR TEXTURES?
It is important to know which type of feature a classifier uses when making decisions. While standard ImageNet networks often carry a strong texture bias (Geirhos et al., 2019), it is unknown whether their adversarially-robust counterparts would be heavily texture- or shape-biased. Here, we test this hypothesis by comparing S and R models on the well-known cue-conflict dataset (Geirhos et al., 2019). That is, we feed “stylized” images provided by Geirhos et al. (2019) that contain contradicting texture and shape cues (e.g. elephant skin on a cat silhouette) and count the times a model uses textures or shapes (i.e. outputting elephant or cat) when it makes a correct prediction.
Experiment Our procedure follows Geirhos et al. (2019). First, we excluded 80 images that do not have conflicting cues (e.g. cat textures on cat shapes) from their 1,280-image dataset. Each texture or shape cue belongs to one of 16 MS COCO (Caesar et al., 2018) coarse labels (e.g. cat or elephant). Second, we ran the networks on these images and converted their 1000-class probability vector outputs into 16-class probability vectors by taking the average over the probabilities of the fine-grained classes that are under the same COCO label. Third, we took only the images that each network correctly labels (i.e. into the texture or shape class), which ranges from 669 to 877 images (out of 1,200) for 6 networks and computed the texture and shape accuracies over 16 classes.
Results On average, over three architectures, R classifiers rely on shapes ≥ 67.08% of the time i.e. ∼2.7× higher than 24.56% of the S models (Table 2). In other words, by replacing the real examples with adversarial examples, adversarial training causes the heavy texture bias of ImageNet classifiers (Geirhos et al., 2019; Brendel & Bethge, 2019) to drop substantially (∼2.7×).
3.2 DO ROBUST NETWORKS GENERALIZE TO UNSEEN TYPES OF DISTORTED IMAGES?
We have found that changing from standard training to adversarial training changes ImageNet classifiers entirely from texture-biased into shape-biased (Sec. 3.1). Furthermore, Geirhos et al. (2019)
found that some training regimes that encourage classifiers to focus more on shape can improve their performance on unseen image distortions. Therefore, it is interesting to test whether R models—a type of shape-biased classifiers— would generalize well to any OOD image types.
ImageNet-C We compare S and R networks on the ImageNet-C dataset which was designed to test model robustness on 15 common types of image corruptions (Fig. 1c), where several shape-biased classifiers were known to outperform S classifiers (Geirhos et al., 2019). Here, we tested each pair of S and R models on the ImageNet-C distorted images whose original versions were correctly labeled by both (i.e. in ImageNet-CL sets; Sec. 2).
Results R models show no generalization boost on ImageNet-C i.e. they performed on-par or worse than the S counterparts (Table 3c). This is consistent with the findings in Table 4 in Geirhos et al. (2019) that a stronger shape bias does not necessarily imply better generalizability.
To further understand the generalization capability of R models, we tested them on two controlled image types where either shape or texture cues are removed from the original, correctly-labeled ImageNet images. Note that when both shape and texture cues are present e.g. in cue-conflict images, R classifiers consistently prefer shape over texture i.e. a shape bias. However, this bias is orthogonal to the performance when only either texture or shape cues are present.
3.2.1 PERFORMANCE ON SHAPE-LESS, TEXTURE-PRESERVING IMAGES
We created shape-less images by dividing each ImageNet-CL image into a grid of p×p even patches where p ∈ {2, 4, 8} and re-combining them randomly into a new “scrambled” version (Fig. 1d). On average, over three grid types, we observed a larger accuracy drop in R models compared to S models, ranging from 1.6× to 2.04× lower accuracy (Table 3d). That is, R model performance drops substantially when object shapes are removed—another evidence for their reliance on shapes. Compare predictions of ResNet vs. ResNet-R for scrambled images in Fig. A6. Remarkably, ResNet accuracy only drops from 100% to 94.77% on the 2× 2 scrambled images (Fig. A1).
3.2.2 PERFORMANCE ON TEXTURE-LESS, SHAPE-PRESERVING IMAGES
Following Geirhos et al. (2019), we tested R models on three types of texture-less images where the texture is increasingly removed: (1) stylized ImageNet images where textures are randomly modified; (2) binary, black-and-white, i.e. B&W, images (Fig. 1f); and (3) silhouette images where the texture information is completely removed (Fig. 1e, g).
Stylized ImageNet To construct a set of stylized ImageNet images (see Fig. 1e), we took all ImageNet-CL images (Sec. 2) and changed their textures via a stylization procedure in Geirhos et al. (2019), which harnesses the style transfer technique (Gatys et al., 2016) to apply a random style to each ImageNet “content” image.
B&W images For all ImageNet-CL images, we used the same process described in Geirhos et al. (2019) to generate silhouettes, but we did not manually select and modify the images. We used the ImageMagick command-line tool (ImageMagick) to binarize ImageNet images into B&W images via the following steps:
1. convert image.jpeg image.bmp 2. potrace - -svg image.bmp -o image.svg 3. rsvg-convert image.svg > image.jpeg
Silhouette For all ImageNet-CL images, we obtained their segmentation maps via a PyTorch DeepLab-v2 model (Chen et al., 2017) pre-trained on MS COCO-Stuff. We used the ImageNet-CL images that belong to a set of 16 COCO coarse classes in Geirhos et al. (2019) (e.g. bird, bicycle, airplane, etc.). When evaluating classifiers, an image is considered correctly labeled if its ImageNet predicted label is a subclass of the correct class among the 16 COCO classes (Fig. 1f; mapping sandpiper→ bird). Results On all three texture-less sets, R models consistently outperformed their S counterparts (Table 3e–g)—a remarkable generalization capability, especially on B&W and silhouette images where all texture information is mostly removed.
3.3 HOW DOES ADVERSARIAL TRAINING MAKE NETWORKS MORE ROBUST?
What internal mechanisms help R networks become more robust? Here, we shed light into this question by analyzing R networks at the weight (Sec. 3.3.1) and neuron (Sec. 3.3.2) levels.
3.3.1 WEIGHT LEVEL: SMOOTH FILTERS TO BLOCK PIXEL-WISE NOISE
Consistent with Yin et al. (2019); Gilmer et al. (2019), we observed that AlexNet-R substantially outperforms AlexNet not only on adversarial examples but also several types of high-frequency image types (e.g. additive noise) in ImageNet-C (Table A1).
Smoother filters To explain this phenomenon, we visualized the weights of all 64 conv1 filters (11×11×3), in both AlexNet and AlexNet-R, as RGB images. We compare each AlexNet conv1 filter with its nearest conv1 filter (via Spearman rank correlation) in AlexNet-R. Remarkably, R filters appear qualitatively much smoother than their counterparts (Fig. 2a). The R filter bank is also less diverse e.g. R edge detectors are often black-and-white in contrast to the colorful AlexNet edges (Fig. 2b). A similar contrast was also seen for the GoogLeNet and ResNet models (Fig. A3).
We also quantify the smoothness, in total variation (TV), of the filters of all 6 models (Table. 4) and found that, on average, the filters in R networks are much smoother. For example, the mean TV of
AlexNet-R is about 2 times smaller than AlexNet. Also, in lower layers, the filters in R classifiers are consistently 2 to 3 times smoother (Fig. A27).
Blocking pixel-wise noise We hypothesize that the smoothness of filters makes R classifiers more robust against noisy images. To test this hypothesis, we computed the total variation (TV) (Rudin et al., 1992) of the channels across 5 conv layers when feeding ImageNet-CL images and their noisy versions (Fig. 1c; ImageNet-C Level 1 additive noise ∼ N(0, 0.08)) to S and R models. At conv1, the smoothness of R activation maps remains almost unchanged before and after noise addition (Fig. 3a; yellow circles are on the diagonal line). In contrast, the conv1 filters in standard AlexNet allow Gaussian noise to pass through, yielding larger-TV channels (Fig. 3a; blue circles are mostly above the diagonal). That is, the smooth filters in R models indeed can filter out pixel-wise Gaussian noise despite that R models were not explicitly trained on this image type! Interestingly, Ford et al. (2019) finding that the reverse engineering also works: training with Gaussian noise can improve adversarial robustness.
In higher layers, it is intuitive that the pixel-wise noise added to the input image might not necessarily cause activation maps, in both S and R networks, to be noisy because higher-layered units detect more abstract concepts. However, interestingly, we still found that R channels to have consistently less mean TV (Fig. 3b–c). Our result suggests that most of the de-noising effects take place at lower layers (which contain generic features) instead of higher layers.
3.3.2 NEURON LEVEL: ROBUST NEURONS PREFER LOWER-LEVEL AND FEWER INPUTS
Here, via NetDissect framework, we wish to characterize how adversarial training changed the hidden neurons in R networks to make R classifiers more adversarially robust.
Network Dissection (hereafter, NetDissect) is a common framework for quantifying the functions of a neuron by computing the Intersection over Union (IoU) between each activation map (i.e. channels) and the human-annotated segmentation maps for the same input images. That is, each channel is given an IoU score per human-defined concept (e.g. dog or zigzagged) indicating its accuracy in detecting images of that concept. A channel is tested for its accuracy on all ∼1,400 concepts, which span across six coarse categories: object, part, scene, texture, color, and material (Bau et al., 2017) (c.f. Fig. A11 for example NetDissect images in texture and color concepts). Following Bau et al. (2017), we assign each channel C a main functional label i.e. the concept that C has the highest IoU with. In both S and R models, we ran NetDissect on all 1152, 5808, and 3904 channels from,
respectively, 5, 12, and 5 main convolutional layers (post-ReLU) of the AlexNet, GoogLeNet, and ResNet-50 architectures (c.f. Sec. A for more details of layers used).
Shift to detecting more low-level features i.e. colors and textures We found a consistent trend— adversarial training resulted in substantially more filters that detect colors and textures (i.e. in R models) in exchange for fewer object and part detectors. For example, throughout the same GoogLeNet architecture, we observed a 102% and a 34% increase of color and texture detectors, respectively, in the R model, but a 20% and a 26% fewer object and part detectors, compared to the S model (c.f. Fig. 4a). After adversarial training,∼11%, 15%, and 10% of all hidden neurons (in the tested layers) in AlexNet, GoogLeNet, and ResNet, respectively, shift their roles to detecting lowerlevel features (i.e. textures and colors) instead of higher-level features (Fig. A12). Across three architectures, the increases in texture and color channels are often larger in higher layers. While lower-layered units often learn more generic features, higher-layered units are more task-specific (Nguyen et al., 2016a), hence the largest functional shifts in higher layers.
We also compare the shape-biased ResNet-R with ResNet-SIN i.e. a ResNet-50 trained exclusively on stylized images (Geirhos et al., 2019), which also has a strong shape bias of 81.37%. 2 Interestingly, similar to ResNet-R, ResNet-SIN also have more low-level feature detectors (colors and textures) and fewer high-level feature detectors (objects and parts) than the vanilla ResNet (Fig. A28).
Shift to detecting simpler objects Analyzing the concepts in the object category where we observed largest changes in channel count, we found evidence that neurons change from detecting
2model A in https://github.com/rgeirhos/texture-vs-shape/
complex to simpler objects. That is, for each NetDissect concept, we computed the difference in the numbers of channels between the S and R model. In the same object category, AlexNet-R model has substantially fewer channels detecting complex concepts e.g. −30 dog, −13 cat, and −11 person detectors (Fig. A8b; rightmost columns), compared to the standard network. In contrast, the R model has more channels detecting simpler concepts, e.g. +40 sky and +12 ceiling channels (Fig. A8b; leftmost columns). The top-49 images that highest-activated R units across five conv layers also show their strong preference for simpler backgrounds and objects (Figs. A15–A19).
Shift to detecting fewer unique concepts The previous sections have revealed that neurons in R models often prefer images that are pixel-wise smoother (Sec. 3.3.1) and of lower-level features (Sec. 3.3.2), compared to S neurons. Another important property of the complexity of the function computed at each neuron is the diversity of types of inputs detected by the neuron (Nguyen et al., 2016b; 2019). Here, we compare the diversity score of NetDissect concepts detected by units in S and R networks. For each channel C, we calculated a diversity score i.e. the number of unique concepts that C detects with an IoU score ≥ 0.01. Interestingly, on average, an R unit fires for 1.16 times fewer unique concepts than an S unit (22.43 vs. 26.07; c.f. Fig. A10a). Similar trends were observed in ResNet (Fig. A10b). Qualitatively comparing the highest-activation training-set images by the highest-IoU channels in both networks, for the same most-frequent concepts (e.g. striped), often confirms a striking difference: R units prefer a less diverse set of inputs (Fig. A12). As R hidden units fire for fewer concepts, i.e. significantly fewer inputs, the space for adversarial inputs to cause R models to misbehave is strictly smaller.
3.4 WHICH NEURONS ARE IMPORTANT FOR SHAPE- OR TEXTURE-BASED RECOGNITION?
To understand how the changes in R hidden neurons (Sec. 3.3) relate to the shape bias of R classifiers (Sec. 3.1), here, we zero out every channel, one at a time, in S and R networks and measure the performance drop in recognizing shape and texture from cue-conflict images.
Shape & Texture scores For each channel, we computed a Shape score i.e. the number of images originally correctly labeled into the shape class by the network but that, after the ablation, are labeled differently (examples in Fig 5a–b). Similarly, we computed a Texture score per channel. The Shape and Texture scores quantify the importance of a channel in classification using shapes or textures.
First, we found that the channels labeled texture by NetDissect are not only important to texturebut also shape-based recognition. That is, on average, zero-ing out these channels caused non-zero Texture and Shape scores (Fig. 4b; Texture and are above 0). See Fig. 5 for an example of texture channels with high Shape and Texture scores.3 This result sheds light into the fact that R networks consistently have more texture units (Fig. 4a) but are shape-biased (Sec. 3.1).
3Similar visualizations of some other neurons from both S and R networks are in Appendix Fig. A21–A26.
Second, the texture units are, as expected, highly texture-biased in AlexNet (Fig. 4b Texture; is almost 2× of ). However, surprisingly, those texture units in AlexNet-R are neither strongly shape-biased nor texture-biased (Fig. 4b; Texture ≈ ). That is, across all three groups of the object, color, and texture, R neurons appear mostly to be generalist, low-level feature detectors. This generalist property might be a reason for why R networks are more effective in transfer learning than S networks (Salman et al., 2020).
Finally, the contrast above between the texture bias of S and R channels (Fig. 4b) reminds researchers that the single NetDissect label assigned to each neuron is not describing a full picture of what the neuron does and how it helps in downstream tasks. To the best of our knowledge, this is the first work to align the NetDissect and cue-conflict frameworks to study how individual neurons contribute to the generalizability and shape bias of the entire network.
4 DISCUSSION AND RELATED WORK
Deep neural networks tend to prioritize learning simple patterns that are common across the training set (Arpit et al., 2017). Furthermore, deep ReLU networks often prefer learning simple functions (Valle-Perez et al., 2019; De Palma et al., 2019), specifically low-frequency functions (Rahaman et al., 2019), which are more robust to random parameter perturbations. Along this direction, here, we have shown that R networks (1) have smoother weights (Sec. 3.3.1), (2) prefer even simpler and fewer inputs (Sec. 3.3.2) than standard deep networks—i.e. R networks represent even simpler functions. Such simplicity biases are consistent with the fact that gradient images of R networks are much smoother (Tsipras et al., 2019) and that R classifiers act as a strong image prior for image synthesis (Santurkar et al., 2019).
Each R neuron computing a more restricted function than an S neuron (Sec. 3.3.2) implies that R models would require more neurons to mimic a complex S network. This is consistent with recent findings that adversarial training requires a larger model capacity (Xie & Yuille, 2020).
While AdvProp did not yet show benefits on ResNet (Xie et al., 2020), it might be interesting future work to find out whether EfficientNets trained via AdvProp also have shape and simplicity biases. Furthermore, simplicity biases may be incorporated as regularizers into future training algorithms to improve model robustness. For example, encouraging filters to be smoother might improve robustness to high-frequency noise. Also aligned with our findings, Rozsa & Boult (2019) found that explicitly narrowing down the non-zero input regions of ReLUs can improve adversarial robustness.
We found that R networks heavily rely on shape cues in contrast to S networks. One may fuse an S network and a R network (two channels, one uses texture and one uses shape) into a single, more robust, interpretable ML model. That is, such model may (1) have better generalization on OOD data than S or R network alone and (2) enable an explanation to users on what features a network uses to label a given image.
Our study on how individual hidden neurons contribute to the R network shape preference (Sec. 3.4) revealed that texture-detector units are equally important to the texture-based and shape-based recognition. This is in contrast to a common hypothesis that texture detectors should be exclusively only useful to texture-biased recognition. Our surprising finding suggests that the categories of stimuli in the well-known Network Dissection (Bau et al., 2017) need to be re-labeled and also extended with low-frequency patterns e.g. single lines or silhouettes in order to more accurately quantify hidden representations.
5 CONCLUSION
A CONVOLUTIONAL LAYERS USED IN NETWORK DISSECTION ANALYSIS
For both standard and robust models, we ran NetDissect on 5 convolutional layers in AlexNet (Krizhevsky et al., 2012), 12 in GoogLeNet (Szegedy et al., 2015), and 5 in ResNet-50 architectures (He et al., 2016). For each layer, we use after-ReLU activations (if ReLU exists).
AlexNet layers: conv1, conv2, conv3, conv4, conv5. Refer to these names in Krizhevsky et al. (2012).
GoogLeNet layers: conv1, conv2, conv3, inception3a, inception3b, inception4a, inception4b, inception4c, inception4d, inception4e, inception5a, inception5b
Refer to these names in PyTorch code https://github.com/pytorch/vision/blob/ master/torchvision/models/googlenet.py#L83-L101.
ResNet-50 layers: conv1, layer1, layer2, layer3, layer4
Refer to these names in PyTorch code https://github.com/pytorch/vision/blob/ master/torchvision/models/resnet.py#L145-L155).
Table A1: Top-1 accuracy of 6 models (in %) on all 15 types of image corruptions in ImageNet-C (Hendrycks & Dietterich, 2019). On average over all 15 distortion types, R models underperform their standard counterparts.
AlexNet AlexNet-R GoogLeNet GoogLeNet-R ResNet ResNet-R
Noise Gaussian 11.36 21.98 33.28 18.71 29.03 24.53
Shot 10.55 21.35 31.01 17.86 26.97 23.92 Impulse 7.74 19.68 24.54 15.30 23.55 21.07
Blur
Defocus 18.01 15.59 28.42 20.72 38.40 26.36 Glass 17.37 17.91 23.91 29.02 26.78 34.29
Motion 21.40 21.45 31.14 28.29 38.61 33.15 Zoom 20.16 21.60 25.57 28.98 35.73 33.83
Weather Snow 13.32 12.25 32.66 21.36 33.19 25.83 Frost 17.34 11.00 36.80 20.31 39.08 27.83 Fog 18.07 1.83 42.80 3.48 46.17 5.65
Brightness 43.54 27.71 64.46 42.96 68.32 49.71
Digital Contrast 14.68 3.28 43.66 5.90 38.86 8.78 Elastic 35.39 32.29 42.79 41.98 46.16 44.94 Pixelate 28.22 36.33 54.86 48.11 44.49 52.62 JPEG 39.35 38.65 52.57 50.44 53.80 54.37
mean Accuracy 21.10 20.19 37.90 26.23 39.27 31.13
100 100 100 100 100 100
61.75
30.09
91.18
66.76
94.77
73.35
36.03
15.98
51.74
22.31
68.31
25.96
5.99 4.70 6.31 4.37 11.02
4.06
A cc
ur ac
y (in
% )
0
25
50
75
100
AlexNet AlexNet-R GoogLeNet GoogLeNet-R ResNet ResNet-R
1x1 2x2 4x4 8x8
Figure A1: Standard models substantially outperform R models when tested on scrambled images due to their capability of recognizing images based on textures. See Fig. A6 for examples of scrambled images and their top-5 predictions from ResNet-R and ResNet (which achieves a remarkable accuracy of 94.77%). Here, we report top-1 accuracy scores (in %) on the scrambled images whose original versions were correctly labeled by both standard and R classifiers (hence, the 100% for 1× 1 blue bars).
Figure A2: conv1 filters of AlexNet-R are smoother than the filters in standard AlexNet. In each column, we show an AlexNet filter conv1 filter and their nearest filter (bottom) from the AlexNet-R. Above each pair of filters are their Spearman rank correlation score (e.g. r: 0.36) and their total variation (TV) difference (i.e. smoothness differences). Standard AlexNet filters are mostly noisier than their nearest R filter (i.e. positive TV differences).
AlexNet 11×11×3 AlexNet-R
GoogLeNet 7×7×3 GoogLeNet-R
ResNet 7×7×3 ResNet-R
Figure A3: All 64 conv1 filters of in each standard network (left) and its counterpart (right). The filters of R models (right) are smoother and less diverse compared to those in standard models (left). Especially, the edge filters of standard networks are noisier and often contain multiple colors in them.
(a) Real (b) Scrambled (c) Stylized (d) Contour (e) Silhouette
Figure A4: Applying different transformation that remove shape/texture on real images. We randomly show an example of 7 out of 16 COCO coarser classes. See Table 3 for classification accuracy scores on different images distortion dataset in 1000 classes(Except for Silhouette). *Note: Silhouette are validate in 16 COCO coarse classes.
0 1000 2000 3000 4000 5000 TV of channels for Clean Images
0
1000
2000
3000
4000
5000
TV o
f c ha
nn el
s f or
N oi
sy Im
ag es
AlexNet conv1 AlexNet-R conv1
(a) conv1
0 500 1000 1500 TV of channels for Clean Images
0
250
500
750
1000
1250
1500
1750
TV o
f c ha
nn el
s f or
N oi
sy Im
ag es
AlexNet conv2 AlexNet-R conv2
(b) conv2
0 200 400 600 TV of channels for Clean Images
0
100
200
300
400
500
600
700
TV o
f c ha
nn el
s f or
N oi
sy Im
ag es
AlexNet conv3 AlexNet-R conv3
(c) conv3
0 100 200 300 400 TV of channels for Clean Images
0
100
200
300 400 TV o f c ha nn el s f or N oi sy
Im ag es AlexNet conv4 AlexNet-R conv4
(d) conv4
0 50 100 150 200 TV of channels for Clean Images
0
50
100
150
200
TV o
f c ha
nn el
s f or
N oi
sy Im
ag es
AlexNet conv5 AlexNet-R conv5
(e) conv5
Figure A5: Each point shows the Total Variation (TV) of the activation maps on clean and noisy images for an AlexNet or AlexNet-R channel. We observe a striking difference in conv1: The smoothness of R channels remains unchanged before and after noise addition, explaining their superior performance in classifying noisy images. While the channel smoothness differences (between two networks) are gradually smaller in higher layers, we still observe R channels are consistently smoother.
R es
N et
-R R
es N
et R
es N
et -R
R es
N et
R es
N et
-R R
es N
et 1× 1 2× 2 4× 4 8× 8 1× 1 2× 2 4× 4 8× 8
Figure A6: ResNet-R, on average across the three patch sizes, underperforms the standard ResNet model. Surprisingly, we observe that ResNet correctly classifies the image to their ground truth class even when the image is randomly shuffled into 16 patches, e.g., ResNet classifies the 4 × 4 case of rule, safe with∼ 100% confidence. The results are consistent with the strong texture bias of ResNet and shape bias of ResNet-R (described in Sec. 3.2.1).
Figure A7: For each network, we show the number of channels in each of the 6 NetDissect categories (color, texture, etc) in Bau et al. (2017). Across all three architectures, R models consistently have more color and texture channels while substantially fewer object detectors.
str ipe
d
ba nd
ed che qu ere d fre ckl ed fril ly wa ffle d int erl ace d ve ine d wo ve n lac elik e po tho ledline d me she d po rou s ga uzy zig zag ge d cry sta llin e fle cke d spr ink led sta ine d gro ov ed sm ea red bu mp y fib rou s ple ate d wr ink led
cro ssh
atc he d kn itte d
pe rfo
rat ed cra cke d ho ne yco mb ed pa isle y po lka -do tte d stu dd ed cob we bb ed spi ral led gri d do tte d sw
irly 40
20
0
20
40
60
80
In cr
ea se
in n
um be
r o f c
ha nn
el s
80
23 21 14 13 11 9 8 5 4 4 4 2 2 2 1 1 1 1 1 1
-1 -1 -2 -2 -2 -3 -3 -5 -6 -8 -9 -10-10-10-11 -20
-26 -35
(a) Differences in texture channels between AlexNet and AlexNet-R
[Figure A8b bar chart: per-concept differences in the number of object channels (AlexNet-R minus AlexNet). The x-axis lists NetDissect object concepts from sky and ceiling (largest increases) to person, cat, and dog (largest decreases); the y-axis is the increase in the number of channels.]
(b) Differences in object channels between AlexNet and AlexNet-R
Figure A8: In each bar plot, each column shows the difference in the number of channels (between AlexNet-R and AlexNet) for a given concept, e.g. striped or banded. That is, yellow bars (i.e. positive numbers) show how many more channels the R model has than the standard network for the same concept. Vice versa, teal bars represent the concepts for which the R model has fewer channels. The NetDissect concept names are given on the x-axis. Top: In the texture category, the R model has many more detectors for simple texture patterns, e.g. striped and banded (see Fig. A11 for example patterns in these concepts). Bottom: In the object category, AlexNet-R often prefers simpler-object detectors, e.g. sky or ceiling (Fig. A8b; leftmost), while the standard network has more complex-object detectors, e.g. dog and cat (Fig. A8b; rightmost).
[Figure A9a bar chart: number of object-detector channels per layer (conv1–conv5). AlexNet: 0, 18, 71, 46, 104; AlexNet-R: 5, 28, 57, 33, 49.]
(a) Number of object detectors per AlexNet layer
[Figure A9b bar chart: number of color-detector channels per layer (conv1–conv5). AlexNet: 14, 20, 19, 5, 12; AlexNet-R: 8, 42, 45, 32, 25.]
(b) Number of color detectors per AlexNet layer
Figure A9: In higher layers (here, conv4 and conv5), AlexNet-R has fewer object detectors but more color-detector units than standard AlexNet. The differences between the two networks increase as we go from lower to higher layers. Because both networks share an identical architecture, the plots here demonstrate a substantial shift in the functionality of the neurons as the result of adversarial training—detecting more colors and textures and fewer objects. Similar trends were also observed between the standard and R models of the GoogLeNet and ResNet-50 architectures.
[Figure A10a: mean diversity score per layer (conv1–conv5) for AlexNet vs. AlexNet-R.]
(a) AlexNet layer-wise mean diversity
[Figure A10b: mean diversity score per layer (conv1, layer1–layer4) for ResNet vs. ResNet-R.]
(b) ResNet layer-wise mean diversity
Figure A10: In each plot, we show the mean diversity scores across all channels in each layer. Both AlexNet-R and ResNet-R consistently have channels with lower diversity scores (i.e. detecting fewer unique concepts) than the standard counterparts.
Figure A20: AlexNet conv4 channel 19, with Shape and Texture scores of 18 and 22, respectively. It has a NetDissect label of spiralled (IoU: 0.0568) under the texture category. Although this neuron is in the NetDissect texture category, the misclassified images suggest that it helps in both shape- and texture-based recognition. Top: Top-49 images that highest-activated this channel. Middle: Misclassified images in the shape category (18 images). Bottom: Misclassified images in the texture category (22 images). | 1. What is the main contribution of the paper regarding the relationships between adversarially trained CNNs and shape-based representation?
2. What are the strengths of the paper in terms of methodology and experiments?
3. Do you have any concerns about the novelty of the paper's findings compared to prior works?
4. How do the results support the conclusions made by the authors?
5. Are there any suggestions or recommendations for additional experiments to further justify the claims made in the paper? | Review | Review
This paper takes a step further in understanding the relationship between adversarially trained CNNs (R-CNNs) and shape-based representation, and delves deeper into R-CNNs by studying their hidden units. First, it shows that R-CNNs prefer shape cues, based on random-shuffle, Stylized-ImageNet, and silhouette experiments. Then, it tests R-CNNs on ImageNet-C to show the weak connection between shape bias and robustness against common corruptions. Finally, it studies the hidden units via qualitative tools including NetDissect.
The studied direction is important for both the representation learning and robustness communities. The methodology of this paper and all experiments are technically sound. Using Network Dissection is a good choice here. Testing on three benchmarks and evaluating three different network architectures helps verify that the conclusions are general. The results are sufficient and would support their arguments, though more analysis would add support to the claims. Overall the paper is easy to follow, and the reviewer thinks the experiments are easy to replicate.
Below are some concerns and suggestions:
The novelty is slightly limited. Much of the understanding is complementary to findings in the literature, such as [1] that adversarially trained models are shape-biased, [2] that there are no significant correlations between shape bias and robustness against common corruptions, and [3] that low-frequency components help generalization. Many experimental methods are the same as in previous work but are only performed on adversarially trained models, e.g. the scrambled images (Sec. 3.2.1) are the same as the random shuffling in [4], testing on Stylized-ImageNet [2], ImageNet-C, etc. Also, there are no new techniques proposed; basically, all the techniques used are from the literature, like Network Dissection.
According to Table 3, compared to the Shape-less column, the difference in the Texture-less column is not that significant. The reviewer wonders if the authors would perform extra experiments to verify that the R models prefer shape information significantly more, such as testing on edge maps. The edge map is easy to derive from the silhouette and may be more suitable to test here. The values of the edge map and silhouette may be set to binary or greyscale [0, 255].
Currently, only qualitative results are provided to justify that the R models contain smoother filters than the standard ones. The reviewer wonders if a quantitative criterion can be designed and reported.
In the blocking pixel-wise noise section (Page 6), the authors claim that R models are more robust against Gaussian additive noise based on Figure 3. However, ImageNet-C contains such distorted images, and S models outperform R models there. The reviewer wonders if the authors can test on Gaussian-additive-noise-distorted images and report the results to further justify the claim.
To sum up, the reviewer would vote 5 currently -- due to its limited novelty, although some understanding of the representation of adversarially trained models is provided.
[1]Interpreting Adversarially Trained Convolutional Neural Network
[2]Imagenet-trained Cnns are Biased towards Texture; Increasing Shape Bias improves Accuracy and Robustness
[3]High-frequency Component Helps Explain the Generalization of Convolutional Neural Networks
[4]Defective Convolutional Layers Learn Robust CNNs
-------------------after rebuttal-----------------------
I thank the authors for their rebuttal. Since the authors reply near the discussion phase end, I cannot ask follow-up questions.
The answers partially address my concerns and thus I would raise my score to 6.
For novelty, the authors answer three points. For the first point, fusing two channels of information is not so convincing on the novelty aspect. Also, there is an extra cost to collect and process the images. For the second point, how would you formulate a regularizer? |
ICLR | Title
The shape and simplicity biases of adversarially robust ImageNet-trained CNNs
Abstract
Adversarial training has been the topic of dozens of studies and a leading method for defending against adversarial attacks. Yet, it remains largely unknown (a) how adversarially-robust ImageNet classifiers (R classifiers) generalize to out-ofdistribution examples; and (b) how their generalization capability relates to their hidden representations. In this paper, we perform a thorough, systematic study to answer these two questions across AlexNet, GoogLeNet, and ResNet-50 architectures. We found that while standard ImageNet classifiers have a strong texture bias, their R counterparts rely heavily on shapes. Remarkably, adversarial training induces three simplicity biases into hidden neurons in the process of “robustifying” the network. That is, each convolutional neuron in R networks often changes to detecting (1) pixel-wise smoother patterns i.e. a mechanism that blocks highfrequency noise from passing through the network; (2) more lower-level features i.e. textures and colors (instead of objects); and (3) fewer types of inputs. Our findings reveal the interesting mechanisms that made networks more adversarially robust and also explain some recent findings e.g. why R networks benefit from much larger capacity (Xie & Yuille, 2020) and can act as a strong image prior in image synthesis (Santurkar et al., 2019).
N/A
Adversarial training has been the topic of dozens of studies and a leading method for defending against adversarial attacks. Yet, it remains largely unknown (a) how adversarially-robust ImageNet classifiers (R classifiers) generalize to out-ofdistribution examples; and (b) how their generalization capability relates to their hidden representations. In this paper, we perform a thorough, systematic study to answer these two questions across AlexNet, GoogLeNet, and ResNet-50 architectures. We found that while standard ImageNet classifiers have a strong texture bias, their R counterparts rely heavily on shapes. Remarkably, adversarial training induces three simplicity biases into hidden neurons in the process of “robustifying” the network. That is, each convolutional neuron in R networks often changes to detecting (1) pixel-wise smoother patterns i.e. a mechanism that blocks highfrequency noise from passing through the network; (2) more lower-level features i.e. textures and colors (instead of objects); and (3) fewer types of inputs. Our findings reveal the interesting mechanisms that made networks more adversarially robust and also explain some recent findings e.g. why R networks benefit from much larger capacity (Xie & Yuille, 2020) and can act as a strong image prior in image synthesis (Santurkar et al., 2019).
1 INTRODUCTION
Given excellent test-set performance, deep neural networks often fail to generalize to out-of-distribution (OOD) examples (Nguyen et al., 2015), including “adversarial examples”, i.e. modified inputs that are imperceptibly different from the real data but change predicted labels entirely (Szegedy et al., 2014). Importantly, adversarial examples can transfer between models and cause unseen machine learning (ML) models to misbehave (Papernot et al., 2017), threatening the security and reliability of ML applications (Akhtar & Mian, 2018). Adversarial training—teaching a classifier to correctly label adversarial examples (instead of real data)—has been a leading method in defending against adversarial attacks and the most effective defense in ICLR 2018 (Athalye et al., 2018). Besides improved performance on adversarial examples, test-set accuracy can also be improved, for some architectures, when real images are properly incorporated into adversarial training (Xie et al., 2020). It is therefore important to study how the standard adversarial training (by Madry et al. 2018) changes the hidden representations and generalization capabilities of neural networks.
On smaller datasets, Zhang & Zhu (2019) found that adversarially-robust networks (hereafter, R networks) rely heavily on shapes (instead of textures) to classify images. Intuitively, training on pixel-wise noisy images would encourage R networks to focus less on local statistics (e.g. textures) and instead harness global features (e.g. shapes) more. However, an important, open question is:
Q1: On ImageNet, do R networks still prefer shapes over textures?
It remains unknown whether such shape preference carries over to the large-scale ImageNet (Russakovsky et al., 2015), which often induces a large texture bias into networks (Geirhos et al., 2019) e.g. to separate ∼150 four-legged species in ImageNet. Also, this shape-bias hypothesis suggested by Zhang & Zhu (2019) seems to contradict the recent findings that R networks on ImageNet act as a strong texture prior i.e. they can be successfully used for many image translation tasks without any extra image prior (Santurkar et al., 2019). The above discussion leads to a follow-up question:
Q2: If an R network has a stronger preference for shapes than standard ImageNet networks (hereafter, S networks), will it perform better on OOD distorted images?
Networks trained to be more shape-biased can generalize better to many unseen ImageNet-C (Hendrycks & Dietterich, 2019) image corruptions than S networks, which have a strong texture bias (Brendel & Bethge, 2019). In contrast, there was also evidence that classifiers trained on one type of images often do not generalize well to others (Geirhos et al., 2018; Nguyen et al., 2015; Kang et al., 2019). Importantly, R networks often underperform S networks on original test sets (Tsipras et al., 2019) perhaps due to an inherent trade-off (Madry et al., 2018), a mismatch between real vs. adversarial distributions (Xie et al., 2020), or a limitation in architectures—AdvProp helps improving performance of EfficientNets but not ResNets (Xie et al., 2020).
Most previous work aimed at understanding the behaviors of R classifiers as a function but little is known about the internal characteristics of R networks and, furthermore, their connections to the shape bias and generalization performance. Here, we ask:
Q3: How did adversarial training change the hidden neural representations to make classifiers more shape-biased and adversarially robust?
In this paper, we harness the common benchmarks in ML interpretability and neuroscience—cue-conflict (Geirhos et al., 2019), NetDissect (Bau et al., 2017), and ImageNet-C—to answer the three questions above via a systematic study across three different convolutional architectures—AlexNet (Krizhevsky et al., 2012), GoogLeNet (Szegedy et al., 2015), and ResNet-50 (He et al., 2016)—trained to perform image classification on the large-scale ImageNet dataset (Russakovsky et al., 2015). Our main findings include:1
1. R classifiers trained on ImageNet prefer shapes over textures∼67% of the time (Sec. 3.1)— a stark contrast to the S classifiers, which use shapes at only ∼25%.
2. Consistent with the strong shape bias, R classifiers interestingly outperform S counterparts on texture-less, distorted images (stylized and silhouetted images) (Sec. 3.2.2).
3. Adversarial training makes R networks more robust by (1) blocking pixel-wise input noise via smooth filters (Sec. 3.3.1); (2) narrowing the input range that highly activates neurons to simpler patterns, effectively reducing the space of adversarial inputs (Sec. 3.3.2).
4. Units that detect texture patterns (according to NetDissect) are not only useful to texturebased recognition as expected but can be also highly useful to shape-based recognition (Sec. 3.4). By aligning NetDissect and cue-conflict frameworks, we found that hidden neurons in R networks are surprisingly neither strongly shape-biased nor texture-biased, but instead generalists that detect low-level features (Sec. 3.4).
2 NETWORKS AND DATASETS
Networks To understand the effects of adversarial training across a wide range of architectures, we compare each pair of S and R models while keeping their network architectures constant. That is, we conduct all experiments on two groups of classifiers: (a) standard AlexNet, GoogLeNet, & ResNet-50 (hereafter, ResNet) models pre-trained on the 1000-class 2012 ImageNet dataset; and (b) three adversarially-robust counterparts i.e. AlexNet-R, GoogLeNet-R, & ResNet-R which were trained via adversarial training (see below) (Madry et al., 2018).
Training A standard classifier with parameters θ was trained to minimize the cross-entropy loss L over pairs of (training example x, ground-truth label y) drawn from the ImageNet training set D:
argmin_θ E_{(x,y)∼D} [ L(θ, x, y) ]   (1)
On the other hand, we trained each R classifier via Madry et al. (2018) adversarial training framework where each real example x is changed by a perturbation ∆:
argmin_θ E_{(x,y)∼D} [ max_{∆∈P} L(θ, x + ∆, y) ]   (2)
where P is the perturbation range (Madry et al., 2018), here within an L2 norm. Hyperparameters The S models were downloaded from the PyTorch model zoo (PyTorch, 2019). We trained all R models using the robustness library (Engstrom et al., 2019), using the same hyperparameters as in Engstrom et al. (2020); Santurkar et al. (2019); Bansal et al. (2020). That is, adversarial examples were generated using Projected Gradient Descent (PGD) (Madry et al., 2018) with an L2 norm constraint of 3, a step size of 0.5, and 7 PGD-attack steps. R models were trained using an SGD optimizer for 90 epochs with a momentum of 0.9, an initial learning rate of 0.1 (which is reduced 10 times every 30 epochs), a weight decay of 10−4, and a batch size of 256 on 4 Tesla-V100 GPUs.
1All code and data will be available on github upon publication.
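To make the setup above concrete, the following is a minimal PyTorch sketch of one L2-PGD adversarial training step in the spirit of Eq. (2). It is an illustration rather than the exact implementation in the robustness library; the attack settings mirror the hyperparameters stated above (eps = 3, step size = 0.5, 7 steps), and the clamping to [0, 1] assumes unnormalized image tensors.

```python
import torch
import torch.nn.functional as F

def l2_pgd_attack(model, x, y, eps=3.0, step_size=0.5, steps=7):
    """Generate L2-bounded PGD adversarial examples (illustrative sketch)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Take a normalized gradient-ascent step in L2 geometry.
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        x_adv = x_adv.detach() + step_size * grad / g_norm
        # Project the perturbation back onto the L2 ball of radius eps around x.
        delta = x_adv - x
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta * torch.clamp(eps / d_norm, max=1.0)
        x_adv = (x + delta).clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One step of Eq. (2): minimize the loss on the worst-case perturbed inputs."""
    x_adv = l2_pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```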
Compared to the standard counterparts, R models have substantially higher adversarial accuracy but lower ImageNet validation-set accuracy (Table 1). To compute adversarial accuracy, we perturbed validation-set images with the same PGD attack settings as used in training.
Correctly-labeled image subsets: ImageNet-CL Following Bansal et al. (2020), to compare the behaviors of two networks of identical architectures on the same inputs, we tested them on the largest ImageNet validation subset (hereafter, ImageNet-CL) where both models have 100% accuracy. The sizes of the three subsets for three architectures—AlexNet, GoogLeNet, and ResNet—are respectively: 17,693, 24,581, and 27,343. On modified ImageNet images (e.g. ImageNet-C), we only tested each pair of networks on the modified images whose original versions exist in ImageNet-CL. That is, we wish to gain deeper insights into how networks behave on correctly-classified images, and then how their behaviors change when some input feature (e.g. textures or shapes) is modified.
3 EXPERIMENT AND RESULTS
3.1 DO IMAGENET ADVERSARIALLY ROBUST NETWORKS PREFER SHAPES OR TEXTURES?
It is important to know which type of feature a classifier uses when making decisions. While standard ImageNet networks often carry a strong texture bias (Geirhos et al., 2019), it is unknown whether their adversarially-robust counterparts would be heavily texture- or shape-biased. Here, we test this hypothesis by comparing S and R models on the well-known cue-conflict dataset (Geirhos et al., 2019). That is, we feed “stylized” images provided by Geirhos et al. (2019) that contain contradicting texture and shape cues (e.g. elephant skin on a cat silhouette) and count the times a model uses textures or shapes (i.e. outputting elephant or cat) when it makes a correct prediction.
Experiment Our procedure follows Geirhos et al. (2019). First, we excluded 80 images that do not have conflicting cues (e.g. cat textures on cat shapes) from their 1,280-image dataset. Each texture or shape cue belongs to one of 16 MS COCO (Caesar et al., 2018) coarse labels (e.g. cat or elephant). Second, we ran the networks on these images and converted their 1000-class probability vector outputs into 16-class probability vectors by taking the average over the probabilities of the fine-grained classes that are under the same COCO label. Third, we took only the images that each network correctly labels (i.e. into the texture or shape class), which ranges from 669 to 877 images (out of 1,200) for 6 networks and computed the texture and shape accuracies over 16 classes.
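A minimal sketch of the 1000-to-16-class aggregation described above is shown below. The `coarse_to_fine` mapping from each COCO coarse label to its ImageNet class indices is a hypothetical input here; in practice it would come from the 16-class mapping released with Geirhos et al. (2019).

```python
import torch

def to_coarse_probs(logits_1000, coarse_to_fine):
    """Average fine-grained ImageNet probabilities under each of the 16 COCO labels.

    logits_1000    : (batch, 1000) network outputs.
    coarse_to_fine : dict mapping a COCO label to a list of ImageNet class indices.
    """
    probs = torch.softmax(logits_1000, dim=1)
    coarse_labels = sorted(coarse_to_fine)
    cols = [probs[:, coarse_to_fine[c]].mean(dim=1) for c in coarse_labels]
    return coarse_labels, torch.stack(cols, dim=1)   # (batch, 16)

# A cue-conflict image counts toward the shape (or texture) accuracy when the argmax
# over the 16 coarse classes equals its shape (or texture) label.
```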
Results On average, over three architectures, R classifiers rely on shapes ≥ 67.08% of the time i.e. ∼2.7× higher than 24.56% of the S models (Table 2). In other words, by replacing the real examples with adversarial examples, adversarial training causes the heavy texture bias of ImageNet classifiers (Geirhos et al., 2019; Brendel & Bethge, 2019) to drop substantially (∼2.7×).
3.2 DO ROBUST NETWORKS GENERALIZE TO UNSEEN TYPES OF DISTORTED IMAGES?
We have found that changing from standard training to adversarial training changes ImageNet classifiers entirely from texture-biased into shape-biased (Sec. 3.1). Furthermore, Geirhos et al. (2019)
found that some training regimes that encourage classifiers to focus more on shape can improve their performance on unseen image distortions. Therefore, it is interesting to test whether R models—a type of shape-biased classifiers— would generalize well to any OOD image types.
ImageNet-C We compare S and R networks on the ImageNet-C dataset which was designed to test model robustness on 15 common types of image corruptions (Fig. 1c), where several shape-biased classifiers were known to outperform S classifiers (Geirhos et al., 2019). Here, we tested each pair of S and R models on the ImageNet-C distorted images whose original versions were correctly labeled by both (i.e. in ImageNet-CL sets; Sec. 2).
Results R models show no generalization boost on ImageNet-C i.e. they performed on-par or worse than the S counterparts (Table 3c). This is consistent with the findings in Table 4 in Geirhos et al. (2019) that a stronger shape bias does not necessarily imply better generalizability.
To further understand the generalization capability of R models, we tested them on two controlled image types where either shape or texture cues are removed from the original, correctly-labeled ImageNet images. Note that when both shape and texture cues are present e.g. in cue-conflict images, R classifiers consistently prefer shape over texture i.e. a shape bias. However, this bias is orthogonal to the performance when only either texture or shape cues are present.
3.2.1 PERFORMANCE ON SHAPE-LESS, TEXTURE-PRESERVING IMAGES
We created shape-less images by dividing each ImageNet-CL image into a grid of p×p even patches where p ∈ {2, 4, 8} and re-combining them randomly into a new “scrambled” version (Fig. 1d). On average, over three grid types, we observed a larger accuracy drop in R models compared to S models, ranging from 1.6× to 2.04× lower accuracy (Table 3d). That is, R model performance drops substantially when object shapes are removed—another evidence for their reliance on shapes. Compare predictions of ResNet vs. ResNet-R for scrambled images in Fig. A6. Remarkably, ResNet accuracy only drops from 100% to 94.77% on the 2× 2 scrambled images (Fig. A1).
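The scrambling transform can be sketched in a few lines of NumPy; this illustrative version assumes the image height and width are divisible by p (true for 224 × 224 crops with p in {2, 4, 8}).

```python
import numpy as np

def scramble(image, p, rng=None):
    """Shuffle an (H, W, C) image into a random arrangement of its p x p patches."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape[0] // p, image.shape[1] // p
    patches = [image[i * h:(i + 1) * h, j * w:(j + 1) * w]
               for i in range(p) for j in range(p)]
    order = rng.permutation(len(patches))
    rows = [np.concatenate([patches[order[i * p + j]] for j in range(p)], axis=1)
            for i in range(p)]
    return np.concatenate(rows, axis=0)
```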
3.2.2 PERFORMANCE ON TEXTURE-LESS, SHAPE-PRESERVING IMAGES
Following Geirhos et al. (2019), we tested R models on three types of texture-less images where the texture is increasingly removed: (1) stylized ImageNet images where textures are randomly modified; (2) binary, black-and-white, i.e. B&W, images (Fig. 1f); and (3) silhouette images where the texture information is completely removed (Fig. 1e, g).
Stylized ImageNet To construct a set of stylized ImageNet images (see Fig. 1e), we took all ImageNet-CL images (Sec. 2) and changed their textures via a stylization procedure in Geirhos et al. (2019), which harnesses the style transfer technique (Gatys et al., 2016) to apply a random style to each ImageNet “content” image.
B&W images For all ImageNet-CL images, we used the same process described in Geirhos et al. (2019) to generate silhouettes, but we did not manually select and modify the images. We used the ImageMagick command-line tool (ImageMagick) to binarize ImageNet images into B&W images via the following steps:
1. convert image.jpeg image.bmp
2. potrace --svg image.bmp -o image.svg
3. rsvg-convert image.svg > image.jpeg
Silhouette For all ImageNet-CL images, we obtained their segmentation maps via a PyTorch DeepLab-v2 model (Chen et al., 2017) pre-trained on MS COCO-Stuff. We used the ImageNet-CL images that belong to a set of 16 COCO coarse classes in Geirhos et al. (2019) (e.g. bird, bicycle, airplane, etc.). When evaluating classifiers, an image is considered correctly labeled if its ImageNet predicted label is a subclass of the correct class among the 16 COCO classes (Fig. 1f; mapping sandpiper→ bird). Results On all three texture-less sets, R models consistently outperformed their S counterparts (Table 3e–g)—a remarkable generalization capability, especially on B&W and silhouette images where all texture information is mostly removed.
3.3 HOW DOES ADVERSARIAL TRAINING MAKE NETWORKS MORE ROBUST?
What internal mechanisms help R networks become more robust? Here, we shed light into this question by analyzing R networks at the weight (Sec. 3.3.1) and neuron (Sec. 3.3.2) levels.
3.3.1 WEIGHT LEVEL: SMOOTH FILTERS TO BLOCK PIXEL-WISE NOISE
Consistent with Yin et al. (2019); Gilmer et al. (2019), we observed that AlexNet-R substantially outperforms AlexNet not only on adversarial examples but also on several high-frequency corruption types (e.g. additive noise) in ImageNet-C (Table A1).
Smoother filters To explain this phenomenon, we visualized the weights of all 64 conv1 filters (11×11×3), in both AlexNet and AlexNet-R, as RGB images. We compare each AlexNet conv1 filter with its nearest conv1 filter (via Spearman rank correlation) in AlexNet-R. Remarkably, R filters appear qualitatively much smoother than their counterparts (Fig. 2a). The R filter bank is also less diverse e.g. R edge detectors are often black-and-white in contrast to the colorful AlexNet edges (Fig. 2b). A similar contrast was also seen for the GoogLeNet and ResNet models (Fig. A3).
We also quantify the smoothness, in total variation (TV), of the filters of all 6 models (Table 4) and found that, on average, the filters in R networks are much smoother. For example, the mean filter TV of AlexNet-R is about 2 times smaller than that of AlexNet. Also, in lower layers, the filters in R classifiers are consistently 2 to 3 times smoother (Fig. A27).
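One straightforward way to compute such a filter-smoothness score is sketched below; it uses an anisotropic total variation over each filter's spatial dimensions and may differ in normalization from the exact quantity reported in Table 4.

```python
import torch

def filter_total_variation(weight):
    """Mean anisotropic TV of a conv filter bank of shape (out, in, k, k)."""
    dh = (weight[..., 1:, :] - weight[..., :-1, :]).abs().sum(dim=(-2, -1))
    dw = (weight[..., :, 1:] - weight[..., :, :-1]).abs().sum(dim=(-2, -1))
    return (dh + dw).sum(dim=1).mean()   # sum over input channels, average over filters

# Illustrative usage with torchvision models (conv1 of AlexNet is features[0]):
# tv_standard = filter_total_variation(alexnet.features[0].weight.detach())
# tv_robust   = filter_total_variation(alexnet_r.features[0].weight.detach())
```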
Blocking pixel-wise noise We hypothesize that the smoothness of filters makes R classifiers more robust against noisy images. To test this hypothesis, we computed the total variation (TV) (Rudin et al., 1992) of the channels across 5 conv layers when feeding ImageNet-CL images and their noisy versions (Fig. 1c; ImageNet-C Level 1 additive noise ∼ N(0, 0.08)) to S and R models. At conv1, the smoothness of R activation maps remains almost unchanged before and after noise addition (Fig. 3a; yellow circles are on the diagonal line). In contrast, the conv1 filters in standard AlexNet allow Gaussian noise to pass through, yielding larger-TV channels (Fig. 3a; blue circles are mostly above the diagonal). That is, the smooth filters in R models can indeed filter out pixel-wise Gaussian noise even though R models were not explicitly trained on this image type! Interestingly, Ford et al. (2019) found that the reverse also works: training with Gaussian noise can improve adversarial robustness.
In higher layers, it is intuitive that the pixel-wise noise added to the input image might not necessarily cause activation maps, in both S and R networks, to be noisy because higher-layered units detect more abstract concepts. However, interestingly, we still found that R channels have consistently lower mean TV (Fig. 3b–c). Our result suggests that most of the de-noising effects take place at lower layers (which contain generic features) instead of higher layers.
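The clean-versus-noisy comparison can be reproduced at a sketch level with a forward hook that captures a layer's activation maps and the same TV measure; the noise level below imitates the Level-1 additive Gaussian noise mentioned above, and the layer choice is only illustrative.

```python
import torch

def channel_tv(model, layer, x):
    """Per-channel total variation of `layer`'s activation maps for a batch x."""
    captured = {}
    handle = layer.register_forward_hook(lambda m, inp, out: captured.update(a=out.detach()))
    with torch.no_grad():
        model(x)
    handle.remove()
    a = captured["a"]                                              # (batch, C, H, W)
    tv = ((a[..., 1:, :] - a[..., :-1, :]).abs().sum(dim=(-2, -1)) +
          (a[..., :, 1:] - a[..., :, :-1]).abs().sum(dim=(-2, -1)))
    return tv.mean(dim=0)                                          # average over the batch

# Illustrative comparison under additive Gaussian noise with std 0.08:
# noisy = (x + 0.08 * torch.randn_like(x)).clamp(0, 1)
# tv_clean = channel_tv(net, net.features[0], x)
# tv_noisy = channel_tv(net, net.features[0], noisy)
```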
3.3.2 NEURON LEVEL: ROBUST NEURONS PREFER LOWER-LEVEL AND FEWER INPUTS
Here, via NetDissect framework, we wish to characterize how adversarial training changed the hidden neurons in R networks to make R classifiers more adversarially robust.
Network Dissection (hereafter, NetDissect) is a common framework for quantifying the functions of a neuron by computing the Intersection over Union (IoU) between each activation map (i.e. channels) and the human-annotated segmentation maps for the same input images. That is, each channel is given an IoU score per human-defined concept (e.g. dog or zigzagged) indicating its accuracy in detecting images of that concept. A channel is tested for its accuracy on all ∼1,400 concepts, which span across six coarse categories: object, part, scene, texture, color, and material (Bau et al., 2017) (c.f. Fig. A11 for example NetDissect images in texture and color concepts). Following Bau et al. (2017), we assign each channel C a main functional label i.e. the concept that C has the highest IoU with. In both S and R models, we ran NetDissect on all 1152, 5808, and 3904 channels from,
respectively, 5, 12, and 5 main convolutional layers (post-ReLU) of the AlexNet, GoogLeNet, and ResNet-50 architectures (c.f. Sec. A for more details of layers used).
Shift to detecting more low-level features i.e. colors and textures We found a consistent trend— adversarial training resulted in substantially more filters that detect colors and textures (i.e. in R models) in exchange for fewer object and part detectors. For example, throughout the same GoogLeNet architecture, we observed a 102% and a 34% increase of color and texture detectors, respectively, in the R model, but a 20% and a 26% fewer object and part detectors, compared to the S model (c.f. Fig. 4a). After adversarial training,∼11%, 15%, and 10% of all hidden neurons (in the tested layers) in AlexNet, GoogLeNet, and ResNet, respectively, shift their roles to detecting lowerlevel features (i.e. textures and colors) instead of higher-level features (Fig. A12). Across three architectures, the increases in texture and color channels are often larger in higher layers. While lower-layered units often learn more generic features, higher-layered units are more task-specific (Nguyen et al., 2016a), hence the largest functional shifts in higher layers.
We also compare the shape-biased ResNet-R with ResNet-SIN i.e. a ResNet-50 trained exclusively on stylized images (Geirhos et al., 2019), which also has a strong shape bias of 81.37%. 2 Interestingly, similar to ResNet-R, ResNet-SIN also have more low-level feature detectors (colors and textures) and fewer high-level feature detectors (objects and parts) than the vanilla ResNet (Fig. A28).
Shift to detecting simpler objects Analyzing the concepts in the object category, where we observed the largest changes in channel count, we found evidence that neurons change from detecting complex to simpler objects. That is, for each NetDissect concept, we computed the difference in the numbers of channels between the S and R model. In the same object category, the AlexNet-R model has substantially fewer channels detecting complex concepts, e.g. −30 dog, −13 cat, and −11 person detectors (Fig. A8b; rightmost columns), compared to the standard network. In contrast, the R model has more channels detecting simpler concepts, e.g. +40 sky and +12 ceiling channels (Fig. A8b; leftmost columns). The top-49 images that highest-activated R units across five conv layers also show their strong preference for simpler backgrounds and objects (Figs. A15–A19).
2model A in https://github.com/rgeirhos/texture-vs-shape/
Shift to detecting fewer unique concepts The previous sections have revealed that neurons in R models often prefer images that are pixel-wise smoother (Sec. 3.3.1) and of lower-level features (Sec. 3.3.2), compared to S neurons. Another important property of the complexity of the function computed at each neuron is the diversity of types of inputs detected by the neuron (Nguyen et al., 2016b; 2019). Here, we compare the diversity score of NetDissect concepts detected by units in S and R networks. For each channel C, we calculated a diversity score i.e. the number of unique concepts that C detects with an IoU score ≥ 0.01. Interestingly, on average, an R unit fires for 1.16 times fewer unique concepts than an S unit (22.43 vs. 26.07; c.f. Fig. A10a). Similar trends were observed in ResNet (Fig. A10b). Qualitatively comparing the highest-activation training-set images by the highest-IoU channels in both networks, for the same most-frequent concepts (e.g. striped), often confirms a striking difference: R units prefer a less diverse set of inputs (Fig. A12). As R hidden units fire for fewer concepts, i.e. significantly fewer inputs, the space for adversarial inputs to cause R models to misbehave is strictly smaller.
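Given a per-channel table of NetDissect IoU scores, the diversity score defined above reduces to a simple count; `iou_table`, an array with one row per channel and one column per concept, is a hypothetical export from the NetDissect results.

```python
import numpy as np

def diversity_scores(iou_table, threshold=0.01):
    """Number of unique concepts each channel detects with IoU >= threshold."""
    return (np.asarray(iou_table) >= threshold).sum(axis=1)
```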
3.4 WHICH NEURONS ARE IMPORTANT FOR SHAPE- OR TEXTURE-BASED RECOGNITION?
To understand how the changes in R hidden neurons (Sec. 3.3) relate to the shape bias of R classifiers (Sec. 3.1), here, we zero out every channel, one at a time, in S and R networks and measure the performance drop in recognizing shape and texture from cue-conflict images.
Shape & Texture scores For each channel, we computed a Shape score i.e. the number of images originally correctly labeled into the shape class by the network but that, after the ablation, are labeled differently (examples in Fig 5a–b). Similarly, we computed a Texture score per channel. The Shape and Texture scores quantify the importance of a channel in classification using shapes or textures.
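The ablation itself can be sketched with a forward hook that zeroes a single channel before the rest of the forward pass; this hook-based mechanism is one straightforward way to implement the procedure described above, not necessarily the authors' exact code.

```python
import torch

def predictions_with_channel_ablated(model, layer, channel, images):
    """Zero out one channel of `layer`'s output and return the resulting predictions."""
    def zero_channel(module, inputs, output):
        output = output.clone()
        output[:, channel] = 0.0
        return output                      # returning a value replaces the layer's output
    handle = layer.register_forward_hook(zero_channel)
    with torch.no_grad():
        preds = model(images).argmax(dim=1)
    handle.remove()
    return preds

# A channel's Shape score = number of cue-conflict images labeled with the shape class
# before ablation that receive a different label afterwards; the Texture score is analogous.
```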
First, we found that the channels labeled texture by NetDissect are important not only to texture- but also to shape-based recognition. That is, on average, zeroing out these channels caused non-zero Texture and Shape scores (Fig. 4b; both the Texture and Shape scores are above 0). See Fig. 5 for an example of texture channels with high Shape and Texture scores.3 This result sheds light on the fact that R networks consistently have more texture units (Fig. 4a) but are shape-biased (Sec. 3.1).
3Similar visualizations of some other neurons from both S and R networks are in Appendix Fig. A21–A26.
Second, the texture units are, as expected, highly texture-biased in AlexNet (Fig. 4b, Texture; the Texture score is almost 2× the Shape score). However, surprisingly, those texture units in AlexNet-R are neither strongly shape-biased nor texture-biased (Fig. 4b; Texture score ≈ Shape score). That is, across all three groups of object, color, and texture channels, R neurons appear mostly to be generalist, low-level feature detectors. This generalist property might be a reason why R networks are more effective in transfer learning than S networks (Salman et al., 2020).
Finally, the contrast above between the texture bias of S and R channels (Fig. 4b) reminds researchers that the single NetDissect label assigned to each neuron is not describing a full picture of what the neuron does and how it helps in downstream tasks. To the best of our knowledge, this is the first work to align the NetDissect and cue-conflict frameworks to study how individual neurons contribute to the generalizability and shape bias of the entire network.
4 DISCUSSION AND RELATED WORK
Deep neural networks tend to prioritize learning simple patterns that are common across the training set (Arpit et al., 2017). Furthermore, deep ReLU networks often prefer learning simple functions (Valle-Perez et al., 2019; De Palma et al., 2019), specifically low-frequency functions (Rahaman et al., 2019), which are more robust to random parameter perturbations. Along this direction, here, we have shown that R networks (1) have smoother weights (Sec. 3.3.1), (2) prefer even simpler and fewer inputs (Sec. 3.3.2) than standard deep networks—i.e. R networks represent even simpler functions. Such simplicity biases are consistent with the fact that gradient images of R networks are much smoother (Tsipras et al., 2019) and that R classifiers act as a strong image prior for image synthesis (Santurkar et al., 2019).
Each R neuron computing a more restricted function than an S neuron (Sec. 3.3.2) implies that R models would require more neurons to mimic a complex S network. This is consistent with recent findings that adversarial training requires a larger model capacity (Xie & Yuille, 2020).
While AdvProp did not yet show benefits on ResNet (Xie et al., 2020), it might be interesting future work to find out whether EfficientNets trained via AdvProp also have shape and simplicity biases. Furthermore, simplicity biases may be incorporated as regularizers into future training algorithms to improve model robustness. For example, encouraging filters to be smoother might improve robustness to high-frequency noise. Also aligned with our findings, Rozsa & Boult (2019) found that explicitly narrowing down the non-zero input regions of ReLUs can improve adversarial robustness.
We found that R networks heavily rely on shape cues in contrast to S networks. One may fuse an S network and a R network (two channels, one uses texture and one uses shape) into a single, more robust, interpretable ML model. That is, such model may (1) have better generalization on OOD data than S or R network alone and (2) enable an explanation to users on what features a network uses to label a given image.
Our study on how individual hidden neurons contribute to the R network shape preference (Sec. 3.4) revealed that texture-detector units are equally important to the texture-based and shape-based recognition. This is in contrast to a common hypothesis that texture detectors should be exclusively only useful to texture-biased recognition. Our surprising finding suggests that the categories of stimuli in the well-known Network Dissection (Bau et al., 2017) need to be re-labeled and also extended with low-frequency patterns e.g. single lines or silhouettes in order to more accurately quantify hidden representations.
5 CONCLUSION
A CONVOLUTIONAL LAYERS USED IN NETWORK DISSECTION ANALYSIS
For both standard and robust models, we ran NetDissect on 5 convolutional layers in AlexNet (Krizhevsky et al., 2012), 12 in GoogLeNet (Szegedy et al., 2015), and 5 in ResNet-50 architectures (He et al., 2016). For each layer, we use after-ReLU activations (if ReLU exists).
AlexNet layers: conv1, conv2, conv3, conv4, conv5. Refer to these names in Krizhevsky et al. (2012).
GoogLeNet layers: conv1, conv2, conv3, inception3a, inception3b, inception4a, inception4b, inception4c, inception4d, inception4e, inception5a, inception5b
Refer to these names in PyTorch code https://github.com/pytorch/vision/blob/ master/torchvision/models/googlenet.py#L83-L101.
ResNet-50 layers: conv1, layer1, layer2, layer3, layer4
Refer to these names in PyTorch code https://github.com/pytorch/vision/blob/ master/torchvision/models/resnet.py#L145-L155).
Table A1: Top-1 accuracy of 6 models (in %) on all 15 types of image corruptions in ImageNet-C (Hendrycks & Dietterich, 2019). On average over all 15 distortion types, R models underperform their standard counterparts.
                      AlexNet  AlexNet-R  GoogLeNet  GoogLeNet-R  ResNet  ResNet-R
Noise: Gaussian         11.36      21.98      33.28        18.71   29.03     24.53
Noise: Shot             10.55      21.35      31.01        17.86   26.97     23.92
Noise: Impulse           7.74      19.68      24.54        15.30   23.55     21.07
Blur: Defocus           18.01      15.59      28.42        20.72   38.40     26.36
Blur: Glass             17.37      17.91      23.91        29.02   26.78     34.29
Blur: Motion            21.40      21.45      31.14        28.29   38.61     33.15
Blur: Zoom              20.16      21.60      25.57        28.98   35.73     33.83
Weather: Snow           13.32      12.25      32.66        21.36   33.19     25.83
Weather: Frost          17.34      11.00      36.80        20.31   39.08     27.83
Weather: Fog            18.07       1.83      42.80         3.48   46.17      5.65
Weather: Brightness     43.54      27.71      64.46        42.96   68.32     49.71
Digital: Contrast       14.68       3.28      43.66         5.90   38.86      8.78
Digital: Elastic        35.39      32.29      42.79        41.98   46.16     44.94
Digital: Pixelate       28.22      36.33      54.86        48.11   44.49     52.62
Digital: JPEG           39.35      38.65      52.57        50.44   53.80     54.37
Mean accuracy           21.10      20.19      37.90        26.23   39.27     31.13
[Figure A1 bar chart data: top-1 accuracy (%) on scrambled images.]
          AlexNet  AlexNet-R  GoogLeNet  GoogLeNet-R  ResNet  ResNet-R
1x1        100.00     100.00     100.00       100.00  100.00    100.00
2x2         61.75      30.09      91.18        66.76   94.77     73.35
4x4         36.03      15.98      51.74        22.31   68.31     25.96
8x8          5.99       4.70       6.31         4.37   11.02      4.06
Figure A1: Standard models substantially outperform R models when tested on scrambled images due to their capability of recognizing images based on textures. See Fig. A6 for examples of scrambled images and their top-5 predictions from ResNet-R and ResNet (which achieves a remarkable accuracy of 94.77%). Here, we report top-1 accuracy scores (in %) on the scrambled images whose original versions were correctly labeled by both standard and R classifiers (hence, the 100% for 1× 1 blue bars).
Figure A2: conv1 filters of AlexNet-R are smoother than the filters in standard AlexNet. In each column, we show an AlexNet conv1 filter (top) and its nearest filter (bottom) from AlexNet-R. Above each pair of filters are their Spearman rank correlation score (e.g. r: 0.36) and their total variation (TV) difference (i.e. smoothness difference). Standard AlexNet filters are mostly noisier than their nearest R filter (i.e. positive TV differences).
AlexNet 11×11×3 AlexNet-R
GoogLeNet 7×7×3 GoogLeNet-R
ResNet 7×7×3 ResNet-R
Figure A3: All 64 conv1 filters of in each standard network (left) and its counterpart (right). The filters of R models (right) are smoother and less diverse compared to those in standard models (left). Especially, the edge filters of standard networks are noisier and often contain multiple colors in them.
(a) Real (b) Scrambled (c) Stylized (d) Contour (e) Silhouette
Figure A4: Applying different transformations that remove shape/texture information from real images. We randomly show an example from 7 of the 16 COCO coarse classes. See Table 3 for classification accuracy scores on the different image-distortion datasets over 1,000 classes (except for Silhouette). *Note: Silhouette images are evaluated over the 16 COCO coarse classes.
[Figure A5 panels (a) conv1 – (e) conv5: scatter plots of per-channel Total Variation (TV) of activation maps, with TV on clean images on the x-axis and TV on noisy images on the y-axis, comparing AlexNet and AlexNet-R.]
Figure A5: Each point shows the Total Variation (TV) of the activation maps on clean and noisy images for an AlexNet or AlexNet-R channel. We observe a striking difference in conv1: The smoothness of R channels remains unchanged before and after noise addition, explaining their superior performance in classifying noisy images. While the channel smoothness differences (between two networks) are gradually smaller in higher layers, we still observe R channels are consistently smoother.
[Figure A6 panels: example 1×1, 2×2, 4×4, and 8×8 scrambled images with the corresponding top-5 predictions of ResNet and ResNet-R.]
Figure A6: ResNet-R, on average across the three patch sizes, underperforms the standard ResNet model. Surprisingly, we observe that ResNet correctly classifies the images into their ground-truth class even when the image is randomly shuffled into 16 patches, e.g., ResNet classifies the 4 × 4 case of rule, safe with ∼100% confidence. The results are consistent with the strong texture bias of ResNet and the shape bias of ResNet-R (described in Sec. 3.2.1).
Figure A7: For each network, we show the number of channels in each of the 6 NetDissect categories (color, texture, etc.) in Bau et al. (2017). Across all three architectures, R models consistently have more color and texture channels while having substantially fewer object detectors.
[Figure A8a bar chart: per-concept differences in the number of texture channels (AlexNet-R minus AlexNet). The x-axis lists NetDissect texture concepts (striped, banded, chequered, freckled, ..., dotted, swirly); the y-axis is the increase in the number of channels.]
(a) Differences in texture channels between AlexNet and AlexNet-R
[Figure A8b bar chart: per-concept differences in the number of object channels (AlexNet-R minus AlexNet). The x-axis lists NetDissect object concepts from sky and ceiling (largest increases) to person, cat, and dog (largest decreases); the y-axis is the increase in the number of channels.]
(b) Differences in object channels between AlexNet and AlexNet-R
Figure A8: In each bar plot, each column shows the difference in the number of channels (between AlexNet-R and AlexNet) for a given concept, e.g. striped or banded. That is, yellow bars (i.e. positive numbers) show how many more channels the R model has than the standard network for the same concept. Vice versa, teal bars represent the concepts for which the R model has fewer channels. The NetDissect concept names are given on the x-axis. Top: In the texture category, the R model has many more detectors for simple texture patterns, e.g. striped and banded (see Fig. A11 for example patterns in these concepts). Bottom: In the object category, AlexNet-R often prefers simpler-object detectors, e.g. sky or ceiling (Fig. A8b; leftmost), while the standard network has more complex-object detectors, e.g. dog and cat (Fig. A8b; rightmost).
[Figure A9a bar chart: number of object-detector channels per layer (conv1–conv5). AlexNet: 0, 18, 71, 46, 104; AlexNet-R: 5, 28, 57, 33, 49.]
(a) Number of object detectors per AlexNet layer
[Figure A9b bar chart: number of color-detector channels per layer (conv1–conv5). AlexNet: 14, 20, 19, 5, 12; AlexNet-R: 8, 42, 45, 32, 25.]
(b) Number of color detectors per AlexNet layer
Figure A9: In higher layers (here, conv4 and conv5), AlexNet-R has fewer object detectors but more color-detector units than standard AlexNet. The differences between the two networks increase as we go from lower to higher layers. Because both networks share an identical architecture, the plots here demonstrate a substantial shift in the functionality of the neurons as the result of adversarial training—detecting more colors and textures and fewer objects. Similar trends were also observed between the standard and R models of the GoogLeNet and ResNet-50 architectures.
[Figure A10a: mean diversity score per layer (conv1–conv5) for AlexNet vs. AlexNet-R.]
(a) AlexNet layer-wise mean diversity
[Figure A10b: mean diversity score per layer (conv1, layer1–layer4) for ResNet vs. ResNet-R.]
(b) ResNet layer-wise mean diversity
Figure A10: In each plot, we show the mean diversity scores across all channels in each layer. Both AlexNet-R and ResNet-R consistently have channels with lower diversity scores (i.e. detecting fewer unique concepts) than the standard counterparts.
Figure A20: AlexNet conv4 channel 19, with Shape and Texture scores of 18 and 22, respectively. It has a NetDissect label of spiralled (IoU: 0.0568) under the texture category. Although this neuron is in the NetDissect texture category, the misclassified images suggest that it helps in both shape- and texture-based recognition. Top: Top-49 images that highest-activated this channel. Middle: Misclassified images in the shape category (18 images). Bottom: Misclassified images in the texture category (22 images). | 1. What is the focus of the paper regarding experimental studies on adversarial robustness?
2. What are the strengths and weaknesses of the reviewed paper?
3. Do you have any questions or suggestions regarding the nomenclature used in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations in the study that could be improved in future works? | Review | Review
Summary: The submission concerns an experimental study of the behavior of networks trained with and without adversarial robustness criteria (using Madry et al., 2017). Given a set of such trained networks, a detailed look at the behavior and properties of adversarially robust networks and their non-robust counterparts is taken. This includes evaluation on the cue-conflict dataset (Geirhos et al.), and on scrambled, or texture-less (silhouette) variants of ImageNet images. Furthermore, visualizations of filter banks are analyzed and compared, as well as an analysis on the neuron level is carried out, using the NetDissect framework by Bau et al. Insights include that adversarially trained networks are more shape-biased (and reliant) than their counterparts which are known to be texture-biased. Furthermore, three simplicity biases are found that result in smoother filters, increased focus on low-level cues, and decreased diversity of inputs detected by neurons of adversarially trained networks.
Review: The authors have conducted a fairly thorough experimental study w.r.t. the level of detail considered; insights range from experiment to neuron level. The main limitations of the study are set by the types of networks considered (2012-2015 era networks only) and the types of adversarial training considered (only the work by Madry et al.). With regards to the latter, incorporation of newer work (e.g. (1), (2), or (3)) on adversarial training could have made the argument stronger, but this may simply have been subject due to scope and time limitations. Despite these limitations, the scope of evaluation is sufficient, and the level of supplied visualizations is exemplary. The work lacks any theoretical insight, but there is potential value in the type of conducted detailed empirical study. A large part of the results seems to agree with prior work (cf. mentions of “consistent with” and similar), such that part of the value of this study is additional confirmation of what prior work may have found or otherwise hypothesized. What I am missing a bit, however, is a discussion on what the mentioned findings may mean for future developments of adversarial training, adversarial attack design, or other mitigations of adversarial attacks. The discussion of possible consequences is limited to the last paragraph, which is meant to discuss future work.
Clarity of writing is generally good; however, I was not really a fan of the chosen nomenclature and the abundant definition of abbreviations which may clash with own notions of their meaning. For example, is it necessary to define “S-networks” and “R-networks”, where R-networks happen to be shape-biased and S-networks texture-biased? My preference would have been to simply spell out “adversarially trained” (“-adv”) vs. “not adversarially trained”. This is just a personal opinion and does not affect the rating. Similarly with the datasets; “ImageNet-C” is mentioned on page 3, but does not seem to be defined beyond “modified ImageNet images” and then shown in Figure 1.
Overall, I believe the scope of experiments and evaluations goes beyond workshop-level work. But originality and significance remain limited, as mentioned above.
—- (1) Shafahi et al., “Adversarial Training for Free!”, NeurIPS 2019. (2) Cohen et al., “Certified Adversarial Robustness via Randomized Smoothing”, ICML 2019. (3) Xie et al., “Smooth Adversarial Training”, arXiv preprint, 2006.14536 |
ICLR | Title
WaveFlow: A Compact Flow-based Model for Raw Audio
Abstract
In this work, we present WaveFlow, a small-footprint generative flow for raw audio, which is trained with maximum likelihood without density distillation and auxiliary losses as used in Parallel WaveNet. It provides a unified view of flow-based models for raw audio, including autoregressive flow (e.g., WaveNet) and bipartite flow (e.g., WaveGlow) as special cases. We systematically study these likelihood-based generative models for raw waveforms in terms of test likelihood and speech fidelity. We demonstrate that WaveFlow can synthesize high-fidelity speech and obtain likelihood comparable to WaveNet, while only requiring a few sequential steps to generate very long waveforms. In particular, our small-footprint WaveFlow has 5.91M parameters and can generate 22.05kHz high-fidelity speech 42.6× faster than real-time on a GPU without engineered inference kernels. 1
1 INTRODUCTION
Deep generative models have obtained noticeable successes for modeling raw audio in high-fidelity speech synthesis and music generation (e.g., van den Oord et al., 2016; Dieleman et al., 2018). Autoregressive models are among the best performing generative models for raw audio waveforms, providing the highest likelihood scores and generating high quality samples (e.g., van den Oord et al., 2016; Kalchbrenner et al., 2018). One of the most successful examples is WaveNet (van den Oord et al., 2016), an autoregressive model for waveform synthesis. It operates at the high temporal resolution of raw audio (e.g., 24kHz) and sequentially generates waveform samples at inference. As a result, WaveNet is prohibitively slow for speech synthesis and one has to develop highly engineered kernels for real-time inference (Arık et al., 2017a; Pharris, 2018). 2
Flow-based models (Dinh et al., 2014; Rezende and Mohamed, 2015) are a family of generative models, in which a simple initial density is transformed into a complex one by applying a series of invertible transformations. One group of models are based on autoregressive transformation, including autoregressive flow (AF) and inverse autoregressive flow (IAF) as the “dual” of each other (Kingma et al., 2016; Papamakarios et al., 2017; Huang et al., 2018). AF is analogous to autoregressive models, which performs parallel density evaluation and sequential synthesis. In contrast, IAF performs parallel synthesis but sequential density evaluation, making likelihood-based training very slow. Parallel WaveNet (van den Oord et al., 2018) distills an IAF from a pretrained autoregressive WaveNet, which gets the best of both worlds. However, it requires the density distillation with Monte Carlo approximation and a set of auxiliary losses for good performance, which complicates the training pipeline and increases the cost of development. Instead, ClariNet (Ping et al., 2019) simplifies the density distillation by computing a regularized KL divergence in closed-form.
Another group of flow-based models are based on bipartite transformation (Dinh et al., 2017; Kingma and Dhariwal, 2018), which provide parallel density evaluation and parallel synthesis. Most recently, WaveGlow (Prenger et al., 2019) and FloWaveNet (Kim et al., 2019) successfully applies Glow (Kingma and Dhariwal, 2018) and RealNVP (Dinh et al., 2017) for waveform synthesis, respectively. However, the bipartite transformations are less expressive than the autoregressive transformations (see Section 2.3 for detailed discussion). In general, these bipartite flows require
deeper layers, larger hidden size, and a huge number of parameters to reach comparable capacities as autoregressive models. For example, WaveGlow and FloWaveNet have 87.88M and 182.64M parameters with 96 layers and 256 residual channels, respectively. In contrast, a 30-layer WaveNet has only 4.57M parameters with 128 residual channels.
1Audio samples are located at: https://waveflow-demo.github.io/. 2Real-time inference is a requirement for most production text-to-speech systems. For example, if the system can synthesize 1 second of speech in 0.5 seconds, it is 2× faster than real-time.
In this work, we present WaveFlow, a compact flow-based model for raw audio. Specifically, we make the following contributions:
1. WaveFlow is trained with maximum likelihood without density distillation and auxiliary losses used in Parallel WaveNet (van den Oord et al., 2018) and ClariNet (Ping et al., 2019), which simplifies the training pipeline and reduces the cost of development.
2. WaveFlow squeezes the 1-D raw waveforms into a 2-D matrix and produces the whole audio within a fixed sequential steps. It also provides a unified view of flow-based models for raw audio and allows us to explicitly trade inference efficiency for model capacity. We implement WaveFlow with a dilated 2-D convolutional architecture (Yu and Koltun, 2015), and it includes both Gaussian WaveNet (Ping et al., 2019) and WaveGlow (Prenger et al., 2019) as special cases.
3. We systematically study the likelihood-based generative models for raw audios in terms of test likelihood and speech quality. We demonstrate that WaveFlow can obtain comparable likelihood and synthesize high-fidelity speech as WaveNet (van den Oord et al., 2016), while only requiring a few sequential steps to generate very long waveforms.
4. Our small-footprint WaveFlow has only 5.91M parameters and synthesizes 22.05 kHz highfidelity speech (MOS: 4.32) more than 40× faster than real-time on a Nvidia V100 GPU. In contrast, WaveGlow (Prenger et al., 2019) requires 87.8M parameters for generating high-fidelity speech. The small memory footprint is preferred in production TTS systems, especially for on-device deployment.
We organize the rest of the paper as follows. Section 2 reviews the flow-based models with autoregressive and bipartite transformations. We present WaveFlow in Section 3 and discuss related work in Section 4. We report experimental results in Section 5 and conclude the paper in Section 6.
2 FLOW-BASED GENERATIVE MODELS
Flow-based models (Dinh et al., 2014; 2017; Rezende and Mohamed, 2015) transform a simple density of latent variables p(z) (e.g., isotropic Gaussian) into a complex data distribution p(x) by applying a bijection x = f(z), where x and z are both n-dimensional. The probability density of x can be obtained through the change of variables formula:
$$p(x) = p(z)\,\left|\det\!\left(\frac{\partial f^{-1}(x)}{\partial x}\right)\right|, \qquad (1)$$
where $z = f^{-1}(x)$ is the inverse transformation and $\det\!\left(\frac{\partial f^{-1}(x)}{\partial x}\right)$ is the determinant of its Jacobian.
In general, it takes $O(n^3)$ to compute the determinant, which is not scalable to high-dimensional data. There are two notable groups of flow-based models with triangular Jacobians and tractable determinants. They are based on autoregressive and bipartite transformations, respectively.
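As a concrete illustration of why triangular Jacobians matter, the following sketch (illustrative NumPy code, not from the paper) evaluates the change-of-variables log-likelihood for an elementwise affine transformation, where the log-determinant reduces to a sum of log-scales instead of an $O(n^3)$ determinant:

```python
import numpy as np

def affine_flow_log_likelihood(x, mu, sigma):
    """Log-density of x when z = (x - mu) / sigma has a standard Gaussian prior.

    The transformation is elementwise, so the Jacobian dz/dx is diagonal
    (a special case of triangular) and log|det| is just -sum(log sigma).
    """
    z = (x - mu) / sigma
    log_prior = -0.5 * np.sum(z ** 2 + np.log(2 * np.pi))   # standard Gaussian log-density
    log_det = -np.sum(np.log(sigma))                        # log |det(dz/dx)|
    return log_prior + log_det

x = np.random.randn(16000)                  # e.g., one second of 16 kHz audio
print(affine_flow_log_likelihood(x, mu=np.zeros_like(x), sigma=np.full_like(x, 0.5)))
```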
2.1 AUTOREGRESSIVE TRANSFORMATION
The autoregressive flow (AF) and inverse autoregressive flow (IAF) (Kingma et al., 2016; Papamakarios et al., 2017) use autoregressive transformations. Specifically, AF defines the inverse transformation z = f−1(x;ϑ) as:
$$z_t = x_t \cdot \sigma_t(x_{<t}; \vartheta) + \mu_t(x_{<t}; \vartheta), \qquad (2)$$
where the shifting variables µt(x<t;ϑ) and scaling variables σt(x<t;ϑ) are modeled by an autoregressive architecture parameterized by ϑ (e.g., WaveNet). Note that, the t-th variable zt only depends on x≤t, thus the Jacobian is a triangular matrix as illustrated in Figure 1(a) and its determinant
is the product of the diagonal entries: $\det\!\left(\frac{\partial f^{-1}(x)}{\partial x}\right) = \prod_t \sigma_t(x_{<t}; \vartheta)$. The density $p(x)$ can be
easily evaluated by change of variables formula, because z = f−1(x) can be computed in parallel from Eq. (2) (i.e., the required O(n) operations can be done in O(1) time on modern GPU hardware). However, AF has to do sequential synthesis, because the forward transformation x = f(z) is autoregressive: xt =
$\frac{z_t - \mu_t(x_{<t}; \vartheta)}{\sigma_t(x_{<t}; \vartheta)}$. In contrast, IAF uses an autoregressive transformation for $z = f^{-1}(x)$:
$$z_t = \frac{x_t - \mu_t(z_{<t}; \vartheta)}{\sigma_t(z_{<t}; \vartheta)}, \qquad (3)$$
making density evaluation impractically slow for training, but it can do parallel synthesis by $x_t = z_t \cdot \sigma_t(z_{<t}; \vartheta) + \mu_t(z_{<t}; \vartheta)$. Parallel WaveNet (van den Oord et al., 2018) and ClariNet (Ping et al., 2019) are based on IAF, which lacks efficient density evaluation and relies on distillation from a pretrained autoregressive WaveNet.
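To make this asymmetry concrete, here is a toy sketch (not the paper's implementation; `mu_net` and `sigma_net` are hypothetical stand-ins for an autoregressive network such as WaveNet). Density evaluation only needs the observed $x_{<t}$, which are all available at once, while synthesis must generate one sample at a time:

```python
import numpy as np

def mu_net(prefix):                 # hypothetical autoregressive shift predictor
    return 0.1 * (prefix[-1] if len(prefix) else 0.0)

def sigma_net(prefix):              # hypothetical autoregressive scale predictor
    return 1.0

def af_inverse(x):
    """z = f^{-1}(x), Eq. (2): every z_t depends only on observed x_{<t} (parallelizable)."""
    return np.array([x[t] * sigma_net(x[:t]) + mu_net(x[:t]) for t in range(len(x))])

def af_forward(z):
    """x = f(z): sequential, because x_t needs the previously generated x_{<t}."""
    x = []
    for t in range(len(z)):
        x.append((z[t] - mu_net(x)) / sigma_net(x))
    return np.array(x)

z = np.random.randn(8)
x = af_forward(z)                       # one step per sample
assert np.allclose(af_inverse(x), z)    # recoverable in a single parallel pass
```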
2.2 BIPARTITE TRANSFORMATION
RealNVP (Dinh et al., 2017) and Glow (Kingma and Dhariwal, 2018) use a bipartite transformation by partitioning the data $x$ into two groups $x_a$ and $x_b$, where the index sets satisfy $a \cup b = \{1, \cdots, n\}$ and $a \cap b = \emptyset$. Then, the inverse transformation $z = f^{-1}(x; \theta)$ is defined as
$$z_a = x_a, \qquad z_b = x_b \cdot \sigma_b(x_a; \theta) + \mu_b(x_a; \theta), \qquad (4)$$
where the shifting variables $\mu_b(x_a; \theta)$ and scaling variables $\sigma_b(x_a; \theta)$ are modeled by a feed-forward neural network. The Jacobian $\frac{\partial f^{-1}(x)}{\partial x}$ is a special triangular matrix, as illustrated in Figure 1(b). By definition, the forward transformation $x = f(z; \theta)$ is
$$x_a = z_a, \qquad x_b = \frac{z_b - \mu_b(x_a; \theta)}{\sigma_b(x_a; \theta)}, \qquad (5)$$
and can also be done in parallel. As a result, the bipartite transformation provides both parallel density evaluation and parallel synthesis. In previous work, WaveGlow (Prenger et al., 2019) and FloWaveNet (Kim et al., 2019) both squeeze the adjacent audio samples on the channel dimension, and apply the bipartite transformation on the partitioned channel dimension.
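A minimal sketch of one such bipartite (affine coupling) layer, assuming an even split and a toy `net` in place of the real coupling network; both directions are loop-free:

```python
import numpy as np

def net(xa):
    """Toy stand-in for the network predicting scale and shift from x_a."""
    return np.exp(np.tanh(0.5 * xa)), 0.3 * xa   # positive scales, arbitrary shifts

def coupling_inverse(x):                 # z = f^{-1}(x), Eq. (4): fully parallel
    xa, xb = np.split(x, 2)
    sigma, mu = net(xa)
    return np.concatenate([xa, xb * sigma + mu])

def coupling_forward(z):                 # x = f(z), Eq. (5): also fully parallel
    za, zb = np.split(z, 2)
    sigma, mu = net(za)
    return np.concatenate([za, (zb - mu) / sigma])

x = np.random.randn(16)
assert np.allclose(coupling_forward(coupling_inverse(x)), x)
```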
2.3 CONNECTIONS
It is worthwhile to mention that the autoregressive transformation is more expressive than the bipartite transformation in general. As illustrated in Figure 1(a) and (b), the autoregressive transformation introduces $\frac{n(n-1)}{2}$ complex non-linear dependencies (dark-blue cells) and $n$ linear dependencies between the data $x$ and the latents $z$. In contrast, the bipartite transformation introduces only $\frac{n^2}{4}$ non-linear dependencies and $\frac{n}{2}$ linear dependencies. Indeed, one can reduce an autoregressive transformation $z = f^{-1}(x; \vartheta)$ to a bipartite transformation $z = f^{-1}(x; \theta)$ by: (i) picking an autoregressive order $o$ such that all of the indices in set $a$ rank earlier than the indices in $b$, and (ii) setting the shifting and scaling variables as
$$\mu_t(x_{<t}; \vartheta) = \begin{cases} 0 & \text{for } t \in a \\ \mu_t(x_a; \theta) & \text{for } t \in b \end{cases}, \qquad \sigma_t(x_{<t}; \vartheta) = \begin{cases} 1 & \text{for } t \in a \\ \sigma_t(x_a; \theta) & \text{for } t \in b \end{cases}.$$
Given the less expressive building block, bipartite-transformation-based flows generally require many more layers and a larger hidden size to match the capacity of compact autoregressive models (e.g., as measured by test likelihood) (Kingma and Dhariwal, 2018; Prenger et al., 2019).
3 WAVEFLOW
In this section, we present WaveFlow and its implementation with dilated 2-D convolutions.
3.1 DEFINITION
We denote the high-dimensional 1-D waveform as $x = \{x_1, \cdots, x_n\}$. We first squeeze $x$ into an $h$-row 2-D matrix $X \in \mathbb{R}^{h \times w}$ in column-major order, where $w = \frac{n}{h}$ and adjacent samples are in the same column. We assume $Z \in \mathbb{R}^{h \times w}$ is sampled from an isotropic Gaussian, and define the inverse transformation $Z = f^{-1}(X; \Theta)$ as
$$Z_{i,j} = \sigma_{i,j}(X_{<i,\bullet}; \Theta) \cdot X_{i,j} + \mu_{i,j}(X_{<i,\bullet}; \Theta), \qquad (6)$$
where $X_{<i,\bullet}$ represents all elements above the $i$-th row (see Figure 2 for an illustration). Note that: i) the receptive fields over the squeezed inputs $X$ for computing $Z_{i,j}$ in WaveFlow are strictly larger than those of WaveGlow when $h > 2$; ii) WaveNet is equivalent to an autoregressive flow with column-major order on the squeezed inputs $X$; iii) both WaveFlow and WaveGlow look at future waveform samples in the original $x$ for computing $Z_{i,j}$, whereas WaveNet cannot; iv) the autoregressive flow with row-major order has larger receptive fields than WaveFlow and WaveGlow.
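For concreteness, the column-major squeeze can be written in a couple of lines; this is an illustrative NumPy sketch, since the paper does not prescribe a particular implementation:

```python
import numpy as np

def squeeze(x, h):
    """Reshape a 1-D waveform into an h x w matrix; adjacent samples share a column."""
    assert len(x) % h == 0
    return x.reshape(-1, h).T            # X[i, j] = x[j * h + i]

def unsqueeze(X):
    return X.T.reshape(-1)

x = np.arange(12)
X = squeeze(x, h=4)                       # shape (4, 3); column 0 holds samples 0..3
assert np.array_equal(unsqueeze(X), x)
```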
The shifting variables $\mu_{i,j}(X_{<i,\bullet}; \Theta)$ and scaling variables $\sigma_{i,j}(X_{<i,\bullet}; \Theta)$ in Eq. (6) are modeled by a 2-D convolutional neural network detailed in Section 3.2. By definition, the variable $Z_{i,j}$ only depends on the current $X_{i,j}$ and the previous $X_{<i,\bullet}$ in row-major order, thus the Jacobian is a triangular matrix and its determinant is
$$\det\!\left(\frac{\partial f^{-1}(X)}{\partial X}\right) = \prod_{i=1}^{h} \prod_{j=1}^{w} \sigma_{i,j}(X_{<i,\bullet}; \Theta). \qquad (7)$$
As a result, the log-likelihood can be calculated in parallel by the change of variables formula in Eq. (1),
$$\log p(X) = -\sum_{i=1}^{h} \sum_{j=1}^{w} \left( \frac{Z_{i,j}^2}{2} + \frac{1}{2}\log(2\pi) \right) + \sum_{i=1}^{h} \sum_{j=1}^{w} \log \sigma_{i,j}(X_{<i,\bullet}; \Theta), \qquad (8)$$
and one can do maximum likelihood training efficiently. At synthesis, one may first sample $Z$ from the isotropic Gaussian and apply the forward transformation $X = f(Z; \Theta)$:
$$X_{i,j} = \frac{Z_{i,j} - \mu_{i,j}(X_{<i,\bullet}; \Theta)}{\sigma_{i,j}(X_{<i,\bullet}; \Theta)}, \qquad (9)$$
which is only autoregressive on height dimension. Thus, it requires h sequential steps to generate the whole waveform X . In practice, a small h (e.g., 8 or 16) works well, thus we can generate very long waveforms within a few sequential steps.
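A schematic of this synthesis procedure makes the h-step cost explicit. The `mu_row`/`sigma_row` callables below are hypothetical stand-ins for the trained 2-D convolutional network that maps the rows above row i to that row's shift and scale:

```python
import numpy as np

def synthesize(Z, mu_row, sigma_row):
    """Invert Eq. (6) row by row: only h sequential steps for an h x w matrix (Eq. (9))."""
    h, w = Z.shape
    X = np.zeros_like(Z)
    for i in range(h):                                  # autoregressive over height only
        mu, sigma = mu_row(X[:i]), sigma_row(X[:i])     # conditioned on the rows above
        X[i] = (Z[i] - mu) / sigma                      # all w columns filled at once
    return X

# Toy stand-ins, just to make the sketch runnable.
mu_row = lambda above: 0.1 * (above[-1] if len(above) else np.zeros(1))
sigma_row = lambda above: 1.0
X = synthesize(np.random.randn(8, 100), mu_row, sigma_row)   # 8 steps for 800 samples
```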
3.2 IMPLEMENTATION WITH DILATED 2-D CONVOLUTIONS
In this work, we implement WaveFlow with a dilated 2-D convolutional architecture. Specifically, we use a stack of 2-D convolution layers (e.g., 8 layers in all experiments) to model the shifting variables $\mu_{i,j}(X_{<i,\bullet}; \Theta)$ and scaling variables $\sigma_{i,j}(X_{<i,\bullet}; \Theta)$ in Eq. (6). We use a similar architecture to WaveNet (van den Oord et al., 2016), replacing the dilated 1-D convolutions with 2-D convolutions (Yu and Koltun, 2015) while still keeping the gated-tanh nonlinearities, residual connections, and skip connections.
We set the filter size to 3 for both the height and width dimensions. We use non-causal convolutions on the width dimension and set its dilation cycle to $[1, 2, 4, \cdots, 2^7]$. The convolutions on the height dimension are causal with an autoregressive constraint, and their dilation cycle needs to be designed carefully. In practice, we find the following rules of thumb are important for obtaining good results:
• As motivated by the dilation cycle of WaveNet (van den Oord et al., 2016), the dilations of the 8 layers should be set as $d = [1, 2, \cdots, 2^s, 1, 2, \cdots, 2^s, \cdots]$, where $s \le 7$.³
• The receptive field $r$ over the height dimension should be larger than the squeezed height $h$. Otherwise, it explicitly introduces unnecessary conditional independence and leads to lower likelihood (see Table 1 for an example). Note that the receptive field of a stack of dilated convolutional layers is $r = (k - 1) \times \sum_i d_i + 1$, where $k$ is the filter size and $d_i$ is the dilation at the $i$-th layer. Thus, the sum of dilations should satisfy $\sum_i d_i \ge \frac{h - 1}{k - 1}$. However, when $h$ is larger than or equal to 512, we simply set the dilation cycle to $[1, 2, 4, \cdots, 2^7]$.
• When the receptive field $r$ is already larger than $h$, we find that convolutions with smaller dilations and fewer holes provide larger likelihood.
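These rules of thumb are easy to check numerically; the sketch below (illustrative only) computes the receptive field of a candidate dilation cycle and verifies the r ≥ h constraint:

```python
def receptive_field(dilations, k=3):
    """r = (k - 1) * sum(d_i) + 1 for a stack of dilated convolutions with filter size k."""
    return (k - 1) * sum(dilations) + 1

def height_is_covered(dilations, h, k=3):
    r = receptive_field(dilations, k)
    return r >= h, r

# 8 layers with dilations [1, 2, 4, ..., 128] give r = 511,
# so any squeezed height h <= 511 avoids the unwanted conditional independence.
print(height_is_covered([2 ** i for i in range(8)], h=64))   # (True, 511)
```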
We summarize the heights and preferred dilations in our experiments in Table 2. Note that, WaveFlow becomes fully autoregressive when we squeeze x by its length (i.e. h = n) and set its filter size as 1 over the width dimension, which is equivalent to a Gaussian WaveNet learned by MLE (Ping et al., 2019). If we squeeze x by h = 2 and set the filter size as 1 on the height dimension, WaveFlow becomes a bipartite flow and is equivalent to WaveGlow with squeezed channels 2.
3.3 CONDITIONAL GENERATION
In neural speech synthesis, a neural vocoder (e.g., WaveNet) synthesizes the time-domain waveforms. It can be conditioned on linguistic features (van den Oord et al., 2016; Arık et al., 2017a), the mel-spectrograms from a text-to-spectrogram model (Ping et al., 2018; Shen et al., 2018), or the learned hidden representation within a text-to-wave architecture (Ping et al., 2019). In this work, we test WaveFlow by conditioning it on ground-truth mel-spectrograms, as in previous work (Prenger et al., 2019; Kim et al., 2019). The mel-spectrogram is upsampled to the same resolution as the waveform samples by transposed 2-D convolutions (Ping et al., 2019). To align with the squeezed waveform, the conditioner features are squeezed to the shape c × h × w, where c is the feature dimension (e.g., the number of spectrogram bands). After a 1 × 1 convolution mapping the features to the residual channels, they are added as the bias term at each layer (van den Oord et al., 2016).

3 We did try different setups, but they all led to worse likelihood scores.
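A sketch of how an already-upsampled mel conditioner can be squeezed so that it stays aligned with the squeezed waveform. This is illustrative only: the crude `np.repeat` upsampling stands in for the learned transposed convolutions described above:

```python
import numpy as np

def squeeze_conditioner(mel_upsampled, h):
    """(c, n) features at waveform resolution -> (c, h, w), matching X[i, j] = x[j * h + i]."""
    c, n = mel_upsampled.shape
    assert n % h == 0
    return mel_upsampled.reshape(c, -1, h).transpose(0, 2, 1)

mel = np.repeat(np.random.randn(80, 63), 256, axis=1)    # 80-band mel, 256x upsampled
cond = squeeze_conditioner(mel, h=16)                     # shape (80, 16, 1008)
```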
3.4 STACKING MULTIPLE FLOWS WITH PERMUTATIONS OVER HEIGHT DIMENSION
Flow-based models require a series of transformations until the distribution $p(X)$ reaches a desired level of complexity (e.g., Rezende and Mohamed, 2015). We let $X = Z^{(n)}$ and repeatedly apply the transformation $Z^{(i-1)} = f^{-1}(Z^{(i)}; \Theta^{(i)})$ defined in Eq. (6) from $Z^{(n)} \rightarrow \cdots \rightarrow Z^{(i)} \rightarrow \cdots \rightarrow Z^{(0)}$. We assume $Z^{(0)}$ is from the isotropic Gaussian distribution. The likelihood $p(X)$ can be evaluated by iteratively applying the chain rule:
$$p(X) = p(Z^{(0)}) \prod_{i=1}^{n} \left| \det\!\left( \frac{\partial f^{-1}(Z^{(i)}; \Theta^{(i)})}{\partial Z^{(i)}} \right) \right|.$$
We find that permuting each $Z^{(i)}$ over the height dimension after each transformation can significantly improve the likelihood scores. In particular, we test two permutation strategies for WaveFlow models stacked with 8 flows (i.e., $X = Z^{(8)}$) in Table 3: (i) we reverse each $Z^{(i)}$ over the height dimension after each transformation, and (ii) we reverse $Z^{(7)}, Z^{(6)}, Z^{(5)}, Z^{(4)}$ over the height dimension as before, but split $Z^{(3)}, Z^{(2)}, Z^{(1)}, Z^{(0)}$ in the middle of the height dimension and then reverse each part respectively.⁴ Note that one also needs to permute the conditioner over the height dimension accordingly, so that it remains aligned with $Z^{(i)}$. From Table 3, both (i) and (ii) significantly outperform the model without permutations, mainly because of bidirectional modeling. Strategy (ii) outperforms (i) because of its more diverse autoregressive orders.
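The two permutation strategies amount to simple index manipulations over the height dimension, as in this illustrative sketch:

```python
import numpy as np

def reverse_height(Z):
    """Strategy (i): flip the rows."""
    return Z[::-1]

def split_and_reverse_height(Z):
    """Strategy (ii): reverse each half of the rows separately (see footnote 4)."""
    h = Z.shape[0]
    top, bottom = Z[: h // 2], Z[h // 2 :]
    return np.concatenate([top[::-1], bottom[::-1]], axis=0)

Z = np.repeat(np.arange(8)[:, None], 4, axis=1)   # row i filled with the value i
print(reverse_height(Z)[:, 0])                    # [7 6 5 4 3 2 1 0]
print(split_and_reverse_height(Z)[:, 0])          # [3 2 1 0 7 6 5 4]
```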
4 RELATED WORK
Deep neural networks for speech synthesis (a.k.a. text-to-speech) have received a lot of attention. Over the past few years, several neural text-to-speech (TTS) systems have been introduced, including WaveNet (van den Oord et al., 2016), Deep Voice (Arık et al., 2017a), Deep Voice 2 (Arık et al., 2017b), Deep Voice 3 (Ping et al., 2018), Tacotron (Wang et al., 2017), Tacotron 2 (Shen et al., 2018), Char2Wav (Sotelo et al., 2017), VoiceLoop (Taigman et al., 2018), WaveRNN (Kalchbrenner et al., 2018), ClariNet (Ping et al., 2019), Transformer TTS (Li et al., 2019), ParaNet (Peng et al., 2019) and FastSpeech (Ren et al., 2019).
Neural vocoders, such as WaveNet, play the most important role in recent advances in speech synthesis. In previous work, the state-of-the-art neural vocoders are autoregressive models (van den Oord et al., 2016; Mehri et al., 2017; Kalchbrenner et al., 2018). Several engineering endeavors have been advocated for speeding up their sequential generation process (Arık et al., 2017a; Kalchbrenner et al., 2018). In particular, Subscale WaveRNN (Kalchbrenner et al., 2018) folds a long waveform sequence $x_{1:n}$ into a batch of shorter sequences and can produce up to 16 samples per step, thus it requires at least $\frac{n}{16}$ steps to generate the whole audio. Note that this is different from the proposed WaveFlow, which can generate $x_{1:n}$ within a fixed number of steps (e.g., 16). Most recently, flow-based models have been successfully applied for parallel waveform synthesis with fidelity comparable to autoregressive models (van den Oord et al., 2018; Ping et al., 2019; Prenger et al., 2019; Kim et al., 2019; Yamamoto et al., 2019; Serrà et al., 2019). Among these models, WaveGlow (Prenger et al., 2019) and FloWaveNet (Kim et al., 2019) have a simple training pipeline, as they solely use the maximum likelihood objective. However, both of them are less expressive than autoregressive models, as indicated by their lower likelihood scores.

4 After the split & reverse operations, the height dimension $[0, \cdots, \frac{h}{2} - 1, \frac{h}{2}, \cdots, h - 1]$ becomes $[\frac{h}{2} - 1, \cdots, 0, h - 1, \cdots, \frac{h}{2}]$.
Flow-based models can either represent the approximate posteriors for variational inference (Rezende and Mohamed, 2015; Kingma et al., 2016; Berg et al., 2018), or can be trained directly on data using the change of variables formula (Dinh et al., 2014; 2017; Kingma and Dhariwal, 2018; Grathwohl et al., 2018). In previous work, Glow (Kingma and Dhariwal, 2018) extends RealNVP (Dinh et al., 2017) with an invertible 1 × 1 convolution and can generate high-quality images. Later on, Hoogeboom et al. (2019) generalize the 1 × 1 convolution to invertible d × d convolutions which operate on both the channel and spatial axes.
5 EXPERIMENT
In this section, we compare likelihood-based generative models for raw audio in terms of test likelihood, speech quality, and synthesis speed.
Data: We use the LJ Speech dataset (Ito, 2017), containing about 24 hours of audio with a sampling rate of 22.05 kHz recorded on a MacBook Pro in a home environment. It consists of 13,100 audio clips of a single female speaker reading passages from 7 non-fiction books.
Models: We evaluate several likelihood-based generative models, including Gaussian WaveNet, WaveGlow, WaveFlow, and autoregressive flow (AF). As in Section 3.2, we implement the autoregressive flow from WaveFlow by squeezing the waveform by its length and setting the filter size to 1 for the width dimension. Both WaveNet and AF have 30 layers with dilation cycle [1, 2, · · · , 512] and filter size 3. For WaveGlow and WaveFlow, we investigate different setups, including the number of flows, the size of the residual channels, and the squeezed height h.
Conditioner: We use the 80-band mel-spectrogram of the original audio as the conditioner for WaveNet, WaveGlow, and WaveFlow. We use FFT size 1024, hop size 256, and window size 1024. For WaveNet and WaveFlow, we upsample the mel conditioner 256 times by applying two layers of transposed 2-D convolution (in time and frequency) interleaved with leaky ReLU (α = 0.4). The upsampling strides in time are 16 and the 2-D convolution filter sizes are [32, 3] for both layers. For WaveGlow, we directly use the open source implementation. 5
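For reference, one way to compute an 80-band mel conditioner with these STFT settings is sketched below using librosa; this is an assumption about tooling, as the paper does not specify its feature-extraction code (the clip name is hypothetical):

```python
import numpy as np
import librosa

y, sr = librosa.load("LJ001-0001.wav", sr=22050)          # hypothetical LJ Speech clip
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=1024, hop_length=256, win_length=1024, n_mels=80
)
log_mel = np.log(np.clip(mel, a_min=1e-5, a_max=None))     # common dynamic-range compression
print(log_mel.shape)                                       # (80, n_frames)
```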
Training: We train all models on 8 Nvidia 1080Ti GPUs using randomly chosen short clips of 16, 000 samples from each utterance. For WaveFlow and WaveNet, we use the Adam optimizer (Kingma and Ba, 2015) with a batch size of 8 and a constant learning rate of 2× 10−4. For WaveGlow, we use the Adam optimizer with a batch size of 16 and a learning rate of 1× 10−4. We applied weight normalization (Salimans and Kingma, 2016) whenever possible.
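A minimal PyTorch sketch of this optimizer and weight-normalization setup, with a toy stand-in model (the paper's actual architecture is the dilated 2-D convolutional network of Section 3.2):

```python
import torch
from torch import nn

model = nn.Sequential(nn.Conv1d(1, 128, kernel_size=3, padding=1), nn.Tanh())  # toy stand-in

# Apply weight normalization wherever possible, as described above.
for m in model.modules():
    if isinstance(m, nn.Conv1d):
        nn.utils.weight_norm(m)

optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)   # constant learning rate of 2e-4
```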
5.1 LIKELIHOOD
The test log-likelihoods (LLs) of all models are evaluated at 1M training steps. Note that i) all of the LLs decrease slowly after 1M steps, and ii) it took one month to train the largest WaveGlow (residual channels = 512) for 1M steps. Thus, we chose 1M as the cut-off to compare these models. We summarize the results in Table 4 with models from row (a) to (t). We draw the following observations:
• Stacking a large number of flows improves LLs for WaveFlow, autoregressive flow, and WaveGlow. For example, (m) WaveFlow with 8 flows provides a larger LL than (l) WaveFlow with 6 flows. The (b) autoregressive flow obtains the highest likelihood and even outperforms (a) WaveNet with the same number of parameters. Indeed, AF provides bidirectional modeling by stacking 3 flows interleaved with reverse operations.
• WaveFlow has a much larger likelihood than WaveGlow with a comparable number of parameters. In particular, a small-footprint (k) WaveFlow has only 5.91M parameters but can provide a likelihood (5.023 vs. 5.026) comparable to the largest (g) WaveGlow with 268.29M parameters.
5https://github.com/NVIDIA/waveglow
5.2 SPEECH FIDELITY AND SYNTHESIS SPEED
We train WaveNet for 1M steps. We train WaveGlow and WaveFlow for 2M steps with small residual channels (64, 96 and 128). We train larger models (res. channels 256 and 512) for 1M steps due to the practical time constraint. At synthesis, we sample Z from an isotropic Gaussian with standard deviation 1.0 and 0.6 (default) for WaveFlow and WaveGlow, respectively. For WaveFlow and WaveGlow, we run synthesis under NVIDIA Apex with 16-bit floating point (FP16) arithmetic, which does not introduce any degradation of audio fidelity and brings about a 2× speedup. We use the crowdMOS toolkit (Ribeiro et al., 2011) for naturalness evaluation, where test utterances from these models were presented to workers on Mechanical Turk. We also test the synthesis speed on an Nvidia V100 GPU without using any customized inference kernels. We only implement convolution queues (Paine et al., 2016) in Python to cache the intermediate hidden states within WaveFlow for autoregressive inference over the height dimension, which brings about a 4× speedup. We use the permutation strategy (ii) described in Section 3.4 for WaveFlow.
We report the 5-scale Mean Opinion Score (MOS), synthesis speed and model footprint in Table 5. We draw the following observations:
• The small WaveFlow (res. channels 64) has 5.91M parameters and can synthesize 22.05 kHz high-fidelity speech (MOS: 4.32) 42.60× faster than real-time. In contrast, the speech quality of small WaveGlow (res. channels 64) is significantly worse (MOS: 2.17). Indeed, WaveGlow (res. channels 256) requires 87.88M parameters for generating high-fidelity speech.
• The large WaveFlow (res. channels 256) outperforms the same size WaveGlow in terms of speech fidelity (MOS: 4.43 vs. 4.34). It also matches the state-of-the-art WaveNet while gener-
6 CONCLUSION
We propose WaveFlow, a compact flow-based model for raw audio, which can be directly trained with maximum likelihood estimation. It provides a unified view of flow-based models for time-domain waveforms, and includes WaveNet and WaveGlow as special cases. WaveFlow requires a small number of sequential steps to generate high-fidelity speech and obtains likelihood comparable to WaveNet. In the end, our small-footprint WaveFlow can generate 22.05kHz high-fidelity speech more than 40× faster than real-time on a GPU without engineered inference kernels.

1. What is the focus and contribution of the paper on raw audio generation?
2. What are the strengths of the proposed approach, particularly in its theoretical framework and experimental results?
3. How does the reviewer assess the clarity and organization of the paper's content, specifically regarding the subjective evaluation section?
4. Are there any suggestions for improving the presentation of the results, such as reorganizing the data or adding additional plots?
5. Is there a typo or error in the paper that the reviewer noticed?
## Updated review
I have read the rebuttal. The new version of the paper is definitely clearer, especially the contribution section and the experimental results. The new version addresses all my concerns, hence I am upgrading my rating to Accept.
## Original review
This paper presents the WaveFlow model, a generative model for raw audio. The model is based on a 2D-matrix approach, which allows the audio to be generated in a fixed number of steps. The model is shown to be a generalization of the two main approaches for raw audio generation, autoregressive flow and bipartite flow. The model is evaluated and compared with related work on an objective evaluation (log-likelihood) and a subjective evaluation (MOS), and is shown to be a trade-off between memory footprint, generation speed and quality.
I think this paper should be accepted, for the following reasons:
- The theoretical framework presented is novel and significant, as it provides a unified view of the two main approaches for neural waveform generation.
- The experiments are reasonably convincing, although they could be improved.
Detailed comments:
- In the subjective evaluation section (5.2), Table 5 is hard to decipher, especially given that there are three measurements to take into account, so it's not easy to see the benefit of the approach. Maybe the results should be organised differently, for instance grouping them according to one measurement could help, typically showing what speed and MOS each of the three models can achieve for a given model size. Maybe plotting speed vs MOS for the same model size could also be interesting.
- In the same section, is the WaveNet model the original one, or the Parallel WaveNet? If it's the original, why not include Parallel WaveNet in the table?
- Typo at the end of Section 1: "We orgnize" -> "organize"
1. What is the focus of the paper regarding text-to-speech synthesis?
2. What are the strengths and weaknesses of the proposed approach in comparison to prior works?
3. How does the reviewer assess the clarity and organization of the paper's content?
4. What are the limitations of the method that the author needs to discuss?
5. How does the reviewer evaluate the impact of the proposed approach on model complexity and expressiveness?
This submission belongs to the field of text-to-speech synthesis. In particular, it looks at a novel way of formulating a normalising flow using a 2D rather than the conventional 1D representation. Such a reformulation makes it possible to interpret several existing approaches as well as to formulate a new one with quite interesting properties. This submission would benefit from a discussion of the limitations of your approach.
I believe there is a great deal of interest in the use of normalising flows in the text-to-speech area. I believe this submission could be a good contribution to the area. The test log-likelihoods look comparable to existing approaches with significantly worse inference times. The mean opinion scores (MOS) seem to approach one of the standard baselines with significantly worse inference times though at the expense of increasing the number of model parameters from 6M to 86M parameters whilst gaining only 0.2 in MOS. The submission would have benefited from discussion about model complexity/expressivity and it's impact on MOS for WaveFlow, WaveNet and other approaches.
The largest issues with this submission are:
1) lack of proper technical description of your model in sections 1 and 2 making reading sections 1,2,3,etc in order awkward. It seems the order should be 3,4,(5),1,2,(5).
2) complete omission of conditioning on text to be synthesised; anyone not familiar deeply with speech synthesis will wonder where does the text come in
3) explicit statement of complexity for the operations involved using proper big-O notation; helps to avoid confusion about what do you mean by "parallel" (autoregressive WaveNet followed by parallel computation != parallel computation)
ICLR | Title
WaveFlow: A Compact Flow-based Model for Raw Audio
Abstract
In this work, we present WaveFlow, a small-footprint generative flow for raw audio, which is trained with maximum likelihood without the density distillation and auxiliary losses used in Parallel WaveNet. It provides a unified view of flow-based models for raw audio, including autoregressive flow (e.g., WaveNet) and bipartite flow (e.g., WaveGlow) as special cases. We systematically study these likelihood-based generative models for raw waveforms in terms of test likelihood and speech fidelity. We demonstrate that WaveFlow can synthesize high-fidelity speech and obtain likelihood comparable to WaveNet, while only requiring a few sequential steps to generate very long waveforms. In particular, our small-footprint WaveFlow has 5.91M parameters and can generate 22.05kHz high-fidelity speech 42.6× faster than real-time on a GPU without engineered inference kernels.¹
1 INTRODUCTION
Deep generative models have obtained noticeable successes for modeling raw audio in high-fidelity speech synthesis and music generation (e.g., van den Oord et al., 2016; Dieleman et al., 2018). Autoregressive models are among the best performing generative models for raw audio waveforms, providing the highest likelihood scores and generating high quality samples (e.g., van den Oord et al., 2016; Kalchbrenner et al., 2018). One of the most successful examples is WaveNet (van den Oord et al., 2016), an autoregressive model for waveform synthesis. It operates at the high temporal resolution of raw audio (e.g., 24kHz) and sequentially generates waveform samples at inference. As a result, WaveNet is prohibitively slow for speech synthesis and one has to develop highly engineered kernels for real-time inference (Arık et al., 2017a; Pharris, 2018). 2
Flow-based models (Dinh et al., 2014; Rezende and Mohamed, 2015) are a family of generative models, in which a simple initial density is transformed into a complex one by applying a series of invertible transformations. One group of models are based on autoregressive transformation, including autoregressive flow (AF) and inverse autoregressive flow (IAF) as the “dual” of each other (Kingma et al., 2016; Papamakarios et al., 2017; Huang et al., 2018). AF is analogous to autoregressive models, which performs parallel density evaluation and sequential synthesis. In contrast, IAF performs parallel synthesis but sequential density evaluation, making likelihood-based training very slow. Parallel WaveNet (van den Oord et al., 2018) distills an IAF from a pretrained autoregressive WaveNet, which gets the best of both worlds. However, it requires the density distillation with Monte Carlo approximation and a set of auxiliary losses for good performance, which complicates the training pipeline and increases the cost of development. Instead, ClariNet (Ping et al., 2019) simplifies the density distillation by computing a regularized KL divergence in closed-form.
Another group of flow-based models are based on bipartite transformations (Dinh et al., 2017; Kingma and Dhariwal, 2018), which provide parallel density evaluation and parallel synthesis. Most recently, WaveGlow (Prenger et al., 2019) and FloWaveNet (Kim et al., 2019) successfully apply Glow (Kingma and Dhariwal, 2018) and RealNVP (Dinh et al., 2017) to waveform synthesis, respectively. However, the bipartite transformations are less expressive than the autoregressive transformations (see Section 2.3 for a detailed discussion). In general, these bipartite flows require deeper layers, a larger hidden size, and a huge number of parameters to reach capacities comparable to autoregressive models. For example, WaveGlow and FloWaveNet have 87.88M and 182.64M parameters with 96 layers and 256 residual channels, respectively. In contrast, a 30-layer WaveNet has only 4.57M parameters with 128 residual channels.

1 Audio samples are located at: https://waveflow-demo.github.io/.
2 Real-time inference is a requirement for most production text-to-speech systems. For example, if the system can synthesize 1 second of speech in 0.5 seconds, it is 2× faster than real-time.
In this work, we present WaveFlow, a compact flow-based model for raw audio. Specifically, we make the following contributions:
1. WaveFlow is trained with maximum likelihood without density distillation and auxiliary losses used in Parallel WaveNet (van den Oord et al., 2018) and ClariNet (Ping et al., 2019), which simplifies the training pipeline and reduces the cost of development.
2. WaveFlow squeezes the 1-D raw waveform into a 2-D matrix and produces the whole audio within a fixed number of sequential steps. It also provides a unified view of flow-based models for raw audio and allows us to explicitly trade inference efficiency for model capacity. We implement WaveFlow with a dilated 2-D convolutional architecture (Yu and Koltun, 2015), and it includes both Gaussian WaveNet (Ping et al., 2019) and WaveGlow (Prenger et al., 2019) as special cases.
3. We systematically study likelihood-based generative models for raw audio in terms of test likelihood and speech quality. We demonstrate that WaveFlow can obtain likelihood comparable to WaveNet (van den Oord et al., 2016) and synthesize high-fidelity speech, while only requiring a few sequential steps to generate very long waveforms.
4. Our small-footprint WaveFlow has only 5.91M parameters and synthesizes 22.05 kHz high-fidelity speech (MOS: 4.32) more than 40× faster than real-time on an Nvidia V100 GPU. In contrast, WaveGlow (Prenger et al., 2019) requires 87.88M parameters for generating high-fidelity speech. The small memory footprint is preferred in production TTS systems, especially for on-device deployment.
We organize the rest of the paper as follows. Section 2 reviews the flow-based models with autoregressive and bipartite transformations. We present WaveFlow in Section 3 and discuss related work in Section 4. We report experimental results in Section 5 and conclude the paper in Section 6.
2 FLOW-BASED GENERATIVE MODELS
Flow-based models (Dinh et al., 2014; 2017; Rezende and Mohamed, 2015) transform a simple density of latent variables p(z) (e.g., isotropic Gaussian) into a complex data distribution p(x) by applying a bijection x = f(z), where x and z are both n-dimensional. The probability density of x can be obtained through the change of variables formula:
$$p(x) = p(z)\,\left|\det\!\left(\frac{\partial f^{-1}(x)}{\partial x}\right)\right|, \qquad (1)$$
where $z = f^{-1}(x)$ is the inverse transformation and $\det\!\left(\frac{\partial f^{-1}(x)}{\partial x}\right)$ is the determinant of its Jacobian.
In general, it takes $O(n^3)$ to compute the determinant, which is not scalable to high-dimensional data. There are two notable groups of flow-based models with triangular Jacobians and tractable determinants. They are based on autoregressive and bipartite transformations, respectively.
2.1 AUTOREGRESSIVE TRANSFORMATION
The autoregressive flow (AF) and inverse autoregressive flow (IAF) (Kingma et al., 2016; Papamakarios et al., 2017) use autoregressive transformations. Specifically, AF defines the inverse transformation z = f^{-1}(x; ϑ) as:

z_t = x_t · σ_t(x_{<t}; ϑ) + μ_t(x_{<t}; ϑ),   (2)

where the shifting variables μ_t(x_{<t}; ϑ) and scaling variables σ_t(x_{<t}; ϑ) are modeled by an autoregressive architecture parameterized by ϑ (e.g., WaveNet). Note that the t-th variable z_t only depends on x_{≤t}, thus the Jacobian is a triangular matrix as illustrated in Figure 1(a), and its determinant is the product of the diagonal entries: det(∂f^{-1}(x)/∂x) = ∏_t σ_t(x_{<t}; ϑ). The density p(x) can be easily evaluated by the change of variables formula, because z = f^{-1}(x) can be computed in parallel from Eq. (2) (i.e., the required O(n) operations can be done in O(1) time on modern GPU hardware). However, AF has to do sequential synthesis, because the forward transformation x = f(z) is autoregressive: x_t = \frac{z_t − μ_t(x_{<t}; ϑ)}{σ_t(x_{<t}; ϑ)}. In contrast, IAF uses an autoregressive transformation for z = f^{-1}(x):

z_t = \frac{x_t − μ_t(z_{<t}; ϑ)}{σ_t(z_{<t}; ϑ)},   (3)

making density evaluation impractically slow for training, but it can do parallel synthesis via x_t = z_t · σ_t(z_{<t}; ϑ) + μ_t(z_{<t}; ϑ). Parallel WaveNet (van den Oord et al., 2018) and ClariNet (Ping et al., 2019) are based on IAF, which lacks efficient density evaluation and relies on distillation from a pretrained autoregressive WaveNet.
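To make the contrast between Eqs. (2) and (3) concrete, the following minimal NumPy sketch (our illustration, not code from any of the cited papers) shows why AF gives parallel density evaluation but sequential synthesis; `mu_fn` and `sigma_fn` are hypothetical stand-ins for a causal, WaveNet-style autoregressive network.

```python
import numpy as np

def af_inverse(x, mu_fn, sigma_fn):
    """AF, Eq. (2): z_t = x_t * sigma_t(x_<t) + mu_t(x_<t); all z_t in parallel."""
    mu, sigma = mu_fn(x), sigma_fn(x)   # each entry depends only on x_<t by assumption
    return x * sigma + mu

def af_forward(z, mu_fn, sigma_fn):
    """AF synthesis: x_t = (z_t - mu_t(x_<t)) / sigma_t(x_<t); inherently sequential."""
    x = np.zeros_like(z)
    for t in range(len(z)):
        mu, sigma = mu_fn(x)[t], sigma_fn(x)[t]  # conditioned on already-generated x_<t
        x[t] = (z[t] - mu) / sigma
    return x
```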
2.2 BIPARTITE TRANSFORMATION
RealNVP (Dinh et al., 2017) and Glow (Kingma and Dhariwal, 2018) use a bipartite transformation by partitioning the data x into two groups x_a and x_b, where the index sets satisfy a ∪ b = {1, ..., n} and a ∩ b = ∅. Then, the inverse transformation z = f^{-1}(x; θ) is defined as:

z_a = x_a,   z_b = x_b · σ_b(x_a; θ) + μ_b(x_a; θ),   (4)

where the shifting variables μ_b(x_a; θ) and scaling variables σ_b(x_a; θ) are modeled by a feed-forward neural network. The Jacobian ∂f^{-1}(x)/∂x is a special triangular matrix as illustrated in Figure 1(b). By definition, the forward transformation x = f(z; θ) is

x_a = z_a,   x_b = \frac{z_b − μ_b(x_a; θ)}{σ_b(x_a; θ)},   (5)

and can also be done in parallel. As a result, the bipartite transformation provides both parallel density evaluation and parallel synthesis. In previous work, WaveGlow (Prenger et al., 2019) and FloWaveNet (Kim et al., 2019) both squeeze adjacent audio samples onto the channel dimension and apply the bipartite transformation on the partitioned channel dimension.
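A corresponding sketch of the bipartite transformation in Eqs. (4)-(5) (again an illustration under our assumptions, not the WaveGlow or FloWaveNet code) shows that both directions are parallel; `net` is a hypothetical feed-forward network producing the shift and log-scale for the second partition.

```python
import numpy as np

def coupling_inverse(x_a, x_b, net):
    """Eq. (4): z_a = x_a, z_b = x_b * sigma_b(x_a) + mu_b(x_a)."""
    mu, log_sigma = net(x_a)
    return x_a, x_b * np.exp(log_sigma) + mu

def coupling_forward(z_a, z_b, net):
    """Eq. (5): x_a = z_a, x_b = (z_b - mu_b(x_a)) / sigma_b(x_a)."""
    mu, log_sigma = net(z_a)
    return z_a, (z_b - mu) * np.exp(-log_sigma)
```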
2.3 CONNECTIONS
It is worthwhile to mention that the autoregressive transformation is in general more expressive than the bipartite transformation. As illustrated in Figure 1(a) and (b), the autoregressive transformation introduces \frac{n(n-1)}{2} complex non-linear dependencies (dark-blue cells) and n linear dependencies between the data x and latents z. In contrast, the bipartite transformation introduces only \frac{n^2}{4} non-linear dependencies and \frac{n}{2} linear dependencies. Indeed, one can reduce an autoregressive transformation z = f^{-1}(x; ϑ) to a bipartite transformation z = f^{-1}(x; θ) by: (i) picking an autoregressive order o such that all of the indices in set a rank earlier than the indices in b, and (ii) setting the shifting and scaling variables as

μ_t(x_{<t}; ϑ) = \begin{cases} 0 & \text{for } t ∈ a \\ μ_t(x_a; θ) & \text{for } t ∈ b \end{cases}, \qquad σ_t(x_{<t}; ϑ) = \begin{cases} 1 & \text{for } t ∈ a \\ σ_t(x_a; θ) & \text{for } t ∈ b \end{cases}.

Given this less expressive building block, bipartite-transformation-based flows generally require many more layers and a larger hidden size to match the capacity of a compact autoregressive model (e.g., as measured by test likelihood) (Kingma and Dhariwal, 2018; Prenger et al., 2019).
3 WAVEFLOW
In this section, we present WaveFlow and its implementation with dilated 2-D convolutions.
3.1 DEFINITION
We denote the high-dimensional 1-D waveform as x = {x_1, ..., x_n}. We first squeeze x into an h-row 2-D matrix X ∈ R^{h×w} in column-major order, where w = n/h and adjacent samples are in the same column. We assume Z ∈ R^{h×w} are sampled from an isotropic Gaussian, and define the inverse transformation Z = f^{-1}(X; Θ) as

Z_{i,j} = σ_{i,j}(X_{<i,•}; Θ) · X_{i,j} + μ_{i,j}(X_{<i,•}; Θ),   (6)
where X_{<i,•} represents all elements above the i-th row (see Figure 2 for an illustration). Note that: i) the receptive fields over the squeezed inputs X for computing Z_{i,j} in WaveFlow are strictly larger than those of WaveGlow when h > 2; ii) WaveNet is equivalent to an autoregressive flow with column-major order on the squeezed inputs X; iii) both WaveFlow and WaveGlow look at future waveform samples in the original x for computing Z_{i,j}, whereas WaveNet cannot; iv) the autoregressive flow with row-major order has larger receptive fields than WaveFlow and WaveGlow.
The shifting variables μ_{i,j}(X_{<i,•}; Θ) and scaling variables σ_{i,j}(X_{<i,•}; Θ) in Eq. (6) are modeled by a 2-D convolutional neural network detailed in Section 3.2. By definition, the variable Z_{i,j} only depends on the current X_{i,j} and the previous X_{<i,•} in row-major order, thus the Jacobian is a triangular matrix and its determinant is:

\det\left( \frac{\partial f^{-1}(X)}{\partial X} \right) = \prod_{i=1}^{h} \prod_{j=1}^{w} σ_{i,j}(X_{<i,•}; Θ).   (7)
As a result, the log-likelihood can be calculated in parallel by the change of variables formula in Eq. (1),

\log p(X) = −\sum_{i=1}^{h}\sum_{j=1}^{w}\left( \frac{Z_{i,j}^2}{2} + \frac{1}{2}\log(2π) \right) + \sum_{i=1}^{h}\sum_{j=1}^{w} \log σ_{i,j}(X_{<i,•}; Θ),   (8)

and one can do maximum likelihood training efficiently. At synthesis, one may first sample Z from the isotropic Gaussian and apply the forward transformation X = f(Z; Θ):

X_{i,j} = \frac{Z_{i,j} − μ_{i,j}(X_{<i,•}; Θ)}{σ_{i,j}(X_{<i,•}; Θ)},   (9)

which is autoregressive only over the height dimension. Thus, it requires h sequential steps to generate the whole waveform X. In practice, a small h (e.g., 8 or 16) works well, so we can generate very long waveforms within a few sequential steps.
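The following schematic NumPy sketch (our reconstruction under stated assumptions, not the official implementation) summarizes the computation above: squeeze the waveform into an h × w matrix in column-major order, evaluate Eq. (8) in parallel, and synthesize with Eq. (9) using only h sequential steps. `net` is a hypothetical stand-in for the dilated 2-D convolutional network that maps X to (mu, log_sigma), where row i depends only on the rows above it.

```python
import numpy as np

def squeeze(x, h):
    """Column-major squeeze: X[i, j] = x[j * h + i], so adjacent samples share a column."""
    w = len(x) // h
    return x[:h * w].reshape(w, h).T

def log_likelihood(X, net):
    """Eq. (8), computable in parallel during training."""
    mu, log_sigma = net(X)
    Z = X * np.exp(log_sigma) + mu
    return np.sum(-0.5 * (Z ** 2 + np.log(2 * np.pi)) + log_sigma)

def synthesize(Z, net):
    """Eq. (9): autoregressive only over the h rows."""
    X = np.zeros_like(Z)
    for i in range(Z.shape[0]):
        mu, log_sigma = net(X)           # row i of (mu, log_sigma) depends only on X[<i]
        X[i] = (Z[i] - mu[i]) * np.exp(-log_sigma[i])
    return X
```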
3.2 IMPLEMENTATION WITH DILATED 2-D CONVOLUTIONS
In this work, we implement WaveFlow with a dilated 2-D convolutional architecture. Specifically, we use a stack of 2-D convolution layers (e.g., 8 layers in all experiments) to model the shifting variables μ_{i,j}(X_{<i,•}; Θ) and scaling variables σ_{i,j}(X_{<i,•}; Θ) in Eq. (6). We use a similar architecture to WaveNet (van den Oord et al., 2016) by replacing the dilated 1-D convolutions with 2-D convolutions (Yu and Koltun, 2015), while still keeping the gated-tanh nonlinearities, residual connections and skip connections.
We set the filter sizes as 3 for both the height and width dimensions. We use non-causal convolutions on the width dimension and set the dilation cycle as [1, 2, 4, ..., 2^7]. The convolutions on the height dimension are causal with an autoregressive constraint, and their dilation cycle needs to be designed carefully. In practice, we find the following rules of thumb are important to obtain good results:
• As motivated by the dilation cycle of WaveNet (van den Oord et al., 2016), the dilations of the 8 layers should be set as d = [1, 2, ..., 2^s, 1, 2, ..., 2^s, ...], where s ≤ 7. 3
• The receptive field r over the height dimension should be larger than the squeezed height h. Otherwise, it explicitly introduces unnecessary conditional independence and leads to lower likelihood (see Table 1 for an example). Note that the receptive field of a stack of dilated convolutional layers is r = (k − 1) \sum_i d_i + 1, where k is the filter size and d_i is the dilation at the i-th layer. Thus, the sum of dilations should satisfy \sum_i d_i ≥ \frac{h − 1}{k − 1} (see the small check sketched after this list). However, when h is larger than or equal to 512, we simply set the dilation cycle as [1, 2, 4, ..., 2^7].
• When the receptive field r is already larger than h, we find that convolutions with smaller dilations and fewer holes provide higher likelihood.
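As referenced in the second rule above, a small helper (our sketch, not from the paper) makes the receptive-field check explicit.

```python
def receptive_field(dilations, k=3):
    """Receptive field of a stack of dilated convolutions: r = (k - 1) * sum(d_i) + 1."""
    return (k - 1) * sum(dilations) + 1

h = 16                                  # example squeezed height
dilations = [1, 2, 4, 8, 1, 2, 4, 8]    # 8 layers with s = 3 in the cycle [1, 2, ..., 2**s]
assert receptive_field(dilations) >= h  # r = 2 * 30 + 1 = 61 >= 16
```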
We summarize the heights and preferred dilations in our experiments in Table 2. Note that WaveFlow becomes fully autoregressive when we squeeze x by its length (i.e., h = n) and set its filter size to 1 over the width dimension, which is equivalent to a Gaussian WaveNet learned by MLE (Ping et al., 2019). If we squeeze x with h = 2 and set the filter size to 1 on the height dimension, WaveFlow becomes a bipartite flow and is equivalent to WaveGlow with 2 squeezed channels.
3.3 CONDITIONAL GENERATION
In neural speech synthesis, a neural vocoder (e.g., WaveNet) synthesizes the time-domain waveforms. It can be conditioned on linguistic features (van den Oord et al., 2016; Arık et al., 2017a), the mel-spectrograms from a text-to-spectrogram model (Ping et al., 2018; Shen et al., 2018), or the learned hidden representation within a text-to-wave architecture (Ping et al., 2019). In this work, we test WaveFlow by conditioning it on ground-truth mel-spectrograms as in previous work (Prenger et al., 2019; Kim et al., 2019). The mel-spectrogram is upsampled to the same resolution as the waveform samples by transposed 2-D convolutions (Ping et al., 2019). To align with the squeezed waveform, the conditioner is squeezed to the shape c × h × w, where c is the feature dimension (e.g., the number of bands of the spectrogram). After a 1 × 1 convolution mapping the features to the residual channels, they are added as the bias term at each layer (van den Oord et al., 2016).
3 We did try different setups, but they all lead to worse likelihood scores.
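A sketch of the conditioner path (illustrative only; the strides and filter sizes follow the experimental setup in Section 5, while the padding and single-channel layout are our assumptions chosen so the time axis upsamples exactly 256×):

```python
import torch
import torch.nn as nn

# Two transposed 2-D convolutions, each upsampling time by 16 (256x in total),
# interleaved with leaky ReLU (alpha = 0.4).
upsample = nn.Sequential(
    nn.ConvTranspose2d(1, 1, kernel_size=(3, 32), stride=(1, 16), padding=(1, 8)),
    nn.LeakyReLU(0.4),
    nn.ConvTranspose2d(1, 1, kernel_size=(3, 32), stride=(1, 16), padding=(1, 8)),
    nn.LeakyReLU(0.4),
)

mel = torch.randn(1, 1, 80, 63)                 # (batch, channel, mel bands, frames)
cond = upsample(mel)                            # time axis now at the waveform rate
h = 16
# Squeeze the time axis to (h, w) in the same column-major layout as the waveform,
# giving a conditioner of shape (batch, c, h, w).
cond = cond.squeeze(1).reshape(1, 80, -1, h).permute(0, 1, 3, 2)
```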
3.4 STACKING MULTIPLE FLOWS WITH PERMUTATIONS OVER HEIGHT DIMENSION
Flow-based models require a series of transformations until the distribution p(X) reaches a desired level of complexity (e.g., Rezende and Mohamed, 2015). We let X = Z^{(n)} and repeatedly apply the transformation Z^{(i−1)} = f^{-1}(Z^{(i)}; Θ^{(i)}) defined in Eq. (6) from Z^{(n)} → ... → Z^{(i)} → ... → Z^{(0)}. We assume Z^{(0)} is from the isotropic Gaussian distribution. The likelihood p(X) can be evaluated by iteratively applying the chain rule:

p(X) = p(Z^{(0)}) \prod_{i=1}^{n} \left| \det\left( \frac{\partial f^{-1}(Z^{(i)}; Θ^{(i)})}{\partial Z^{(i)}} \right) \right|.
We find that permuting each Z^{(i)} over the height dimension after each transformation can significantly improve the likelihood scores. In particular, we test two permutation strategies for WaveFlow models stacked with 8 flows (i.e., X = Z^{(8)}) in Table 3: (i) we reverse each Z^{(i)} over the height dimension after each transformation, and (ii) we reverse Z^{(7)}, Z^{(6)}, Z^{(5)}, Z^{(4)} over the height dimension as before, but split Z^{(3)}, Z^{(2)}, Z^{(1)}, Z^{(0)} in the middle of the height dimension and then reverse each part respectively. 4 Note that one also needs to permute the conditioner on the height dimension accordingly, so it stays aligned with Z^{(i)}. From Table 3, both (i) and (ii) significantly outperform the model without permutations, mainly because of bidirectional modeling. Strategy (ii) outperforms (i) because of its more diverse autoregressive orders.
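The two permutation strategies can be sketched as follows (our illustration, not the released code), applied to a squeezed matrix Z of shape (h, w):

```python
import numpy as np

def permute_reverse(Z):
    """Strategy (i): reverse the height dimension."""
    return Z[::-1]

def permute_split_reverse(Z):
    """Strategy (ii): split the height dimension in the middle and reverse each half,
    so rows [0, ..., h/2-1, h/2, ..., h-1] become [h/2-1, ..., 0, h-1, ..., h/2]."""
    h = Z.shape[0]
    return np.concatenate([Z[:h // 2][::-1], Z[h // 2:][::-1]], axis=0)
```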
4 RELATED WORK
Deep neural networks for speech synthesis (a.k.a. text-to-speech) have received a lot of attention. Over the past few years, several neural text-to-speech (TTS) systems have been introduced, including WaveNet (van den Oord et al., 2016), Deep Voice (Arık et al., 2017a), Deep Voice 2 (Arık et al., 2017b), Deep Voice 3 (Ping et al., 2018), Tacotron (Wang et al., 2017), Tacotron 2 (Shen et al., 2018), Char2Wav (Sotelo et al., 2017), VoiceLoop (Taigman et al., 2018), WaveRNN (Kalchbrenner et al., 2018), ClariNet (Ping et al., 2019), Transformer TTS (Li et al., 2019), ParaNet (Peng et al., 2019) and FastSpeech (Ren et al., 2019).
Neural vocoders, such as WaveNet, play the most important role in recent advances of speech synthesis. In previous work, the state-of-the-art neural vocoders are autoregressive models (van den Oord et al., 2016; Mehri et al., 2017; Kalchbrenner et al., 2018). Several engineering endeavors have been advocated for speeding up their sequential generation process (Arık et al., 2017a; Kalchbrenner et al., 2018). In particular, Subscale WaveRNN (Kalchbrenner et al., 2018) folds a long waveform sequence x_{1:n} into a batch of shorter sequences and can produce up to 16 samples per step, thus it requires at least n/16 steps to generate the whole audio. Note that this is different from the proposed WaveFlow, which can generate x_{1:n} within a fixed number of steps (e.g., 16). Most recently, flow-based models have been successfully applied for parallel waveform synthesis with comparable fidelity
4 After the split & reverse operations, the height dimension [0, ..., h/2 − 1, h/2, ..., h − 1] becomes [h/2 − 1, ..., 0, h − 1, ..., h/2].
as autoregressive models (van den Oord et al., 2018; Ping et al., 2019; Prenger et al., 2019; Kim et al., 2019; Yamamoto et al., 2019; Serrà et al., 2019). Among these models, WaveGlow (Prenger et al., 2019) and FloWaveNet (Kim et al., 2019) have a simple training pipeline as they solely use the maximum likelihood objective. However, both of them are less expressive than autoregressive models as indicated by their lower likelihood scores.
Flow-based models can either represent the approximate posteriors for variational inference (Rezende and Mohamed, 2015; Kingma et al., 2016; Berg et al., 2018), or can be trained directly on data using the change of variables formula (Dinh et al., 2014; 2017; Kingma and Dhariwal, 2018; Grathwohl et al., 2018). In previous work, Glow (Kingma and Dhariwal, 2018) extends RealNVP (Dinh et al., 2017) with invertible 1 × 1 convolutions and can generate high-quality images. Later on, Hoogeboom et al. (2019) generalize the 1 × 1 convolution to invertible d × d convolutions which operate over both the channel and spatial axes.
5 EXPERIMENT
In this section, we compare likelihood-based generative models for raw audio in terms of test likelihood, speech quality and synthesis speed.
Data: We use the LJ speech dataset (Ito, 2017) containing about 24 hours of audio with a sampling rate of 22.05 kHz, recorded on a MacBook Pro in a home environment. It consists of 13,100 audio clips of a single female speaker reading passages from 7 non-fiction books.
Models: We evaluate several likelihood-based generative models, including Gaussian WaveNet, WaveGlow, WaveFlow and autoregressive flow (AF). As in Section 3.2, we implement autoregressive flow from WaveFlow by squeezing the waveforms by its length and setting the filter size as 1 for width dimension. Both WaveNet and AF have 30 layers with dilation cycle [1, 2, · · · , 512] and filter size 3. For WaveGlow and WaveFlow, we investigate different setups, including the number of flows, size of residual channels, and squeezed height h.
Conditioner: We use the 80-band mel-spectrogram of the original audio as the conditioner for WaveNet, WaveGlow, and WaveFlow. We use FFT size 1024, hop size 256, and window size 1024. For WaveNet and WaveFlow, we upsample the mel conditioner 256 times by applying two layers of transposed 2-D convolution (in time and frequency) interleaved with leaky ReLU (α = 0.4). The upsampling strides in time are 16 and the 2-D convolution filter sizes are [32, 3] for both layers. For WaveGlow, we directly use the open source implementation. 5
Training: We train all models on 8 Nvidia 1080Ti GPUs using randomly chosen short clips of 16, 000 samples from each utterance. For WaveFlow and WaveNet, we use the Adam optimizer (Kingma and Ba, 2015) with a batch size of 8 and a constant learning rate of 2× 10−4. For WaveGlow, we use the Adam optimizer with a batch size of 16 and a learning rate of 1× 10−4. We applied weight normalization (Salimans and Kingma, 2016) whenever possible.
5.1 LIKELIHOOD
The test log-likelihoods (LLs) of all models are evaluated at 1M training steps. Note that: i) all of the LLs decrease slowly after 1M steps, and ii) it took one month to train the largest WaveGlow (residual channels = 512) for 1M steps. Thus, we chose 1M as the cut-off to compare these models. We summarize the results in Table 4 with models from row (a) to (t). We draw the following observations:
• Stacking a large number of flows improves LLs for WaveFlow, autoregressive flow, and WaveGlow. For example, (m) WaveFlow with 8 flows provides a larger LL than (l) WaveFlow with 6 flows. The (b) autoregressive flow obtains the highest likelihood and even outperforms (a) WaveNet with the same number of parameters. Indeed, AF provides bidirectional modeling by stacking 3 flows interleaved with reverse operations.
• WaveFlow has a much larger likelihood than WaveGlow with a comparable number of parameters. In particular, the small-footprint (k) WaveFlow has only 5.91M parameters but provides likelihood comparable (5.023 vs. 5.026) to the largest (g) WaveGlow with 268.29M parameters.
5https://github.com/NVIDIA/waveglow
5.2 SPEECH FIDELITY AND SYNTHESIS SPEED
We train WaveNet for 1M steps. We train WaveGlow and WaveFlow for 2M steps with small residual channels (64, 96 and 128). We train larger models (res. channels 256 and 512) for 1M steps due to the practical time constraint. At synthesis, we sample Z from an isotropic Gaussian with standard deviation 1.0 and 0.6 (default) for WaveFlow and WaveGlow, respectively. For WaveFlow and WaveGlow, we run synthesis under NVIDIA Apex with 16-bit floating point (FP16) arithmetic, which does not introduce any degradation of audio fidelity and brings about a 2× speedup. We use the crowdMOS toolkit (Ribeiro et al., 2011) for naturalness evaluation, where test utterances from these models were presented to workers on Mechanical Turk. We also test the synthesis speed on a Nvidia V100 GPU without using any customized inference kernels. We only implement convolution queues (Paine et al., 2016) in Python to cache the intermediate hidden states within WaveFlow for autoregressive inference over the height dimension, which brings about a 4× speedup. We use the permutation strategy (ii) described in Section 3.4 for WaveFlow.
We report the 5-scale Mean Opinion Score (MOS), synthesis speed and model footprint in Table 5. We draw the following observations:
• The small WaveFlow (res. channels 64) has 5.91M parameters and can synthesize 22.05 kHz high-fidelity speech (MOS: 4.32) 42.60× faster than real-time. In contrast, the speech quality of small WaveGlow (res. channels 64) is significantly worse (MOS: 2.17). Indeed, WaveGlow (res. channels 256) requires 87.88M parameters for generating high-fidelity speech.
• The large WaveFlow (res. channels 256) outperforms the same-size WaveGlow in terms of speech fidelity (MOS: 4.43 vs. 4.34). It also matches the state-of-the-art WaveNet while generating speech much faster.
6 CONCLUSION
We propose WaveFlow, a compact flow-based model for raw audio, which can be directly trained with maximum likelihood estimation. It provides a unified view of flow-based models for time-domain waveforms, and includes WaveNet and WaveGlow as special cases. WaveFlow requires a small number of sequential steps to generate high-fidelity speech and obtains likelihood comparable to WaveNet. In the end, our small-footprint WaveFlow can generate 22.05kHz high-fidelity speech more than 40× faster than real-time on a GPU without engineered inference kernels. | 1. What is the main contribution of the paper regarding waveform synthesis?
2. How does the reviewer assess the quality and organization of the paper's content?
3. What are the strengths and weaknesses of the proposed approach in comparison to prior works?
4. Do you have any concerns about the experimental design or analysis?
5. Are there any suggestions for future research directions or improvements to the current method? | Review | Review
This paper re-organizes the high-dimensional 1-D raw waveform as a 2-D matrix, so that the autoregressive flow only runs over the row dimension while the log-likelihood can still be calculated in parallel. The number of required parameters is desirably small for synthesizing high-fidelity speech faster than real time. Although this method does not rank first on every measurement, it still obtains the best average results.
In general, this paper is clearly written, well organized and easy to follow. The authors carried out sufficient experiments and analyses, and proposed some rules of thumb to build a good model. On one hand, the contributions can be grasped; on the other hand, they are not clearly highlighted. The results were averaged, but the averaging was not clearly explained.
The authors suggest specifying a receptive field bigger than the squeezed height. However, the property of getting better performance using a deeper WaveNet is not clearly explained or investigated. In the experiments, a small number of generative steps is considered, because short sequences based on the autoregressive model are used.
This paper mentions that using a convolution queue could improve the synthesis speed. However, the synthesis speed is already fast enough, since it is almost 15 times faster than real time. In practical applications, 100× faster is almost the same as 15× faster for humans, and the task does not involve real-time human interaction. It is suggested to focus on reducing the number of parameters or enhancing the log-likelihood.
ICLR | Title
The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Minima and Regularization Effects
Abstract
Understanding the behavior of stochastic gradient descent (SGD) in the context of deep neural networks has raised lots of concerns recently. Along this line, we theoretically study a general form of gradient based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. Through investigating this general optimization dynamics, we analyze the behavior of SGD on escaping from minima and its regularization effects. A novel indicator is derived to characterize the efficiency of escaping from minima through measuring the alignment of noise covariance and the curvature of loss function. Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in terms of escaping efficiency. We further show that the anisotropic noise in SGD satisfies the two conditions, and thus helps to escape from sharp and poor minima effectively, towards more stable and flat minima that typically generalize well. We verify our understanding through comparing this anisotropic diffusion with full gradient descent plus isotropic diffusion (i.e. Langevin dynamics) and other types of position-dependent noise.
1 INTRODUCTION
As a successful learning algorithm, stochastic gradient descent (SGD) was originally adopted for dealing with the computational bottleneck of training neural networks with large-scale datasets (Bottou, 1991). Its empirical efficiency and effectiveness have attracted lots of attention, and thus SGD and its variants have become the standard workhorse for learning deep models. Besides the aspect of empirical efficiency, researchers have recently started to analyze the optimization behaviors of SGD and their impacts on generalization.
The optimization properties of SGD have been studied from various perspectives. The convergence behaviors of SGD for simple one hidden layer neural networks were investigated in (Li & Yuan, 2017; Brutzkus et al., 2017). In non-convex settings, the characterization of how SGD escapes from stationary points, including saddle points and local minima, was analyzed in (Daneshmand et al., 2018; Jin et al., 2017; Hu et al., 2017).
On the other hand, in the context of deep learning, researchers realized that the noise introduced by SGD impacts generalization, prompted by the phenomenon that training with a large batch can cause a significant drop in test accuracy (Keskar et al., 2017). In particular, several works attempted to investigate how the magnitude of the noise influences generalization during SGD optimization, including the batch size and learning rate (Hoffer et al., 2017; Goyal et al., 2017; Chaudhari & Soatto, 2017; Jastrzębski et al., 2017). Another line of research interpreted SGD from a Bayesian perspective. In (Mandt et al., 2017; Chaudhari & Soatto, 2017), SGD was interpreted as performing variational inference, where a certain entropic regularization is involved to prevent overfitting. The work of (Smith & Le, 2018) tried to provide an understanding based on model evidence. These explanations are compatible with the flat/sharp minima argument (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017), since Bayesian inference tends to target regions with large probability mass, corresponding to flat minima.
However, when analyzing the optimization behavior and regularization effects of SGD, most existing works only assume that the noise covariance of SGD is constant or upper bounded by some constant, and the role that the noise structure of the stochastic gradient plays in optimization and generalization has rarely been discussed in the literature.
In this work, we theoretically study a general form of gradient-based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. By investigating this general dynamics, we analyze how the noise structure of SGD influences the escaping behavior from minima and its regularization effects. Several novel theoretical results and empirical justifications are made.
1. We derive a key indicator to characterize the efficiency of escaping from minima through measuring the alignment of the noise covariance and the curvature of the loss function. Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in terms of escaping efficiency;
2. We further justify that SGD in the context of deep neural networks satisfies these two conditions, and thus provide a plausible explanation why SGD can escape from sharp minima more efficiently, converging to flat minima with a higher probability. Moreover, these flat minima typically generalize well according to various works (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017; Neyshabur et al., 2017; Wu et al., 2017). We also show that Langevin dynamics with well tuned isotropic noise cannot beat SGD, which further confirms the importance of noise structure of SGD;
3. A large number of experiments are designed systematically to justify our understanding of the behavior of the anisotropic diffusion of SGD. We compare SGD with full gradient descent with different types of diffusion noise, including isotropic and position-dependent/independent noise. All these comparisons demonstrate the effectiveness of anisotropic diffusion for good generalization in training deep networks.
The remaining of the paper is organized as follows. In Section 2, we introduce the background of SGD and a general form of optimization dynamics of interest. We then theoretically study the behaviors of escaping from minima in Ornstein-Uhlenbeck process in Section 3, and establish two conditions for characterizing the noise structure that affects the escaping efficiency. In Section 4, we show that the noise of SGD in the context of deep learning meets the two conditions, and thus explains its superior efficiency of escaping from sharp minima over other dynamics with isotropic noise. Various experiments are conducted for verifying our understanding in Section 5, and we conclude the paper in Section 6.
2 BACKGROUND
In general, supervised learning usually involves an optimization process of minimizing an empirical loss over training data, L(θ) := \frac{1}{N}\sum_{i=1}^{N} ℓ(f(x_i; θ), y_i), where {(x_i, y_i)}_{i=1}^{N} denotes the training set with N i.i.d. samples, the prediction function f is often parameterized by θ ∈ R^D, such as deep neural networks, and ℓ(·, ·) is the loss function, such as mean squared error or cross entropy, typically corresponding to a certain negative log-likelihood. Due to the over-parameterization and non-convexity of the loss function in deep networks, there exist multiple global minima, exhibiting diverse generalization performance. We call those solutions that generalize well good solutions or minima, and vice versa.
Gradient descent and its stochastic variants. A typical approach to minimize the loss function is gradient descent (GD), whose dynamics in each iteration t is θ_{t+1} = θ_t − η_t g_0(θ_t), where g_0(θ_t) = ∇_θ L(θ_t) denotes the full gradient and η_t denotes the learning rate. In non-convex optimization, a more useful kind of gradient-based optimizer acts like GD with an unbiased noise, including gradient Langevin dynamics (GLD),

θ_{t+1} = θ_t − η_t g_0(θ_t) + σ_t ε_t,   ε_t ∼ N(0, I),

and stochastic gradient descent (SGD): during each iteration t, a minibatch of training samples of size m is randomly selected, with index set B_t ⊂ {1, 2, ..., N}, and a stochastic gradient is evaluated on the chosen minibatch, \tilde{g}(θ_t) = \sum_{i∈B_t} ∇_θ ℓ(f(x_i; θ_t), y_i)/m, which is an unbiased estimator of the full gradient g_0(θ_t). Then, the parameters are updated with some learning rate η_t as θ_{t+1} = θ_t − η_t \tilde{g}(θ_t). Denote g(θ) = ∇_θ ℓ(f(x; θ), y), the gradient of the loss for a single data point (x, y), and assume that the size of the minibatch is large enough for the central limit theorem to hold; then \tilde{g}(θ_t) follows a Gaussian distribution (Mandt et al., 2017; Li et al., 2017),

\tilde{g}(θ_t) ∼ N\left( g_0(θ_t), \frac{1}{m} Σ(θ_t) \right),   where   Σ(θ_t) ≈ \frac{1}{N}\sum_{i=1}^{N} \left( g(θ_t; x_i) − g_0(θ_t) \right)\left( g(θ_t; x_i) − g_0(θ_t) \right)^T.   (1)

Note that the covariance matrix Σ depends on the model architecture, dataset and the current parameter θ_t. Now we can rewrite the update of SGD as

θ_{t+1} = θ_t − η_t g_0(θ_t) + \frac{η_t}{\sqrt{m}} ε_t,   ε_t ∼ N\left( 0, Σ(θ_t) \right).   (2)
Inspired by GLD and SGD, we may consider a general kind of optimization dynamics, namely, gradient descent with unbiased noise,

θ_{t+1} = θ_t − η_t g_0(θ_t) + σ_t ε_t,   ε_t ∼ N(0, Σ_t).   (3)
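A minimal PyTorch-style sketch (our illustration, not the authors' code) of the unified dynamics in Eq. (3); `noise_sampler` abstracts the choice of covariance, e.g., isotropic Gaussian noise for GLD or noise drawn with the minibatch gradient covariance for SGD.

```python
import torch

def noisy_gd_step(theta, full_grad_fn, noise_sampler, lr=0.1, sigma=1.0):
    """One step of theta_{t+1} = theta_t - lr * g0(theta_t) + sigma * eps_t."""
    g0 = full_grad_fn(theta)       # full gradient of the empirical loss
    eps = noise_sampler(theta)     # eps_t ~ N(0, Sigma_t)
    return theta - lr * g0 + sigma * eps

# Example: GLD with constant isotropic noise.
isotropic_noise = lambda theta: torch.randn_like(theta)
```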
For a small enough constant learning rate η_t = η, the iteration in Eq. (3) can be treated as the numerical discretization of the following stochastic differential equation (Li et al., 2017; Jastrzębski et al., 2017; Chaudhari & Soatto, 2017),

dθ_t = −∇_θ L(θ_t) dt + \sqrt{η σ_t^2 Σ_t} \, dW_t.   (4)

Considering \sqrt{η σ_t^2 Σ_t} as the coefficient of the noise term, existing works (Hoffer et al., 2017; Jastrzębski et al., 2017) studied the influence of the noise magnitude of SGD on generalization, i.e., η σ_t^2 = η/m.
In this work, we focus on studying how the anisotropic structure of Σ_t in SGD helps escape from minima, by bridging the covariance matrix with the Hessian of the loss surface, and on its implicit regularization effects on generalization, especially in the deep learning context. To eliminate the influence of the noise magnitude, we constrain it to be constant when studying different structures of the noise covariance. The noise magnitude can be evaluated as the expectation of the squared norm of the noise vector,
E[(\sqrt{η} σ_t ε_t)^T (\sqrt{η} σ_t ε_t)] = η σ_t^2 E[ε_t^T ε_t] = η σ_t^2 \operatorname{Tr} E[ε_t ε_t^T] = η σ_t^2 \operatorname{Tr} Σ_t.   (5)

Thus, we introduce the following constraint:

given time t,   η σ_t^2 \operatorname{Tr}(Σ_t) is constant.   (6)
From the statistical physics point of view, Tr(η σ_t^2 Σ_t) characterizes the kinetic energy (Gardiner); it is therefore natural to keep the energy fixed, since otherwise it is trivial that the higher the energy is, the less stable the system is.
For simplicity, we absorb η σ_t^2 into Σ_t, denoting η σ_t^2 Σ_t as Σ_t. If not pointed out otherwise, the subscript t of the matrix Σ_t is omitted to emphasize that we are fixing t and discussing the varying structure of Σ.
3 THE BEHAVIORS OF ESCAPING FROM MINIMA IN ORNSTEIN-UHLENBECK PROCESS
For a general loss function L(θ) = E_X ℓ_X(θ) (the expectation could be either population or empirical), where X denotes a data example and θ denotes the parameters to be optimized, under suitable smoothness assumptions, the SDE associated with the gradient-variant optimizer shown in Eq. (4) can be written as follows (Li et al., 2017; Jastrzębski et al., 2017; Chaudhari & Soatto, 2017; Hu et al., 2017), with a slight abuse of notation,

dθ_t = −∇_θ L(θ_t) dt + Σ_t^{1/2} dW_t.   (7)
Let L_0 = L(θ_0) be one of the minimal values of L(θ). Then, for a fixed t small enough (such that L_t − L_0 ≥ 0), E_{θ_t}[L_t − L_0] characterizes the efficiency of θ escaping from the minimum θ_0 of L(θ). It is natural to measure the escaping efficiency using E[L_t − L_0] since it characterizes the increase of the potential, i.e., the increase of the loss L. Also note that, since L_t − L_0 ≥ 0, for any δ > 0 the escaping probability P(L_t − L_0 ≥ δ) can be controlled by the expectation E[L_t − L_0], since by Markov's inequality P(L_t − L_0 ≥ δ) ≤ E[L_t − L_0]/δ.
Proposition 1 (Escaping efficiency for general process). For the process (7), provided mild smoothness assumptions, the escaping efficiency from the minimum θ_0 is

E[L_t − L_0] = −\int_0^t E\left[ ∇L^T ∇L \right] dt + \int_0^t \frac{1}{2} E \operatorname{Tr}(H_t Σ_t) \, dt,   (8)

where H_t denotes the Hessian of L(θ_t) at θ_t.
We provide the proof in Appendix, and the same for the other propositions.
The escaping efficiency for general processes is hard to analyze due to the intractability of the integral in Eq. (8). However, we may consider the second-order approximation locally near the minimum θ_0, where L(θ) ≈ L_0 + \frac{1}{2}(θ − θ_0)^T H (θ − θ_0). Without loss of generality, we suppose θ_0 = 0. Further, suppose that H is a positive definite matrix and the diffusion covariance Σ_t = Σ is constant in t. Then the SDE (7) becomes an Ornstein-Uhlenbeck process,

dθ_t = −H θ_t dt + Σ^{1/2} dW_t,   θ_0 = 0.   (9)
Proposition 2 (Escaping efficiency of Ornstein-Uhlenbeck process). For the Ornstein-Uhlenbeck process (9), with t small enough, the escaping efficiency from the minimum θ_0 = 0 is

E[L_t − L_0] = \frac{1}{4} \operatorname{Tr}\left( \left( I − e^{−2Ht} \right) Σ \right) ≈ \frac{t}{2} \operatorname{Tr}(HΣ).   (10)
Inspired by Propositions 1 and 2, we propose Tr(HΣ) as an empirical indicator measuring the efficiency of a stochastic process escaping from minima. We now analyze which kind of noise covariance structure Σ benefits escaping from sharp minima, under the constraint in Eq. (6).
Firstly, for an isotropic loss surface, i.e., H = λI, the escaping efficiency is E[L_t − L_0] = \frac{λt}{2} \operatorname{Tr} Σ, which is invariant under the constraint that Tr Σ is constant (Eq. (6)). Thus it is only nontrivial to study the impact of the noise structure when the Hessian of the loss surface is anisotropic.
Secondly, with H and Σ being semi-positive definite, to achieve the maximum of Tr(HΣ) under constraint (6), Σ should be Σ* = (Tr Σ) · λ_1 u_1 u_1^T, where λ_1, u_1 are the maximal eigenvalue and the corresponding unit eigenvector of H. Note that the rank-1 matrix Σ* is highly anisotropic. More generally, the following Proposition 3 characterizes a kind of anisotropic noise significantly outperforming isotropic noise in the order of the number of parameters D, given that H is ill-conditioned.
Proposition 3 (The benefits of anisotropic noise). With semi-positive definite H and Σ, assume

(1) H is ill-conditioned. Let λ_1 ≥ λ_2 ≥ ... ≥ λ_D ≥ 0 be the eigenvalues of H in descending order; for some constant k ≪ D and d > 1/2,

λ_1 > 0,   λ_{k+1}, λ_{k+2}, ..., λ_D < λ_1 D^{−d};   (11)

(2) Σ is "aligned" with H. Let u_i be the unit eigenvector corresponding to eigenvalue λ_i; for some projection coefficient a > 0,

u_1^T Σ u_1 ≥ a λ_1 \frac{\operatorname{Tr} Σ}{\operatorname{Tr} H}.   (12)

Then we have the following benefit of the anisotropic noise over the isotropic one in terms of escaping efficiency, characterized by the ratio

\frac{\operatorname{Tr}(HΣ)}{\operatorname{Tr}(H\bar{Σ})} = O\left( a D^{2d−1} \right),   (13)

where \bar{Σ} = \frac{\operatorname{Tr} Σ}{D} I denotes the covariance of isotropic noise, chosen to meet the constraint in Eq. (6).
To give some geometric intuition on the left-hand side of Eq. (12), let the maximal eigenvalue and its corresponding unit eigenvector of Σ be γ_1, v_1; then the left-hand side has the lower bound u_1^T Σ u_1 ≥ u_1^T v_1 γ_1 v_1^T u_1 = γ_1 ⟨u_1, v_1⟩^2. Thus, if the maximal eigenvalues of H and Σ are aligned in proportion, γ_1/\operatorname{Tr} Σ ≥ a_1 λ_1/\operatorname{Tr} H, and the angle between their corresponding unit eigenvectors is close to zero, ⟨u_1, v_1⟩ ≥ a_2, then the second condition Eq. (12) in Proposition 3 holds with a = a_1 a_2.
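A small numerical sketch (our illustration, with an arbitrary synthetic spectrum) of the ratio in Eq. (13): a rank-1 covariance aligned with the top Hessian eigenvector versus isotropic noise of the same trace.

```python
import numpy as np

D = 100
eigs = np.array([1.0] + [1e-3] * (D - 1))        # ill-conditioned spectrum (condition (11))
Q, _ = np.linalg.qr(np.random.randn(D, D))
H = Q @ np.diag(eigs) @ Q.T

u1 = Q[:, 0]                                      # top eigenvector of H
trace_budget = 1.0                                # constraint (6): fixed Tr(Sigma)
Sigma_aniso = trace_budget * np.outer(u1, u1)     # rank-1 noise aligned with u1
Sigma_iso = (trace_budget / D) * np.eye(D)        # isotropic noise with the same trace

ratio = np.trace(H @ Sigma_aniso) / np.trace(H @ Sigma_iso)
print(ratio)   # = D * lambda_1 / Tr(H), roughly 91 here: aligned noise escapes far faster
```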
Typically, in the scenario of modern deep neural networks, due to the over-parameterization, the Hessian and the gradient covariance are usually ill-conditioned and anisotropic near minima, as shown by (Sagun et al., 2017) and (Chaudhari & Soatto, 2017). Thus the first condition in Eq. (11) usually holds for deep neural networks, and we further justify it by experiments in Section 5.3. Therefore, in the following section, we turn to focus on how the gradient covariance, i.e., the covariance of the SGD noise, meets the second condition of Proposition 3 in the context of deep neural networks.
4 THE ANISOTROPIC NOISE OF SGD IN DEEP NETWORKS
In this section, we mainly investigate the anisotropic structure of gradient covariance in SGD, and explore its connection with the Hessian of loss surface.
Around the true parameter. According to classic statistical theory (Pawitan, 2001, Chap. 8), for the population loss L(θ) = E_X ℓ(θ), with ℓ being the negative log-likelihood, when evaluated at the true parameter θ*, there is an exact equivalence between the Hessian H of the population loss and the Fisher information matrix F,

F(θ*) := E_X[∇_θ ℓ(θ*) ∇_θ ℓ(θ*)^T] = E_X[∇_θ^2 ℓ(θ*)] = ∇_θ^2 L(θ*) =: H(θ*).   (14)

In practice, with the assumptions that the sample size N is large enough (i.e., indicating asymptotic behavior) and suitable smoothness conditions, when the current parameter θ_t is not far from the ground truth, the Fisher is close to the Hessian. Thus we can obtain the following approximate equality between the gradient covariance and the Hessian,

Σ(θ_t) = F(θ_t) − ∇_θ L(θ_t) ∇_θ L(θ_t)^T ≈ F(θ_t) ≈ H(θ_t).
The first approximation is due to the dominance of noise over the mean of gradient in the later stage of SGD optimization, which has been shown in (Shwartz-Ziv & Tishby, 2017). A similar experiment as (Shwartz-Ziv & Tishby, 2017) has been conducted to demonstrate this observation, which is left in Appendix due to the limit of space.
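The approximation above can be checked empirically. A sketch (with a hypothetical `per_example_grads` array of shape (N, D), one gradient per training example) estimates the Fisher and the gradient covariance of Eq. (1):

```python
import numpy as np

def empirical_covariance_and_fisher(per_example_grads):
    G = np.asarray(per_example_grads)   # shape (N, D)
    g0 = G.mean(axis=0)                 # full gradient
    fisher = G.T @ G / G.shape[0]       # E[g g^T]
    sigma = fisher - np.outer(g0, g0)   # gradient covariance, Eq. (1)
    return sigma, fisher
```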
In the following, we theoretically characterize the closeness between Σ and H in the context of one hidden layer neural networks; and show that the gradient covariance introduced by SGD indeed has more benefits than isotropic one in term of escaping from minima, provided some assumptions.
One hidden layer neural network with fixed output layer parameters. For a binary classification neural network with one hidden layer in the classic setup (with softmax and cross-entropy loss), we have the following result globally bounding the Fisher and the Hessian by each other.
Proposition 4 (The relationship between Fisher and Hessian in a one hidden layer neural network). Consider the binary classification problem with data {(x_i, y_i)}_{i∈I}, y ∈ {0, 1}, and typical (either population or empirical) loss L(θ) = E[φ ◦ f(x; θ)], where f denotes the output of the neural network, and φ denotes the cross-entropy loss with softmax,

φ(f(x), y) = −\left( y \log \frac{e^{f(x)}}{1 + e^{f(x)}} + (1 − y) \log \frac{1}{1 + e^{f(x)}} \right),   y ∈ {0, 1}.

If: (1) the neural network f has one hidden layer with piece-wise linear activation, and the parameters of the output layer are fixed during training; (2) the optimization happens on a set U such that f(x; θ) ∈ (−C, C), ∀θ ∈ U, ∀x, i.e., the output of the classifier is bounded during optimization; then we have the following relationship between the (either population or empirical) Fisher F and Hessian H almost everywhere:

e^{−C} F(θ) ⪯ H(θ) ⪯ e^{C} F(θ).

Here A ⪯ B means that (B − A) is semi-positive definite.
There are a few remarks on Proposition 4. Firstly, as shown in (Brutzkus et al., 2017), the considered neural networks in Proposition 4 are non-convex and have multiple minima, and thus it is still nontrivial to consider the escaping from minima. Secondly, the Proposition 4 holds in both population and empirical sense, since the proof does not distinguish the two circumstances. Thirdly, the bound
between F and H holds "globally" in the set U where the output f is bounded, rather than merely around the true global minima as discussed previously.
By Proposition 4, the following relationship between the gradient covariance and the Hessian can be derived.
Proposition 5 (The relationship between gradient covariance and Hessian in a one hidden layer neural network). Assume the conditions in Proposition 4 hold. Then for some small δ > 0 and for θ close enough to a minimum θ* (local or global),

u^T Σ u ≥ e^{−2(C+δ)} λ \frac{\operatorname{Tr} Σ}{\operatorname{Tr} H}   (15)

holds for any positive eigenvalue λ and its corresponding unit eigenvector u of the Hessian H.
As a direct corollary of Proposition 5, for such neural networks, the second condition Eq. (12) in Proposition 3 holds in a very loose sense.
Therefore, based on the discussion of the population loss around the true parameters and of one hidden layer neural networks with fixed output layer parameters, given the ill-conditioning of H due to the over-parameterization of modern deep networks, according to Proposition 3 we can conclude that the noise structure of SGD helps escape from sharp minima much faster than dynamics with isotropic noise, and converges to flatter solutions with high probability. These flat minima typically generalize well (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017; Neyshabur et al., 2017; Wu et al., 2017). Thus, we attribute the better generalization performance of SGD compared to GD, GLD and other dynamics with isotropic noise (Hoffer et al., 2017; Goyal et al., 2017; Keskar et al., 2017) to these properties.
In the following, we conduct a series of experiments systematically to verify our understanding on the behavior of escaping from minima and its regularization effects for different optimization dynamics.
5 EXPERIMENTS
To better understand the behavior of anisotropic noise as compared with isotropic noise, we introduce dynamics with different kinds of noise structure for the empirical study, as shown in Table 1.
Table 1: Compared dynamics defined in Eq. (3). For GLD dynamic, GLD diagonal, GLD Hessian and GLD 1st eigvec(H), σ_t is adjusted so that σ_t ε_t shares the same expected norm as that of SGD. For GLD leading, σ_t is the same as in SGD. Note that GLD 1st eigvec(H) achieves the best escaping efficiency, as our indicator suggests.

SGD: ε_t ∼ N(0, Σ_t^{sgd}). Σ_t^{sgd} is defined as in Eq. (1), and σ_t = η_t/√m.
GLD constant: ε_t ∼ N(0, I). σ_t is a tunable constant.
GLD dynamic: ε_t ∼ N(0, I). σ_t is adjusted so that σ_t ε_t shares the same expected norm as that of SGD.
GLD diagonal: ε_t ∼ N(0, diag(Σ_t)). The covariance diag(Σ_t) is the diagonal of the covariance of the SGD noise.
GLD leading: ε_t ∼ N(0, Σ̃_t). Σ̃_t = ∑_{i=1}^{k} γ_i v_i v_i^T, where γ_i, v_i are the k leading eigenvalues and corresponding eigenvectors of the covariance of the SGD noise (a low-rank approximation of Σ_t^{sgd}).
GLD Hessian: ε_t ∼ N(0, H̃_t). H̃_t is a low-rank approximation of the Hessian matrix of the loss L(θ) by its k leading eigenvalues and corresponding eigenvectors.
GLD 1st eigvec(H): ε_t ∼ N(0, λ_1 u_1 u_1^T). λ_1, u_1 are the maximal eigenvalue and its corresponding unit eigenvector of the Hessian matrix of the loss L(θ_t).
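The structured covariances in Table 1 can be sketched as follows (our reconstruction, not the released code), given the SGD noise covariance Sigma and the loss Hessian H as dense matrices:

```python
import numpy as np

def gld_diag(Sigma):
    """GLD diagonal: keep only the diagonal of the SGD noise covariance."""
    return np.diag(np.diag(Sigma))

def low_rank(M, k):
    """Keep the k leading eigenpairs (used for GLD leading and GLD Hessian)."""
    vals, vecs = np.linalg.eigh(M)
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]
    return (vecs * vals) @ vecs.T

def first_eigvec_cov(H):
    """GLD 1st eigvec(H): lambda_1 * u_1 u_1^T."""
    vals, vecs = np.linalg.eigh(H)
    return vals[-1] * np.outer(vecs[:, -1], vecs[:, -1])
```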
5.1 TWO-DIMENSIONAL TOY EXAMPLE
We design a 2-D toy example L(w1, w2) with two basins, a small one and a large one, corresponding to a sharp and flat minima, (1, 1) and (−1,−1), respectively, both of which are global minima.
Please refer to Appendix for the detailed constructions. We initialize the dynamics of interest with the sharp minimum (w1, w2) = (1, 1), and run them to study their behaviors escaping from this sharp minimum.
To explicitly control the noise magnitude, we only conduct experiments on GD, GLD const, GLD diag, GLD leading (with k = 2 = D in Table 1, i.e., the exact covariance of the SGD noise), GLD Hessian (k = 2) and GLD 1st eigvec(H). We adjust σ_t in each dynamics to force their noise to share the same expected squared norm as defined in Eq. (6). Figure 1(a) shows the trajectories of the dynamics escaping from the sharp minimum (1, 1) towards the flat one (−1, −1), while Figure 1(b) presents the success rate of escaping for each dynamic over 100 repeated experiments.
As shown in Figure 1, GLD 1st eigvec(H) achieves the highest success rate, indicating the fastest escaping speed from the sharp minimum. The dynamics with anisotropic noise aligned with Hessian well, including GLD 1st eigvec(H), GLD Hessian and GLD leading, greatly outperform GD, GLD const with isotropic noise, and GLD diag with noise poorly aligned with Hessian. These experiments are consistent with our theoretical analysis on Ornstein-Uhlenbeck process shown Proposition 2 and 3, demonstrating the benefits of anisotropic noise for escaping from sharp minima.
5.2 ONE HIDDEN LAYER NEURAL NETWORK WITH FIXED OUTPUT LAYER PARAMETERS
We empirically show that in one hidden layer neural network with fixed output layer parameters, the anisotropic noise induced by SGD indeed helps escape from sharp minima more efficiently than isotropic noise. Three networks are trained to binary classify 1, 000 linearly separable two-dimensional points. The number of hidden nodes for each network varies in {20, 200, 2000}. We plot the empirical indicator Tr (HΣ) in Figure 2. We can easily observe that as the increase of the number of hidden nodes, the ratio Tr(HΣ)
Tr(HΣ̄) is enlarged significantly, which is
consistent with the Eq. (13) described in Proposition 3.
5.3 PRACTICAL DATASETS
In this part, we conduct a series of experiments in real deep learning scenarios to demonstrate the behavior of the SGD noise and its implicit regularization effects. We construct a noisy training set based on the FashionMNIST dataset1. Concretely, the training set consists of 1000 images with correct labels, and another 200 images with random labels. All the test data have clean labels. A small LeNet-like network is utilized such that the spectrum decomposition over
1https://github.com/zalandoresearch/fashion-mnist
gradient covariance matrix and Hessian matrix are computationally feasible. The network consists of two convolutional layers and two fully-connected layers, with 11, 330 parameters in total.
We firstly run the standard gradient decent for 3000 iterations to arrive at the parameters θ∗GD near the global minima with near zero training loss and 100% training accuracy, which are typically sharp minima that generalize poorly (Neyshabur et al., 2017). And then all other compared methods are initialized with θ∗GD and run for optimization with the same learning rate ηt = 0.07 and same batch size m = 20 (if needed) for fair comparison2.
Verification that the SGD noise satisfies the conditions in Proposition 3. To see whether the noise of SGD in a real deep learning circumstance satisfies the two conditions in Proposition 3, we run the SGD optimizer initialized from θ*_GD, i.e., the sharp minimum found by GD. Figure 3(a) shows the first 400 eigenvalues of the Hessian at θ*_GD, from which we see that the 140th eigenvalue has already decayed to about 1% of the first eigenvalue. Note that the Hessian H ∈ R^{D×D}, D = 11330, thus H around θ*_GD approximately meets the ill-conditioning requirement in Proposition 3. Figure 3(b) shows the projection coefficient estimated by â = \frac{u_1^T Σ u_1 \operatorname{Tr} H}{λ_1 \operatorname{Tr} Σ} along the trajectory of SGD. The plot indicates that the projection coefficient is of a decent scale compared to D^{2d−1}, thus satisfying the second condition in Proposition 3. Therefore, Proposition 3 ensures that SGD escapes from the minimum θ*_GD faster than GLD in the order of O(D^{2d−1}), as shown in Figure 3(c). An interesting observation is that in the later stage of SGD optimization, Tr(HΣ) becomes significantly (10^7 times) smaller than in the beginning stage, implying that SGD has already converged to minima that are almost impossible to escape from. This phenomenon demonstrates the reasonableness of employing Tr(HΣ) as an empirical indicator of escaping efficiency.
(Figure 3 caption, continued) (b) The projection coefficient â = u_1^T Σ u_1 · Tr H / (λ_1 Tr Σ), as defined in Proposition 3. (c) Tr(H_t Σ_t) versus Tr(H_t Σ̄_t) during SGD optimization initialized from θ*_GD, where Σ̄_t = (Tr Σ_t / D) I denotes the isotropic noise with the same expected squared norm as the SGD noise.
Behaviors of different dynamics escaping from minima and their generalization effects. To compare the different dynamics in terms of escaping behavior and generalization performance, we run the dynamics initialized from the sharp minimum θ*_GD found by GD. The settings for each compared method are as follows. The hyperparameter σ for GLD const has been tuned to its optimum (σ = 0.001) by grid search. For GLD leading, we set k = 20 to balance the computational cost and approximation accuracy. As for GLD Hessian, to reduce the expensive evaluation of such a huge Hessian in each iteration, we set k = 20 and update the Hessian every 10 iterations. We adjust σ_t in GLD dynamic, GLD Hessian and GLD 1st eigvec(H) to guarantee that they share the same expected squared noise norm defined in Eq. (6) as that of SGD. We measure the expected sharpness of different minima as E_{ν∼N(0, δ^2 I)}[L(θ + ν)] − L(θ), as defined in (Neyshabur et al., 2017, Eq. (7)). The results are shown in Figure 4.
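The expected-sharpness metric can be sketched as follows (our illustration; `loss_fn` is a hypothetical callable evaluating the training loss at a flattened parameter vector):

```python
import numpy as np

def expected_sharpness(loss_fn, theta, delta=0.01, n_samples=1000):
    """Monte Carlo estimate of E_{nu ~ N(0, delta^2 I)}[L(theta + nu)] - L(theta)."""
    base = loss_fn(theta)
    perturbed = [loss_fn(theta + delta * np.random.randn(*theta.shape))
                 for _ in range(n_samples)]
    return np.mean(perturbed) - base
```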
As shown in Figure 4, SGD, GLD 1st eigvec(H), GLD leading and GLD Hessian successfully escape from the sharp minimum found by GD, while GLD const, GLD dynamic and GLD diag are trapped in the minimum. This demonstrates that methods with anisotropic noise "aligned" with the loss curvature help to find flatter minima that generalize well.
We also provide experiments on standard CIFAR-10 with VGG11 in Appendix.
2In fact, in our experiment, we tested equally spaced learning rates in the range [0.01, 0.1], and the final results are consistent with each other.
6 CONCLUSION
We theoretically investigate a general optimization dynamics with unbiased noise, which unifies various existing optimization methods, including SGD. We provide some novel results on the behavior of escaping from minima and its regularization effects. A novel indicator is derived for characterizing the escaping efficiency. Based on this indicator, two conditions are constructed to show which type of noise structure is superior to isotropic noise in terms of escaping. We then analyze the noise structure of SGD in deep learning and find that it indeed satisfies the two conditions, thus explaining the widely known observation that SGD can escape from sharp minima efficiently toward flat minima that generalize well. Various experimental evidence supports our arguments on the behavior of SGD and its effects on generalization. Our study also shows that isotropic noise helps little for escaping from sharp minima, due to the highly anisotropic nature of the landscape. This indicates that it is not sufficient to analyze SGD by treating it as an isotropic diffusion over the landscape (Zhang et al., 2017; Mou et al., 2017). A better understanding of this out-of-equilibrium behavior (Chaudhari & Soatto, 2017) is in demand.
A PROOFS OF PROPOSITIONS IN MAIN PAPER
A.1 PROOF OF PROPOSITION 1
Proof. The "mild smoothness assumptions" refers that Lt = L(θt) ∈ C2. Then the Ito’s lemma holds (Øksendal, 2003).
And by Ito’s lemma, the SDE of Lt is
dLt =
( −∇LT∇L+ 1
2 Tr
( Σ 1 2 t HtΣ 1 2 t )) dt+∇LTΣ 1 2 t dWt
= ( −∇LT∇L+ 1
2 Tr (HtΣt)
) dt+∇LTΣ 1 2 t dWt.
Taking expectation with respect to the distribution of θt,
dELt = E ( −∇LT∇L+ 1
2 Tr(HtΣt)
) dt, (16)
for the expectation of Brownian motion is zero. Thus the solution of EYt is,
ELt = L0 − ∫ t
0
E ( ∇LT∇L ) + ∫ t 0 1 2 ETr(HtΣt) dt.
A.2 PROOF OF PROPOSITION 2
Proof. Without loss of generality, we assume that L_0 = 0.
For a multivariate Ornstein-Uhlenbeck process with constant initial condition θ_0 = 0, θ_t follows a multivariate Gaussian distribution (Øksendal, 2003).
Consider the change of variables θ → φ(θ, t) = e^{Ht} θ_t. Here, for a symmetric matrix A with eigenvalues λ_1, ..., λ_n and eigenvector matrix U,

e^{A} := U \operatorname{diag}(e^{λ_1}, ..., e^{λ_n}) U^T.

Note that with this notation, \frac{d e^{Ht}}{dt} = H e^{Ht}. Applying Itô's lemma to φ, the drift terms cancel and we have

dφ(θ_t, t) = e^{Ht} Σ^{1/2} dW_t,

which we can integrate from 0 to t to obtain e^{Ht} θ_t = \int_0^t e^{Hs} Σ^{1/2} dW_s; multiplying both sides by e^{−Ht} gives

θ_t = \int_0^t e^{H(s−t)} Σ^{1/2} dW_s.
The expectation of θ_t is zero. By Itô's isometry (Øksendal, 2003), the covariance of θ_t is

E θ_t θ_t^T = E\left[ \int_0^t e^{H(s−t)} Σ^{1/2} dW_s \left( \int_0^t e^{H(r−t)} Σ^{1/2} dW_r \right)^T \right]
           = E\left[ \int_0^t e^{H(s−t)} Σ^{1/2} Σ^{1/2} e^{H(s−t)} ds \right]
           = \int_0^t e^{H(s−t)} Σ e^{H(s−t)} ds,   (for H and Σ are both constant.)
Thus,

E L(θ_t) = \frac{1}{2} E \operatorname{Tr}\left( θ_t^T H θ_t \right) = \frac{1}{2} \operatorname{Tr}\left( H \, E θ_t θ_t^T \right)
         = \frac{1}{2} \int_0^t \operatorname{Tr}\left( H e^{H(s−t)} Σ e^{H(s−t)} \right) ds
         = \frac{1}{2} \int_0^t \operatorname{Tr}\left( e^{H(s−t)} H Σ e^{H(s−t)} \right) ds   (for H is symmetric and commutes with e^{H(s−t)})
         = \frac{1}{2} \int_0^t \operatorname{Tr}\left( e^{2H(s−t)} H Σ \right) ds   (by the cyclic property of the trace)
         = \frac{1}{2} \operatorname{Tr}\left( \frac{1}{2} H^{−1} \left( I − e^{−2Ht} \right) H Σ \right)
         = \frac{1}{4} \operatorname{Tr}\left( \left( I − e^{−2Ht} \right) Σ \right).

The last approximation in Eq. (10) is by Taylor's expansion.
A.3 PROOF OF PROPOSITION 3
Proof. Firstly, Tr(HΣ) has the decomposition Tr(HΣ) = \sum_{i=1}^{D} λ_i u_i^T Σ u_i.
Secondly, compute Tr(HΣ) and Tr(HΣ̄) respectively,

\operatorname{Tr}(HΣ) ≥ λ_1 u_1^T Σ u_1 ≥ a λ_1^2 \frac{\operatorname{Tr} Σ}{\operatorname{Tr} H}, \qquad \operatorname{Tr}(H\bar{Σ}) = \frac{\operatorname{Tr} Σ}{D} \operatorname{Tr} H,

and bound their quotient,

\frac{\operatorname{Tr}(HΣ)}{\operatorname{Tr}(H\bar{Σ})} ≥ \frac{a λ_1^2 D}{(\operatorname{Tr} H)^2} ≥ \frac{a λ_1^2 D}{\left( k λ_1 + (D − k) D^{−d} λ_1 \right)^2} = O\left( a D^{2d−1} \right).   (17)

The proof is finished.
A.4 PROOF OF PROPOSITION 4
Proof. First compute the gradient and Hessian of φ with respect to f,

\frac{\partial φ}{\partial f} = \frac{e^{f}}{1 + e^{f}} − y = \begin{cases} \frac{e^{f}}{1+e^{f}} > 0 & y = 0, \\ −\frac{1}{1+e^{f}} < 0 & y = 1, \end{cases} \qquad \frac{\partial^2 φ}{\partial f^2} = \frac{e^{f}}{(1 + e^{f})^2}.

And note the Gauss-Newton decomposition for functions of the form L = φ ◦ f,

H = E_{(x,y)} \frac{\partial^2 ℓ((x,y); θ)}{\partial θ^2} = E_{(x,y)} \frac{\partial^2 φ}{\partial f^2} \frac{\partial f}{\partial θ} \frac{\partial f^T}{\partial θ} + E_{(x,y)} \frac{\partial φ}{\partial f} \frac{\partial^2 f}{\partial θ^2}.

Since the output layer parameters of f are fixed and the activation functions are piece-wise linear, f(x; θ) is a piece-wise linear function of its parameters θ. Therefore \frac{\partial^2 f}{\partial θ^2} = 0 a.e., and H = E_{(x,y)} \frac{\partial^2 φ}{\partial f^2} \frac{\partial f}{\partial θ} \frac{\partial f^T}{\partial θ}.

It is easy to check that e^{−C} \left( \frac{\partial φ}{\partial f} \right)^2 ≤ \frac{\partial^2 φ}{\partial f^2} ≤ e^{C} \left( \frac{\partial φ}{\partial f} \right)^2 on the set U where |f| < C. Thus,

H = E_{(x,y)} \frac{\partial^2 φ}{\partial f^2} \frac{\partial f}{\partial θ} \frac{\partial f^T}{\partial θ} ⪯ E_{(x,y)} e^{C} \left( \frac{\partial φ}{\partial f} \right)^2 \frac{\partial f}{\partial θ} \frac{\partial f^T}{\partial θ} = E_{(x,y)} e^{C} \left( \frac{\partial φ}{\partial f} \frac{\partial f}{\partial θ} \right)\left( \frac{\partial φ}{\partial f} \frac{\partial f}{\partial θ} \right)^T = e^{C} F,

H = E_{(x,y)} \frac{\partial^2 φ}{\partial f^2} \frac{\partial f}{\partial θ} \frac{\partial f^T}{\partial θ} ⪰ E_{(x,y)} e^{−C} \left( \frac{\partial φ}{\partial f} \right)^2 \frac{\partial f}{\partial θ} \frac{\partial f^T}{\partial θ} = E_{(x,y)} e^{−C} \left( \frac{\partial φ}{\partial f} \frac{\partial f}{\partial θ} \right)\left( \frac{\partial φ}{\partial f} \frac{\partial f}{\partial θ} \right)^T = e^{−C} F.
A.5 PROOF OF PROPOSITION 5
Proof. For simplicity, we define g := ∇ℓ, g_0 := ∇L = E∇ℓ, and write g = g_0 + ε. The gradient covariance and the Fisher have the following relationship,

F = E[g g^T] = E[(g_0 + ε)(g_0 + ε)^T] = g_0 g_0^T + E[ε ε^T] = g_0 g_0^T + Σ.

Applying Taylor's expansion to g_0(θ),

g_0(θ) = g_0(θ^*) + H(θ^*)(θ − θ^*) + o(θ − θ^*) = H(θ^*)(θ − θ^*) + o(θ − θ^*).

Hence ‖g_0(θ)‖_2^2 ≤ ‖H‖_2^2 ‖θ − θ^*‖_2^2 + o(‖θ − θ^*‖_2^2). Therefore, with the condition ‖θ − θ^*‖_2 ≤ \frac{\sqrt{δ u^T F u}}{‖H‖_2}, we have ‖g_0(θ)‖_2^2 ≤ δ u^T F u + o(|δ|). Thus, using u^T g_0 g_0^T u = (u^T g_0)^2 ≤ ‖g_0‖_2^2 for any unit vector u (Cauchy-Schwarz),

\frac{u^T Σ u}{\operatorname{Tr} Σ} = \frac{u^T F u − u^T g_0 g_0^T u}{\operatorname{Tr} F − \operatorname{Tr}(g_0 g_0^T)} ≥ \frac{u^T F u − ‖g_0‖_2^2}{\operatorname{Tr} F − ‖g_0‖_2^2} ≥ \frac{u^T F u − ‖g_0‖_2^2}{\operatorname{Tr} F} = \frac{u^T F u}{\operatorname{Tr} F}\left( 1 − \frac{‖g_0‖_2^2}{u^T F u} \right) ≥ \frac{u^T F u}{\operatorname{Tr} F}\left( 1 − δ − o(|δ|) \right) ≥ \frac{u^T F u}{\operatorname{Tr} F} e^{−2δ},

for δ small enough.
On the other hand, Proposition 4 indicates that e^{−C} F ⪯ H ⪯ e^{C} F, which means ∀u, u^T(e^{C}F − H)u ≥ 0 and \operatorname{Tr}(H − e^{−C}F) ≥ 0. Thus

\frac{u^T F u}{\operatorname{Tr} F} ≥ \frac{u^T (e^{−C} H) u}{\operatorname{Tr}(e^{C} H)}.

Therefore, for λ, u being a positive eigenvalue and the corresponding unit eigenvector of H, we have

\frac{u^T F u}{\operatorname{Tr} F} ≥ e^{−2C} \frac{λ}{\operatorname{Tr} H}, \qquad \frac{u^T Σ u}{\operatorname{Tr} Σ} ≥ \frac{u^T F u}{\operatorname{Tr} F} e^{−2δ} ≥ e^{−2(C+δ)} \frac{λ}{\operatorname{Tr} H}.
B ADDITIONAL EXPERIMENTS
B.1 DOMINANCE OF NOISE OVER GRADIENT
Figure 5 shows the comparison of the gradient mean and the expected norm of the noise during training with SGD. The dataset and model are the same as in the FashionMNIST experiments in the main paper (Section C.2). From Figure 5, we see that in the later stage of SGD optimization, the noise indeed dominates the gradient.
These experiments are implemented by TensorFlow 1.5.0.
B.2 THE FIRST 50 ITERATIONS OF FASHIONMNIST EXPERIMENTS IN MAIN PAPER
Figure 6 shows the first 50 iterations of FashionMNIST experiments in main paper. We observe that SGD, GLD 1st eigvec(H), GLD Hessian and GLD leading successfully escape from the sharp minima found by GD, while GLD diag, GLD dynamic, GLD const and GD do not.
These experiments are implemented by TensorFlow 1.5.0.
B.3 ADDITIONAL EXPERIMENTS ON STANDARD CIFAR-10 AND VGG11
Dataset Standard CIFAR-10 dataset without data augmentation.
Model Standard VGG11 network without any regularizations including dropout, batch normalization, weight decay, etc. The total number of parameters of this network is 9, 750, 922.
Training details The learning rate η_t = 0.05 is fixed for all optimizers and tuned for the best generalization performance of GD. The batch size of SGD is m = 100. The noise standard deviation of GLD constant is σ = 10^{-3}, tuned to its best value. Due to computational limitations, we only conduct experiments on GD, GLD const, GLD dynamic, GLD diag and SGD.
Estimation of Sharpness The sharpness is estimated by

\frac{1}{M}\sum_{j=1}^{M} L(θ + ν_j) − L(θ),   ν_j ∼ N(0, δ^2 I),

with M = 100 and δ = 0.01.
Experiments Similar experiments are conducted as in main paper for CIFAR-10 and VGG11, as shown in Figure 7. The observations and conclusions consist with main paper.
These experiments are implemented by PyTorch 0.3.0.
C DETAILED SETUPS FOR EXPERIMENTS IN MAIN PAPER
C.1 TWO-DIMENSIONAL TOY EXAMPLE
Loss Surface The loss surface L(w_1, w_2) is constructed by

s_1 = w_1 − 1 − x_1,   s_2 = w_2 − 1 − x_2,
ℓ(w_1, w_2; x_1, x_2) = \min\{ 10(s_1 \cos θ − s_2 \sin θ)^2 + 100(s_1 \cos θ + s_2 \sin θ)^2, \; (w_1 − x_1 + 1)^2 + (w_2 − x_2 + 1)^2 \},
L(w_1, w_2) = \frac{1}{N}\sum_{k=1}^{N} ℓ(w_1, w_2; x_1^k, x_2^k),

where θ = \frac{π}{4}, N = 100, x^k ∼ N(0, Σ), Σ = \begin{pmatrix} \cos θ & \sin θ \\ −\sin θ & \cos θ \end{pmatrix}.

Note that Σ is the inverse of the Hessian of the quadratic form generating the sharp minimum. The 3-dimensional plot of the loss surface is shown in Figure 8.
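The construction can be sketched as follows (our reading of the text above; the covariance of the data noise is only partially legible, so a small placeholder scale is used for the samples):

```python
import numpy as np

angle = np.pi / 4
N = 100
X_data = 0.01 * np.random.randn(N, 2)   # placeholder for x^k ~ N(0, Sigma); scale assumed

def per_example_loss(w1, w2, x1, x2):
    s1, s2 = w1 - 1 - x1, w2 - 1 - x2
    sharp = (10 * (s1 * np.cos(angle) - s2 * np.sin(angle)) ** 2
             + 100 * (s1 * np.cos(angle) + s2 * np.sin(angle)) ** 2)   # sharp basin near (1, 1)
    flat = (w1 - x1 + 1) ** 2 + (w2 - x2 + 1) ** 2                     # flat basin near (-1, -1)
    return min(sharp, flat)

def L(w1, w2):
    return np.mean([per_example_loss(w1, w2, x1, x2) for x1, x2 in X_data])
```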
Hyperparameters All learning rates are equal to 0.005. All dynamics concerned are tuned to share the same expected square norm, 0.01. The number of iteration during one run is 500.
These experiments are implemented by PyTorch 0.3.0.
C.2 FASHIONMNIST WITH CORRUPTED LABELS
Dataset Our training set consists of 1200 examples randomly sampled from the original FashionMNIST training set, and we further assign 200 of them randomly wrong labels. The test set is the same as the original FashionMNIST test set.
Model Network architecture: input ⇒ conv1 ⇒ max_pool ⇒ ReLU ⇒ conv2 ⇒ max_pool ⇒ ReLU ⇒ fc1 ⇒ ReLU ⇒ fc2 ⇒ output. Both convolutional layers use 5 × 5 kernels with 10 channels and no padding. The number of hidden units between the fully connected layers is 50. The total number of parameters of this network is 11,330.
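A sketch of this network in PyTorch (the original experiments used TensorFlow 1.5.0; the layer layout below reproduces the stated 11,330-parameter count for 28×28 FashionMNIST inputs):

```python
import torch
import torch.nn as nn

class SmallLeNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)   # 5x5 kernels, 10 channels, no padding
        self.conv2 = nn.Conv2d(10, 10, kernel_size=5)
        self.pool = nn.MaxPool2d(2)
        self.fc1 = nn.Linear(10 * 4 * 4, 50)           # 50 hidden units
        self.fc2 = nn.Linear(50, num_classes)
        # Parameter count: 260 + 2510 + 8050 + 510 = 11,330 in total.

    def forward(self, x):
        x = torch.relu(self.pool(self.conv1(x)))
        x = torch.relu(self.pool(self.conv2(x)))
        x = torch.relu(self.fc1(x.flatten(1)))
        return self.fc2(x)
```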
[Figure 8: 3-dimensional plot of the toy loss surface, with axes w1, w2 and loss.]
Training details
Estimation of Sharpness The sharpness is estimated by
$$\frac{1}{M}\sum_{j=1}^{M} L(\theta+\nu_j) - L(\theta), \qquad \nu_j \sim \mathcal{N}(0,\delta^2 I),$$
with M = 1,000 and δ = 0.01.
These experiments are implemented by TensorFlow 1.5.0. | 1. What is the main contribution of the paper regarding SGD optimization for training deep networks?
2. What are the strengths of the paper, particularly in its theoretical analysis?
3. Do you have any concerns or questions about the paper, especially regarding its proof and experimental design?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
The paper studies the benefit of an anisotropic gradient covariance matrix in SGD optimization for training deep network in terms of escaping sharp minima (which has been discussed to correlate with poor generalization in recent literature).
In order to do so, SGD is studied as a discrete approximation of a stochastic differential equation (SDE). To analyze the benefits of the anisotropic nature and remove the confounding effect of the scale of noise, the scale of noise in the SDE is considered fixed during the analysis. The authors identify the expected loss around a minimum as the efficiency of escaping the minimum and show its relation with the Hessian and gradient covariance at the minimum. It is then shown that when all the positive eigenvalues of the covariance matrix concentrate along the top eigenvector and this eigenvector is aligned with the top eigenvector of the Hessian of the loss w.r.t. the parameters, SGD is most efficient at escaping sharp minima. These characteristics are analytically shown to hold true for a 1 hidden layer network and experiments are conducted on toy and real datasets to verify the theoretical predictions.
Comments:
I find the main claim of the paper intuitive-- at any particular minimum, if noise in SGD is more aligned with the direction along which loss surface has a large curvature (thus the minimum is sharp along this direction), SGD will escape this minimum more efficiently. On the other hand, isotropic noise will be wasteful because a sample from isotropic noise distribution may point along flat directions of the loss even though there may exist other directions along which the loss curvature is large. However, I have several concerns which I find difficult to point out because *many equations are not numbered*.
1. In proposition 2, it is assumed under the argument of no loss of generality that both the loss at the minimum L_0=0 and the corresponding theta_0 =0. Can the authors clarify how both can be simultaneously true without any loss of generality?
2. A number of steps in proposition 2 are missing which makes it difficult to verify. When applying Ito's lemma and taking the integral from 0 to t, it is not mentioned that both sides are also multiplied with the inverse of exp(Ht).
3. In proposition 2, when computing E[L(theta_t)] on page 12, the equalities after line 3 are not clear how they are derived. Please clarify or update the proof with sufficient details.
4. It is mentioned below proposition 2 that the maximum of Tr(H. Sigma) under constraint (6) is achieved when Sigma* = Tr(Sigma). lambda_1 u1.u1^T, where lambda_1 is the top eigenvalue of H. How is lambda_1 a factor in Sigma*? I think Sigma* should be Tr(Sigma). u1.u1^T because this way the sum of eigenvalues of Sigma remains unchanged which is what constraint (6) states.
5. The proof of Proposition 5 is highly unclear. Where did the inequality ||g_0(theta)||^2 <= delta.u^TFu + o(|delta|) come from? Also, the inequality right below it involves the assumption that u^T g_0 g_0^T u <= ||g_0||^2 and no justification has been provided behind this assumption.
Regarding experiments, the toy experiment in section 5.1 is interesting, but it is not mentioned what network architecture is used in this experiment. I found the experiments in section 5.3 and specifically Fig 4 and Fig 7 insightful. I do have a concern regarding this experiment though. In the experiment on FashionMNIST in Fig 4, it can be seen that both SGD and GLD 1st eigvec escapes sharp minimum, and this is coherrent with the theory. However, for the experiment on CIFAR-10 in Fig 7, experiment with GLD 1st eigvec is missing. Can the authors show the result for GLD 1st eigvec on CIFAR-10? I think it is an important verification of the theory and CIFAR-10 is a more realistic dataset compared with FashionMNIST.
A few minor points:
1. In the last paragraph of page 3, it is mentioned that the probability of escaping can be controlled by the expected loss around minimum due to Markov's inequality. This statement is inaccurate. A large expected loss upper bounds the escaping probability, it does not control it.
2. Section 4 is titled "The anisotropic noise of SGD in deep networks", but the sections analyses a 1 hidden layes network. This seems inappropriate.
3. In the conclusion section, it is mentioned that the theory in the paper unifies various existing optimization mentods. Please clarify.
Overall, I found the argument of the paper somewhat interesting but I am not fully convinced because of the concerns mentioned above. |
ICLR | Title
The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Minima and Regularization Effects
Abstract
Understanding the behavior of stochastic gradient descent (SGD) in the context of deep neural networks has raised lots of concerns recently. Along this line, we theoretically study a general form of gradient based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. Through investigating this general optimization dynamics, we analyze the behavior of SGD on escaping from minima and its regularization effects. A novel indicator is derived to characterize the efficiency of escaping from minima through measuring the alignment of noise covariance and the curvature of loss function. Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in term of escaping efficiency. We further show that the anisotropic noise in SGD satisfies the two conditions, and thus helps to escape from sharp and poor minima effectively, towards more stable and flat minima that typically generalize well. We verify our understanding through comparing this anisotropic diffusion with full gradient descent plus isotropic diffusion (i.e. Langevin dynamics) and other types of position-dependent noise.
1 INTRODUCTION
As a successful learning algorithm, stochastic gradient descent (SGD) was originally adopted for dealing with the computational bottleneck of training neural networks with large-scale datasets (Bottou, 1991). Its empirical efficiency and effectiveness have attracted lots of attention. And thus, SGD and its variants have become standard workhorse for learning deep models. Besides the aspect of empirical efficiency, recently, researchers started to analyze the optimization behaviors of SGD and its impacts on generalization.
The optimization properties of SGD have been studied from various perspectives. The convergence behaviors of SGD for simple one hidden layer neural networks were investigated in (Li & Yuan, 2017; Brutzkus et al., 2017). In non-convex settings, the characterization of how SGD escapes from stationary points, including saddle points and local minima, was analyzed in (Daneshmand et al., 2018; Jin et al., 2017; Hu et al., 2017).
On the other hand, in the context of deep learning, researchers realized that the noise introduced by SGD impacts the generalization, thanks to the research on the phenomenon that training with a large batch could cause a significant drop of test accuracy (Keskar et al., 2017). Particularly, several works attempted to investigate how the magnitude of the noise influences the generalization during the process of SGD optimization, including the batch size and learning rate (Hoffer et al., 2017; Goyal et al., 2017; Chaudhari & Soatto, 2017; Jastrzębski et al., 2017). Another line of research interpreted SGD from a Bayesian perspective. In (Mandt et al., 2017; Chaudhari & Soatto, 2017), SGD was interpreted as performing variational inference, where certain entropic regularization involves to prevent overfitting. And the work (Smith & Le, 2018) tried to provide an understanding based on model evidence. These explanations are compatible with the flat/sharp minima argument (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017), since Bayesian inference tends to targeting the region with large probability mass, corresponding to the flat minima.
However, when analyzing the optimization behavior and regularization effects of SGD, most of existing works only assume the noise covariance of SGD is constant or upper bounded by some
constant, and what role the noise structure of stochastic gradient plays in optimization and generalization was rarely discussed in literature.
In this work, we theoretically study a general form of gradient-based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. By investigating this general dynamics, we analyze how the noise structure of SGD influences the escaping behavior from minima and its regularization effects. Several novel theoretical results and empirical justifications are made.
1. We derive a key indicator to characterize the efficiency of escaping from minima through measuring the alignment of noise covariance and the curvature of loss function. Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in term of escaping efficiency;
2. We further justify that SGD in the context of deep neural networks satisfies these two conditions, and thus provide a plausible explanation why SGD can escape from sharp minima more efficiently, converging to flat minima with a higher probability. Moreover, these flat minima typically generalize well according to various works (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017; Neyshabur et al., 2017; Wu et al., 2017). We also show that Langevin dynamics with well tuned isotropic noise cannot beat SGD, which further confirms the importance of noise structure of SGD;
3. A large number of experiments are designed systematically to justify our understanding on the behavior of the anisotropic diffusion of SGD. We compare SGD with full gradient descent with different types of diffusion noise, including isotropic and positiondependent/independent noise. All these comparisons demonstrate the effectiveness of anisotropic diffusion for good generalization in training deep networks.
The remaining of the paper is organized as follows. In Section 2, we introduce the background of SGD and a general form of optimization dynamics of interest. We then theoretically study the behaviors of escaping from minima in Ornstein-Uhlenbeck process in Section 3, and establish two conditions for characterizing the noise structure that affects the escaping efficiency. In Section 4, we show that the noise of SGD in the context of deep learning meets the two conditions, and thus explains its superior efficiency of escaping from sharp minima over other dynamics with isotropic noise. Various experiments are conducted for verifying our understanding in Section 5, and we conclude the paper in Section 6.
2 BACKGROUND
In general, supervised learning usually involves an optimization process of minimizing an empirical loss over training data, L(θ) := 1/N ∑N i=1 `(f(xi; θ), yi), where {(xi, yi)}Ni=1 denotes the training set with N i.i.d. samples, the prediction function f is often parameterized by θ ∈ RD, such as deep neural networks. And `(·, ·) is the loss function, such as mean squared error and cross entropy, typically corresponding to certain negative log likelihood. Due to the over parameterization and non-convexity of the loss function in deep networks, there exist multiple global minima, exhibiting diverse generalization performance. We call those solutions generalizing well good solutions or minima, and vice versa.
Gradient descent and its stochastic variants A typical approach to minimize the loss function is gradient descent (GD), the dynamics of which in each iteration t is, θt+1 = θt − ηtg0(θt), where g0(θt) = ∇θL(θt) denotes the full gradient and ηt denotes the learning rate. In non-convex optimization, a more useful kind of gradient based optimizers act like GD with an unbiased noise, including gradient Langevin dynamics (GLD), θt+1 = θt − ηtg0(θt) + σt t, t ∼ N (0, I), and stochastic gradient descent (SGD), during each iteration t of which, a minibatch of training samples with size m are randomly selected, with index set Bt ⊂ {1, 2, . . . , N}, and a stochastic gradient is evaluated based on the chosen minibatch, g̃(θt) = ∑ i∈Bt ∇θ`(f(xi; θt), yi)/m, which is an unbiased estimator of the full gradient g0(θt). Then, the parameters are updated with some learning rate ηt as θt+1 = θt − ηtg̃(θt). Denote g(θ) = ∇θ`((f(x; θ), y), the gradient for loss with a single data point (x, y), and assume that the size of minibatch is large enough for the central limit theorem to
hold, and thus g̃(θt) follows a Gaussian distribution (Mandt et al., 2017; Li et al., 2017),
$$\tilde g(\theta_t) \sim \mathcal{N}\!\Big(g_0(\theta_t),\ \tfrac{1}{m}\Sigma(\theta_t)\Big), \quad \text{where}\quad \Sigma(\theta_t) \approx \frac{1}{N}\sum_{i=1}^{N}\big(g(\theta_t;x_i)-g_0(\theta_t)\big)\big(g(\theta_t;x_i)-g_0(\theta_t)\big)^T. \tag{1}$$
Note that the covariance matrix Σ depends on the model architecture, dataset and the current parameter θt. Now we can rewrite the update of SGD as
$$\theta_{t+1} = \theta_t - \eta_t g_0(\theta_t) + \frac{\eta_t}{\sqrt{m}}\,\epsilon_t, \qquad \epsilon_t \sim \mathcal{N}\big(0,\ \Sigma(\theta_t)\big). \tag{2}$$
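For small networks, the covariance in Eq. (1) can be estimated directly from per-sample gradients. A rough PyTorch sketch follows; the model, loss and data are placeholders, and for large networks one would keep only a diagonal or low-rank summary instead of the full matrix:

```python
import torch

def sgd_noise_covariance(model, loss_fn, data, labels):
    """Empirical covariance Sigma(theta) of per-sample gradients, as in Eq. (1)."""
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grad(x, y):
        model.zero_grad()
        loss_fn(model(x), y).backward()
        return torch.cat([p.grad.detach().flatten() for p in params])

    grads = torch.stack([flat_grad(data[i:i + 1], labels[i:i + 1])
                         for i in range(data.shape[0])])
    g0 = grads.mean(dim=0)                     # full gradient g_0(theta)
    centered = grads - g0
    return centered.t() @ centered / grads.shape[0]
```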
Inspired by GLD and SGD, we may consider a general kind of optimization dynamics, namely, gradient descent with unbiased noise,
$$\theta_{t+1} = \theta_t - \eta_t g_0(\theta_t) + \sigma_t \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, \Sigma_t). \tag{3}$$
For small enough constant learning rate ηt = η, the above iteration in Eq. (3) can be treated as the numerical discretization of the following stochastic differential equation (Li et al., 2017; Jastrzębski et al., 2017; Chaudhari & Soatto, 2017),
$$d\theta_t = -\nabla_\theta L(\theta_t)\, dt + \sqrt{\eta\sigma_t^2\,\Sigma_t}\; dW_t. \tag{4}$$
Considering √ ησ2tΣt as the coefficient of noise term, existing works (Hoffer et al., 2017; Jastrzębski et al., 2017) studied the influence of noise magnitude of SGD on generalization, i.e. ησ2t = η/m.
In this work, we focus on studying the benefits of anisotropic structure of Σt in SGD helping escape from minima by bridging the covariance matrix with the Hessian of the loss surface, and its implicit regularization effects on generalization, especially in deep learning context. For the purpose of eliminating the influence of the noise magnitude, we constrain it to be a constant when studying different structures of noise covariance. The noise magnitude could be evaluated as the expectation of the squared norm of the noise vector,
$$\mathbb{E}\big[(\sqrt{\eta}\,\sigma_t\epsilon_t)^T(\sqrt{\eta}\,\sigma_t\epsilon_t)\big] = \eta\sigma_t^2\,\mathbb{E}[\epsilon_t^T\epsilon_t] = \eta\sigma_t^2\operatorname{Tr}\mathbb{E}[\epsilon_t\epsilon_t^T] = \eta\sigma_t^2\operatorname{Tr}\Sigma_t. \tag{5}$$
Thus, we introduce the following constraint,
given time t, ησ2t Tr (Σt) is constant. (6)
From the statistical physics point of view, Tr(ησ2tΣt) characterizes the kinetic energy (Gardiner), thus it is natural to force the energy to be unchanging, otherwise it is trivial that the higher the energy is, the less stable the system is.
For simplicity, we absorb ησ2t into Σt, denoting ησ 2 tΣt as Σt. If not pointed out, the subscript t of matrix Σt is omitted to emphasize that we are fixing t and discussing the varying structure of Σ.
3 THE BEHAVIORS OF ESCAPING FROM MINIMA IN ORNSTEIN-UHLENBECK PROCESS
For a general loss function L(θ) = EX`X(θ) (the expectation could be either population or empirical), where X denotes data example and θ denoted parameters to be optimized, under suitable smoothness assumptions, the SDE associated with the gradient variant optimizer as shown in Eq. (4) can be written as follows (Li et al., 2017; Jastrzębski et al., 2017; Chaudhari & Soatto, 2017; Hu et al., 2017), with little abuse of notation,
dθt = −∇θL(θt) dt+ Σ 1 2 t dWt. (7)
Let L0 = L(θ0) be one of the minimal values of L(θ), then for a fixed t small enough (such that Lt−L0 ≥ 0), Eθt [Lt−L0] characterizes the efficiency of θ escaping from the minimum θ0 of L(θ). It is natural to measure the escaping efficiency using E[Lt − L0] since it characterizes the increase of the potential, i.e., the increase of the loss L. And also note that Lt − L0 ≥ 0, for any δ > 0, the escaping probability P (Lt − L0 ≥ δ) can be controlled by the expectation E[Lt − L0] since by Markov’s inequality, we have P (Lt − L0 ≥ δ) ≤ E[Lt−L0]δ .
Proposition 1 (Escaping efficiency for general process). For the process (7), provided mild smoothness assumptions, the escaping efficiency from the minimum θ0 is,
$$\mathbb{E}[L_t - L_0] = -\int_0^t \mathbb{E}\big[\nabla L^T\nabla L\big]\, dt + \int_0^t \tfrac{1}{2}\,\mathbb{E}\operatorname{Tr}(H_t\Sigma_t)\, dt, \tag{8}$$
where Ht denotes the Hessian of L(θt) at θt.
We provide the proof in Appendix, and the same for the other propositions.
The escaping efficiency for general processes is hard to analyze due to the intractability of the integral in Eq. (8). However, we may consider the second-order approximation locally near the minimum θ0, where $L(\theta) \approx L_0 + \frac{1}{2}(\theta-\theta_0)^T H (\theta-\theta_0)$. Without losing generality, we suppose θ0 = 0. Further, suppose that H is a positive definite matrix and the diffusion covariance Σt = Σ is constant in t. Then the SDE (7) becomes an Ornstein-Uhlenbeck process,
$$d\theta_t = -H\theta_t\, dt + \Sigma^{\frac{1}{2}}\, dW_t, \qquad \theta_0 = 0. \tag{9}$$
Proposition 2 (Escaping efficiency of Ornstein-Uhlenbeck process). For Ornstein-Uhlenbeck process (9), with t small enough, the escaping efficiency from minimum θ0 = 0 is,
$$\mathbb{E}[L_t - L_0] = \frac{1}{4}\operatorname{Tr}\big(\big(I - e^{-2Ht}\big)\Sigma\big) \approx \frac{t}{2}\operatorname{Tr}(H\Sigma). \tag{10}$$
Inspired by Proposition 1 and Proposition 2, we propose Tr(HΣ) as an empirical indicator measuring the efficiency with which a stochastic process escapes from minima. Now we turn to analyzing which kind of noise covariance structure Σ will benefit escaping from sharp minima, under the constraint Eq. (6).
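As a quick sanity check of Proposition 2 and of this indicator, the sketch below simulates the Ornstein-Uhlenbeck process (9) and compares a Monte-Carlo estimate of E[L_t − L_0] (with L = ½ θᵀHθ and L_0 = 0) against (t/2)·Tr(HΣ); H and Σ are any user-supplied positive semi-definite matrices, and the step size and horizon are illustrative choices:

```python
import numpy as np

def escaping_efficiency_check(H, Sigma, t=0.05, dt=1e-3, n_paths=2000, seed=0):
    """Simulate d theta = -H theta dt + Sigma^{1/2} dW from theta_0 = 0 and compare
    the empirical E[L_t - L_0] with the indicator t/2 * Tr(H Sigma)."""
    rng = np.random.default_rng(seed)
    D = H.shape[0]
    sqrt_sigma = np.linalg.cholesky(Sigma + 1e-12 * np.eye(D))
    theta = np.zeros((n_paths, D))
    for _ in range(int(t / dt)):
        noise = rng.standard_normal((n_paths, D)) @ sqrt_sigma.T   # ~ N(0, Sigma)
        theta = theta - dt * theta @ H.T + np.sqrt(dt) * noise
    loss = 0.5 * np.einsum('nd,de,ne->n', theta, H, theta)
    return loss.mean(), 0.5 * t * np.trace(H @ Sigma)
```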
Firstly, for the isotropic loss surface, i.e., H = λI, the escaping efficiency is $\mathbb{E}[L_t - L_0] = \frac{\lambda t}{2}\operatorname{Tr}\Sigma$, which is invariant under the constraint that Tr Σ is constant (Eq. (6)). Thus it is only nontrivial to study the impact of the noise structure when the Hessian of the loss surface is anisotropic.
Secondly, H and Σ being semi-positive definite, to achieve the maximum of Tr(HΣ) under constraint (6), Σ should be Σ∗ = (Tr Σ) · λ1u1uT1 , where λ1, u1 are the maximal eigenvalue and corresponding unit eigenvector of H . Note that the rank-1 matrix Σ∗ is highly anisotropic. More generally, the following Proposition 3 characterizes one kind of anisotropic noise significantly outperforming isotropic noise in order of number of parameters D, given H is ill-conditioned.
Proposition 3 (The benefits of anisotropic noise). With semi-positive definite H and Σ, assume
(1) H is ill-conditioned. Let λ1 ≥ λ2 ≥ · · · ≥ λD ≥ 0 be the eigenvalues of H in descending order, and for some constant k ≪ D and d > 1/2,
$$\lambda_1 > 0, \qquad \lambda_{k+1}, \lambda_{k+2}, \dots, \lambda_D < \lambda_1 D^{-d}, \tag{11}$$
(2) Σ is “aligned” with H. Let u_i be the unit eigenvector corresponding to eigenvalue λ_i; for some projection coefficient a > 0,
$$u_1^T \Sigma u_1 \ \ge\ a\,\lambda_1 \frac{\operatorname{Tr}\Sigma}{\operatorname{Tr}H}, \tag{12}$$
then the benefit of the anisotropic noise over the isotropic one in terms of escaping efficiency can be characterized by the following ratio,
$$\frac{\operatorname{Tr}(H\Sigma)}{\operatorname{Tr}(H\bar\Sigma)} = O\big(a D^{(2d-1)}\big), \tag{13}$$
where $\bar\Sigma = \frac{\operatorname{Tr}\Sigma}{D} I$ denotes the covariance of the isotropic noise meeting the constraint Eq. (6).
To give some geometric intuition on the left hand side of Eq. (12), let the maximal eigenvalue and its corresponding unit eigenvector of Σ be γ1, v1; then it has a lower bound $u_1^T\Sigma u_1 \ge u_1^T v_1\gamma_1 v_1^T u_1 = \gamma_1\langle u_1, v_1\rangle^2$. Thus if the maximal eigenvalues of H and Σ are aligned in proportion, $\gamma_1/\operatorname{Tr}\Sigma \ge a_1\lambda_1/\operatorname{Tr}H$, and the angle of their corresponding unit eigenvectors is close to zero, $\langle u_1, v_1\rangle \ge a_2$, the second condition Eq. (12) in Proposition 3 holds for $a = a_1 a_2$.
Typically, in the scenario of modern deep neural networks, due to the over-parameterization, Hessian and the gradient covariance are usually ill-conditioned and anistropic near minima, as shown by (Sagun et al., 2017) and (Chaudhari & Soatto, 2017). Thus the first condition in Eq. (11) usually holds for deep neural networks, and we further justify it by experiments in Section 5.3. Therefore, in the following section, we turn to focus on how the gradient covariance, i.e. the covariance of SGD noise meets the second condition of Proposition 3 in the context of deep neural networks.
4 THE ANISOTROPIC NOISE OF SGD IN DEEP NETWORKS
In this section, we mainly investigate the anisotropic structure of gradient covariance in SGD, and explore its connection with the Hessian of loss surface.
Around the true parameter According to the classic statistical theory (Pawitan, 2001, Chap. 8), for population loss L(θ) = EX`(θ), with ` being the negative log likelihood, when evaluating at the true parameter θ∗, there is the exact equivalence between the Hessian H of the population loss and Fisher information matrix F ,
$$F(\theta^*) := \mathbb{E}_X[\nabla_\theta\ell(\theta^*)\nabla_\theta\ell(\theta^*)^T] = \mathbb{E}_X[\nabla^2_\theta\ell(\theta^*)] = \nabla^2_\theta L(\theta^*) =: H(\theta^*). \tag{14}$$
In practice, with the assumptions that the sample size N is large enough (i.e. indicating asymptotic behavior) and suitable smoothness conditions, when the current parameter θt is not far from the ground truth, Fisher is close to Hessian. Thus we can obtain the following approximate equality between gradient covariance and Hessian,
$$\Sigma(\theta_t) = F(\theta_t) - \nabla_\theta L(\theta_t)\,\nabla_\theta L(\theta_t)^T \approx F(\theta_t) \approx H(\theta_t).$$
The first approximation is due to the dominance of noise over the mean of gradient in the later stage of SGD optimization, which has been shown in (Shwartz-Ziv & Tishby, 2017). A similar experiment as (Shwartz-Ziv & Tishby, 2017) has been conducted to demonstrate this observation, which is left in Appendix due to the limit of space.
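For intuition, the small synthetic example below illustrates why the first approximation is mild: given a matrix of per-sample gradients whose mean $g_0$ is tiny (as near a minimum), the empirical Fisher $F=\mathbb{E}[gg^T]$ and the covariance $\Sigma = F - g_0g_0^T$ nearly coincide. The numbers are made up purely for illustration:

```python
import torch

torch.manual_seed(0)
N, D = 1000, 50
g0 = 1e-3 * torch.randn(D)                     # tiny full gradient near the minimum
grads = g0 + 0.1 * torch.randn(N, D)           # per-sample gradients g = g0 + noise
F = grads.t() @ grads / N                      # empirical Fisher E[g g^T]
mean_g = grads.mean(dim=0)
Sigma = F - torch.outer(mean_g, mean_g)        # gradient covariance
rel_gap = torch.linalg.matrix_norm(F - Sigma) / torch.linalg.matrix_norm(F)
print(f"relative gap between F and Sigma: {rel_gap:.2e}")   # small when ||g_0|| is small
```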
In the following, we theoretically characterize the closeness between Σ and H in the context of one hidden layer neural networks; and show that the gradient covariance introduced by SGD indeed has more benefits than isotropic one in term of escaping from minima, provided some assumptions.
One hidden layer neural network with fixed output layer parameters For binary classification neural network with one hidden layer in classic setups (with softmax and cross-entropy loss), we have following results to globally bound Fisher and Hessian with each other. Proposition 4 (The relationship between Fisher and Hessian in one hidden layer neural network). Consider the binary classification problem with data {(xi, yi)}i∈I , y ∈ {0, 1}, and typical (either population or empirical) loss as L(θ) = E[φ ◦ f(x; θ)], where f denotes the output of neural network, and φ denotes the cross-entropy loss with softmax,
$$\varphi(f(x), y) = -\left( y\,\log\frac{e^{f(x)}}{1+e^{f(x)}} + (1-y)\,\log\frac{1}{1+e^{f(x)}} \right), \qquad y \in \{0, 1\}.$$
If: (1) the neural network f is with one hidden layer and piece-wise linear activation. And the parameters of output layer are fixed during training; (2) the optimization happens on a set U such that, f(x; θ) ∈ (−C,C),∀θ ∈ U,∀x, i.e., the output of the classifier is bounded during optimization. Then, we have the following relationship between (either population or empirical) Fisher F and Hessian H almost everywhere:
$$e^{-C} F(\theta) \ \preceq\ H(\theta) \ \preceq\ e^{C} F(\theta),$$
where $A \preceq B$ means that $(B - A)$ is positive semi-definite.
There are a few remarks on Proposition 4. Firstly, as shown in (Brutzkus et al., 2017), the considered neural networks in Proposition 4 are non-convex and have multiple minima, and thus it is still nontrivial to consider the escaping from minima. Secondly, the Proposition 4 holds in both population and empirical sense, since the proof does not distinguish the two circumstances. Thirdly, the bound
between F and H holds "globally" in the set U where the output f is bounded, rather than merely around the true global minima as discussed previously.
By Proposition 4, the following relationship between gradient covariance and Hessian could be derived. Proposition 5 (The relationship between gradient covariance and Hessian in one hidden layer neural network). Assume the conditions in Proposition 4 hold, then for some small δ > 0 and for θ close enough to minima θ∗ (local or global),
$$u^T \Sigma u \ \ge\ e^{-2(C+\delta)}\,\lambda\,\frac{\operatorname{Tr}\Sigma}{\operatorname{Tr}H} \tag{15}$$
holds for any positive eigenvalue λ and its corresponding unit eigenvector u of Hessian H .
As a direct corollary of Proposition 5, for such neural networks, the second condition Eq. (12) in Proposition 3 holds in a very loose sense.
Therefore, based on the discussion on population loss around the true parameters and one hidden layer neural network with fixed output layer parameters, given the ill-conditioning of H due to the over-parameterization of modern deep networks, according to Proposition 3, we can conclude the noise structure of SGD helps to escape from sharp minima much faster than the dynamics with isotropic noise, and converge to flatter solutions with a high probability. These flat minima typically generalize well (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017; Neyshabur et al., 2017; Wu et al., 2017). Thus, we attribute such properties of SGD on its better generalization performance comparing to GD, GLD and other dynamics with isotropic noise (Hoffer et al., 2017; Goyal et al., 2017; Keskar et al., 2017).
In the following, we conduct a series of experiments systematically to verify our understanding on the behavior of escaping from minima and its regularization effects for different optimization dynamics.
5 EXPERIMENTS
To better understand how the behavior of anisotropic noise differs from that of isotropic noise, we introduce dynamics with different kinds of noise structure for empirical study, as shown in Table 1.
Table 1: Compared dynamics defined in Eq. (3). For GLD dynamic, GLD diagonal, GLD Hessian and GLD 1st eigvec(H), σt are adjusted to make σt t share the same expected norm as that of SGD. For GLD leading, σt is same as in SGD. Note that GLD 1st eigvec(H) achieves the best escaping efficiency as our indicator suggested.
Noise ε_t and remarks for each dynamics:
- SGD: ε_t ∼ N(0, Σ_t^{sgd}), where Σ_t^{sgd} is defined as in Eq. (1) and σ_t = η_t/√m.
- GLD constant: ε_t ∼ N(0, I); σ_t is a tunable constant.
- GLD dynamic: ε_t ∼ N(0, I); σ_t is adjusted to make σ_t ε_t share the same expected norm as that of SGD.
- GLD diagonal: ε_t ∼ N(0, diag(Σ_t)); the covariance diag(Σ_t) is the diagonal of the covariance of the SGD noise.
- GLD leading: ε_t ∼ N(0, Σ̃_t), with Σ̃_t = Σ_{i=1}^{k} γ_i v_i v_i^T, where γ_i, v_i are the first k leading eigenvalues and the corresponding eigenvectors of the covariance of the SGD noise (a low-rank approximation of Σ_t^{sgd}); σ_t is the same as in SGD.
- GLD Hessian: ε_t ∼ N(0, H̃_t), where H̃_t is a low-rank approximation of the Hessian of the loss L(θ) by its first k leading eigenvalues and corresponding eigenvectors.
- GLD 1st eigvec(H): ε_t ∼ N(0, λ_1 u_1 u_1^T), where λ_1, u_1 are the maximal eigenvalue and its corresponding unit eigenvector of the Hessian of the loss L(θ_t).
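A sketch of how one could sample a single noise vector for each of these dynamics, given estimates of the SGD noise covariance and of the Hessian; the names and the handling of k and σ_t are ours and follow Table 1 only loosely:

```python
import numpy as np

def gld_noise(kind, Sigma_sgd, Hessian, rng, k=20):
    """Draw one noise vector epsilon_t for the dynamics compared in Table 1 (sketch)."""
    D = Sigma_sgd.shape[0]
    if kind in ("gld_const", "gld_dynamic"):
        return rng.standard_normal(D)              # isotropic; sigma_t is applied outside
    if kind == "gld_diag":
        return np.sqrt(np.diag(Sigma_sgd)) * rng.standard_normal(D)
    if kind == "gld_leading":                        # rank-k approximation of Sigma_sgd
        vals, vecs = np.linalg.eigh(Sigma_sgd)
        vals, vecs = np.clip(vals[-k:], 0, None), vecs[:, -k:]
        return vecs @ (np.sqrt(vals) * rng.standard_normal(k))
    if kind == "gld_hessian":                        # rank-k approximation of the Hessian
        vals, vecs = np.linalg.eigh(Hessian)
        vals, vecs = np.clip(vals[-k:], 0, None), vecs[:, -k:]
        return vecs @ (np.sqrt(vals) * rng.standard_normal(k))
    if kind == "gld_1st_eigvec_h":                   # covariance lambda_1 u_1 u_1^T
        vals, vecs = np.linalg.eigh(Hessian)
        return np.sqrt(max(vals[-1], 0.0)) * vecs[:, -1] * rng.standard_normal()
    raise ValueError(kind)
```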
5.1 TWO-DIMENSIONAL TOY EXAMPLE
We design a 2-D toy example L(w1, w2) with two basins, a small one and a large one, corresponding to a sharp and flat minima, (1, 1) and (−1,−1), respectively, both of which are global minima.
Please refer to Appendix for the detailed constructions. We initialize the dynamics of interest with the sharp minimum (w1, w2) = (1, 1), and run them to study their behaviors escaping from this sharp minimum.
To explicitly control the noise magnitude, we only conduct experiments on GD, GLD const, GLD diag, GLD leading (with k = 2 = D in Table 1, or in other words, the exactly covariance of SGD noise), GLD Hessian (k = 2) and GLD 1st eigven(H). And we adjust σt in each dynamics to force their noise to share the same expected squared norm as defined in Eq. (6). Figure 1(a) shows the trajectories of the dynamics escaping from the sharp minimum (1, 1) towards the flat one (−1,−1), while Figure 1(b) presents the success rate of escaping for each dynamic during 100 repeated experiments.
As shown in Figure 1, GLD 1st eigvec(H) achieves the highest success rate, indicating the fastest escaping speed from the sharp minimum. The dynamics with anisotropic noise aligned with Hessian well, including GLD 1st eigvec(H), GLD Hessian and GLD leading, greatly outperform GD, GLD const with isotropic noise, and GLD diag with noise poorly aligned with Hessian. These experiments are consistent with our theoretical analysis on Ornstein-Uhlenbeck process shown Proposition 2 and 3, demonstrating the benefits of anisotropic noise for escaping from sharp minima.
5.2 ONE HIDDEN LAYER NEURAL NETWORK WITH FIXED OUTPUT LAYER PARAMETERS
We empirically show that in a one hidden layer neural network with fixed output layer parameters, the anisotropic noise induced by SGD indeed helps escape from sharp minima more efficiently than isotropic noise. Three networks are trained to binary-classify 1,000 linearly separable two-dimensional points. The number of hidden nodes for each network varies in {20, 200, 2000}. We plot the empirical indicator Tr(HΣ) in Figure 2. We can easily observe that as the number of hidden nodes increases, the ratio Tr(HΣ)/Tr(HΣ̄) is enlarged significantly, which is consistent with Eq. (13) in Proposition 3.
5.3 PRACTICAL DATASETS
In this part, we conduct a series of experiments in real deep learning scenarios to demonstrate the behavior of SGD noise and its implicit regularization effects. We construct a noisy training set based on the FashionMNIST dataset1. Concretely, the training set consists of 1000 images with correct labels and another 200 images with random labels. All the test data have clean labels. A small LeNet-like network is utilized such that the spectral decomposition over
1https://github.com/zalandoresearch/fashion-mnist
gradient covariance matrix and Hessian matrix are computationally feasible. The network consists of two convolutional layers and two fully-connected layers, with 11, 330 parameters in total.
We firstly run the standard gradient decent for 3000 iterations to arrive at the parameters θ∗GD near the global minima with near zero training loss and 100% training accuracy, which are typically sharp minima that generalize poorly (Neyshabur et al., 2017). And then all other compared methods are initialized with θ∗GD and run for optimization with the same learning rate ηt = 0.07 and same batch size m = 20 (if needed) for fair comparison2.
Verification of SGD noise satisfying the conditions in Proposition 3 To see whether the noise of SGD in a real deep learning circumstance satisfies the two conditions in Proposition 3, we run the SGD optimizer initialized from θ∗GD, i.e. the sharp minimum found by GD. Figure 3(a) shows the first 400 eigenvalues of the Hessian at θ∗GD, from which we see that the 140th eigenvalue has already decayed to about 1% of the first eigenvalue. Note that the Hessian H ∈ R^{D×D}, D = 11330, thus H around θ∗GD approximately meets the ill-conditioning requirement in Proposition 3. Figure 3(b) shows the projection coefficient estimated by $\hat a = \frac{u_1^T\Sigma u_1\,\operatorname{Tr}H}{\lambda_1 \operatorname{Tr}\Sigma}$ along the trajectory of SGD. The plot indicates that the projection coefficient is of a decent scale compared to $D^{2d-1}$, thus satisfying the second condition in Proposition 3. Therefore, Proposition 3 ensures that SGD would escape from the minimum θ∗GD faster than GLD in order of $O(D^{2d-1})$, as shown in Figure 3(c). An interesting observation is that in the later stage of SGD optimization, Tr(HΣ) becomes significantly (10^7 times) smaller than in the beginning stage, implying that SGD has already converged to minima that are almost impossible to escape from. This phenomenon demonstrates the reasonableness of employing Tr(HΣ) as an empirical indicator of escaping efficiency.
[Figure 3: (a) the leading eigenvalues of the Hessian at θ∗GD; (b) the projection coefficient $\hat a = \frac{u_1^T\Sigma u_1\,\operatorname{Tr}H}{\lambda_1\operatorname{Tr}\Sigma}$, as shown in Proposition 3; (c) Tr(H_tΣ_t) versus Tr(H_t\bar\Sigma_t) during SGD optimization initialized from θ∗GD, where $\bar\Sigma_t = \frac{\operatorname{Tr}\Sigma_t}{D} I$ denotes the isotropic noise with the same expected squared norm as the SGD noise.]
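For small networks where H and Σ can be formed explicitly, the projection coefficient $\hat a$ can be computed directly; a sketch:

```python
import numpy as np

def projection_coefficient(H, Sigma):
    """a_hat = (u_1^T Sigma u_1) Tr(H) / (lambda_1 Tr(Sigma)), the alignment of Eq. (12)."""
    eigvals, eigvecs = np.linalg.eigh(H)
    lam1, u1 = eigvals[-1], eigvecs[:, -1]
    return float(u1 @ Sigma @ u1) * np.trace(H) / (lam1 * np.trace(Sigma))
```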
Behaviors of different dynamics escaping from minima and their generalization effects To compare the different dynamics in terms of escaping behavior and generalization performance, we run the dynamics initialized from the sharp minimum θ∗GD found by GD. The settings for each compared method are as follows. The hyperparameter σ² for GLD const has already been tuned as optimal (σ = 0.001) by grid search. For GLD leading, we set k = 20 to balance the computational cost and the approximation accuracy. As for GLD Hessian, to reduce the expensive evaluation of such a huge Hessian in each iteration, we set k = 20 and update the Hessian every 10 iterations. We adjust σ_t in GLD dynamic, GLD Hessian and GLD 1st eigvec(H) to guarantee that they share the same expected squared noise norm defined in Eq. (6) as that of SGD. And we measure the expected sharpness of different minima as $\mathbb{E}_{\nu\sim\mathcal{N}(0,\delta^2 I)}[L(\theta+\nu)] - L(\theta)$, as defined in (Neyshabur et al., 2017, Eq. (7)). The results are shown in Figure 4.
As shown in Figure 4, SGD, GLD 1st eigvec(H), GLD leading and GLD Hessian successfully escape from the sharp minima found by GD, while GLD, GLD dynamic and GLD diag are trapped in the minima. This demonstrates that the methods with anisotropic noise “aligned” with loss curvature can help to find flatter minima that generalize well.
We also provide experiments on standard CIFAR-10 with VGG11 in Appendix.
2In fact, in our experiment, we test the equally spacing learning rates in the range [0.01, 0.1], and the final results are consistent with each other.
6 CONCLUSION
We theoretically investigate a general optimization dynamics with unbiased noise, which unifies various existing optimization methods, including SGD. We provide some novel results on the behavior of escaping from minima and its regularization effects. A novel indicator is derived for characterizing the escaping efficiency. Based on this indicator, two conditions are constructed to show what type of noise structure is superior to isotropic noise in terms of escaping. We then analyze the noise structure of SGD in deep learning and find that it indeed satisfies the two conditions, thus explaining the widely known observation that SGD can escape from sharp minima efficiently toward flat minima that generalize well. Various experimental evidence supports our arguments on the behavior of SGD and its effects on generalization. Our study also shows that isotropic noise helps little for escaping from sharp minima, due to the highly anisotropic nature of the landscape. This indicates that it is not sufficient to analyze SGD by treating it as an isotropic diffusion over the landscape (Zhang et al., 2017; Mou et al., 2017). A better understanding of this out-of-equilibrium behavior (Chaudhari & Soatto, 2017) is still needed.
A PROOFS OF PROPOSITIONS IN MAIN PAPER
A.1 PROOF OF PROPOSITION 1
Proof. The "mild smoothness assumptions" refer to Lt = L(θt) ∈ C². Then Itô's lemma holds (Øksendal, 2003).
And by Ito’s lemma, the SDE of Lt is
$$dL_t = \Big(-\nabla L^T\nabla L + \tfrac{1}{2}\operatorname{Tr}\big(\Sigma_t^{\frac12} H_t \Sigma_t^{\frac12}\big)\Big)\, dt + \nabla L^T \Sigma_t^{\frac12}\, dW_t = \Big(-\nabla L^T\nabla L + \tfrac{1}{2}\operatorname{Tr}(H_t\Sigma_t)\Big)\, dt + \nabla L^T \Sigma_t^{\frac12}\, dW_t.$$
Taking expectation with respect to the distribution of θt,
$$d\,\mathbb{E}L_t = \mathbb{E}\Big(-\nabla L^T\nabla L + \tfrac{1}{2}\operatorname{Tr}(H_t\Sigma_t)\Big)\, dt, \tag{16}$$
since the expectation of the Brownian motion term is zero. Thus the solution for $\mathbb{E}L_t$ is
$$\mathbb{E}L_t = L_0 - \int_0^t \mathbb{E}\big(\nabla L^T\nabla L\big)\, dt + \int_0^t \tfrac{1}{2}\,\mathbb{E}\operatorname{Tr}(H_t\Sigma_t)\, dt.$$
A.2 PROOF OF PROPOSITION 2
Proof. Without losing generality, we assume that L0 = 0.
For a multivariate Ornstein-Uhlenbeck process, when θ0 = 0 is a constant, θt follows a multivariate Gaussian distribution (Øksendal, 2003).
Consider the change of variables θ → φ(θ, t) = e^{Ht}θ_t. Here, for a symmetric matrix A,
$$e^{A} := U\,\mathrm{diag}(e^{\lambda_1}, \dots, e^{\lambda_n})\,U^T,$$
where λ_1, . . . , λ_n and U are the eigenvalues and the eigenvector matrix of A. Note that with this notation,
$$\frac{d\,e^{Ht}}{dt} = H e^{Ht}.$$
Applying Itô's lemma, we have
$$d\varphi(\theta_t, t) = e^{Ht}\,\Sigma^{\frac{1}{2}}\, dW_t,$$
which we can integrate from 0 to t to get
$$\theta_t = 0 + \int_0^t e^{H(s-t)}\,\Sigma^{\frac{1}{2}}\, dW_s.$$
The expectation of θ_t is zero. And by Itô's isometry (Øksendal, 2003), the covariance of θ_t is
$$\mathbb{E}\,\theta_t\theta_t^T = \mathbb{E}\int_0^t e^{H(s-t)}\Sigma^{\frac{1}{2}}\, dW_s \left(\int_0^t e^{H(r-t)}\Sigma^{\frac{1}{2}}\, dW_r\right)^T = \mathbb{E}\left[\int_0^t e^{H(s-t)}\Sigma^{\frac{1}{2}}\Sigma^{\frac{1}{2}} e^{H(s-t)}\, ds\right] = \int_0^t e^{H(s-t)}\,\Sigma\, e^{H(s-t)}\, ds,$$
since H and Σ are both constant. Thus,
$$\mathbb{E}L(\theta_t) = \frac{1}{2}\,\mathbb{E}\operatorname{Tr}\big(\theta_t^T H \theta_t\big) = \frac{1}{2}\operatorname{Tr}\big(H\,\mathbb{E}\theta_t\theta_t^T\big) = \frac{1}{2}\int_0^t \operatorname{Tr}\big(H e^{H(s-t)}\Sigma e^{H(s-t)}\big)\, ds = \frac{1}{2}\int_0^t \operatorname{Tr}\big(e^{H(s-t)} H\Sigma\, e^{H(s-t)}\big)\, ds \quad (\text{since } H \text{ is symmetric})$$
$$= \frac{1}{2}\int_0^t \operatorname{Tr}\big(e^{2H(s-t)} H\Sigma\big)\, ds = \frac{1}{2}\operatorname{Tr}\Big(\tfrac{1}{2}H^{-1}\big(I - e^{-2Ht}\big) H\Sigma\Big) = \frac{1}{4}\operatorname{Tr}\big(\big(I - e^{-2Ht}\big)\Sigma\big).$$
The last approximation is by Taylor’s expansion.
A.3 PROOF OF PROPOSITION 3
Proof. Firstly, Tr(HΣ) has the decomposition $\operatorname{Tr}(H\Sigma) = \sum_{i=1}^{D}\lambda_i u_i^T\Sigma u_i$.
Secondly, compute Tr(HΣ) and Tr(HΣ̄) respectively,
$$\operatorname{Tr}(H\Sigma) \ \ge\ u_1^T\Sigma u_1 \ \ge\ a\lambda_1\frac{\operatorname{Tr}\Sigma}{\operatorname{Tr}H}, \qquad \operatorname{Tr}(H\bar\Sigma) = \frac{\operatorname{Tr}\Sigma}{D}\operatorname{Tr}H,$$
and bound their quotient,
$$\frac{\operatorname{Tr}(H\Sigma)}{\operatorname{Tr}(H\bar\Sigma)} \ \ge\ \frac{a\lambda_1 D}{(\operatorname{Tr}H)^2} \ \ge\ \frac{a\lambda_1 D}{\big(k\lambda_1 + (D-k)D^{-d}\lambda_1\big)^2} = O\big(aD^{2d-1}\big). \tag{17}$$
The proof is finished.
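As an illustrative numerical check of this rate (a sketch with made-up dimensions, not from the paper), one can build an ill-conditioned H and a rank-one Σ aligned with its top eigenvector, and compare the resulting ratio with the nominal $aD^{2d-1}$ scaling:

```python
import numpy as np

def ratio_check(D=500, k=5, d=0.8, a=1.0, seed=0):
    """Compare Tr(H Sigma)/Tr(H Sigma_bar) against the O(a D^{2d-1}) rate of Proposition 3."""
    rng = np.random.default_rng(seed)
    lam = np.concatenate([np.ones(k), np.full(D - k, D ** (-d))])   # eigenvalues of H, lambda_1 = 1
    Q, _ = np.linalg.qr(rng.standard_normal((D, D)))
    H = (Q * lam) @ Q.T
    u1 = Q[:, 0]
    Sigma = np.outer(u1, u1)              # Tr(Sigma) = 1, fully aligned with u_1
    Sigma_bar = np.eye(D) / D             # isotropic noise with the same trace
    ratio = np.trace(H @ Sigma) / np.trace(H @ Sigma_bar)
    return ratio, a * D ** (2 * d - 1)    # both are of the same order for large D
```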
A.4 PROOF OF PROPOSITION 4
Proof. Firstly compute the gradient and Hessian of φ,
$$\frac{\partial\varphi}{\partial f} = \frac{e^f}{1+e^f} - y = \begin{cases} \dfrac{e^f}{1+e^f} > 0, & y = 0,\\[4pt] -\dfrac{1}{1+e^f} < 0, & y = 1, \end{cases} \qquad \frac{\partial^2\varphi}{\partial f^2} = \frac{e^f}{(1+e^f)^2}.$$
And note the Gauss–Newton decomposition for functions of the form L = φ ◦ f,
$$H = \mathbb{E}_{(x,y)}\frac{\partial^2\ell((x,y);\theta)}{\partial\theta^2} = \mathbb{E}_{(x,y)}\frac{\partial^2\varphi}{\partial f^2}\frac{\partial f}{\partial\theta}\frac{\partial f^T}{\partial\theta} + \mathbb{E}_{(x,y)}\frac{\partial\varphi}{\partial f}\frac{\partial^2 f}{\partial\theta^2}.$$
Since the output layer parameters of f are fixed and the activation functions are piece-wise linear, f(x; θ) is a piece-wise linear function of its parameters θ. Therefore $\frac{\partial^2 f}{\partial\theta^2} = 0$ a.e., and $H = \mathbb{E}_{(x,y)}\frac{\partial^2\varphi}{\partial f^2}\frac{\partial f}{\partial\theta}\frac{\partial f^T}{\partial\theta}$.
It is easy to check that $e^{-C}\big(\frac{\partial\varphi}{\partial f}\big)^2 \le \frac{\partial^2\varphi}{\partial f^2} \le e^{C}\big(\frac{\partial\varphi}{\partial f}\big)^2$. Thus,
$$H = \mathbb{E}_{(x,y)}\frac{\partial^2\varphi}{\partial f^2}\frac{\partial f}{\partial\theta}\frac{\partial f^T}{\partial\theta} \ \preceq\ \mathbb{E}_{(x,y)} e^{C}\Big(\frac{\partial\varphi}{\partial f}\Big)^2\frac{\partial f}{\partial\theta}\frac{\partial f^T}{\partial\theta} = \mathbb{E}_{(x,y)} e^{C}\Big(\frac{\partial\varphi}{\partial f}\frac{\partial f}{\partial\theta}\Big)\Big(\frac{\partial\varphi}{\partial f}\frac{\partial f}{\partial\theta}\Big)^T = e^{C}F,$$
$$H = \mathbb{E}_{(x,y)}\frac{\partial^2\varphi}{\partial f^2}\frac{\partial f}{\partial\theta}\frac{\partial f^T}{\partial\theta} \ \succeq\ \mathbb{E}_{(x,y)} e^{-C}\Big(\frac{\partial\varphi}{\partial f}\Big)^2\frac{\partial f}{\partial\theta}\frac{\partial f^T}{\partial\theta} = \mathbb{E}_{(x,y)} e^{-C}\Big(\frac{\partial\varphi}{\partial f}\frac{\partial f}{\partial\theta}\Big)\Big(\frac{\partial\varphi}{\partial f}\frac{\partial f}{\partial\theta}\Big)^T = e^{-C}F.$$
A.5 PROOF OF PROPOSITION 5
Proof. For simplicity, we define g := ∇ℓ and g_0 := ∇L = E∇ℓ. The gradient covariance and the Fisher have the following relationship,
$$F = \mathbb{E}[g g^T] = \mathbb{E}[(g_0+\epsilon)(g_0+\epsilon)^T] = g_0 g_0^T + \mathbb{E}[\epsilon\epsilon^T] = g_0 g_0^T + \Sigma.$$
Applying Taylor's expansion to $g_0(\theta)$, $g_0(\theta) = g_0(\theta^*) + H(\theta^*)(\theta-\theta^*) + o(\theta-\theta^*) = H(\theta^*)(\theta-\theta^*) + o(\theta-\theta^*)$. Hence, $\|g_0(\theta)\|_2^2 \le \|H\|_2^2\,\|\theta-\theta^*\|_2^2 + o(\|\theta-\theta^*\|_2^2)$. Therefore, with the condition $\|\theta-\theta^*\|_2 \le \frac{\sqrt{\delta\, u^T F u}}{\|H\|_2}$, we have $\|g_0(\theta)\|_2^2 \le \delta\, u^T F u + o(|\delta|)$. Thus,
$$\frac{u^T\Sigma u}{\operatorname{Tr}\Sigma} = \frac{u^T F u - u^T g_0 g_0^T u}{\operatorname{Tr}F - \operatorname{Tr}(g_0 g_0^T)} \ \ge\ \frac{u^T F u - \|g_0\|_2^2}{\operatorname{Tr}F - \|g_0\|_2^2} \ \ge\ \frac{u^T F u - \|g_0\|_2^2}{\operatorname{Tr}F} = \frac{u^T F u}{\operatorname{Tr}F}\left(1-\frac{\|g_0\|_2^2}{u^T F u}\right) \ \ge\ \frac{u^T F u}{\operatorname{Tr}F}\big(1-\delta-o(|\delta|)\big) \ \ge\ \frac{u^T F u}{\operatorname{Tr}F}\, e^{-2\delta},$$
for δ small enough.
On the other hand, Proposition 4 indicates that $e^{-C}F \preceq H \preceq e^{C}F$, which means that for all $u$, $u^T(e^{C}F - H)u \ge 0$ and $\operatorname{Tr}(H - e^{-C}F) \ge 0$. Thus $\frac{u^T F u}{\operatorname{Tr}F} \ge \frac{u^T(e^{-C}H)u}{\operatorname{Tr}(e^{C}H)}$.
Therefore, for $\lambda, u$ being a positive eigenvalue and the corresponding unit eigenvector of H, we have
$$\frac{u^T F u}{\operatorname{Tr}F} \ \ge\ e^{-2C}\frac{\lambda}{\operatorname{Tr}H}, \qquad \frac{u^T\Sigma u}{\operatorname{Tr}\Sigma} \ \ge\ \frac{u^T F u}{\operatorname{Tr}F}\, e^{-2\delta} \ \ge\ e^{-2(C+\delta)}\frac{\lambda}{\operatorname{Tr}H}.$$
B ADDITIONAL EXPERIMENTS
B.1 DOMINANCE OF NOISE OVER GRADIENT
Figure 5 shows the comparison between the mean of the gradient and the expected norm of the noise during training with SGD. The dataset and model are the same as in the FashionMNIST experiments in the main paper (see Section C.2). From Figure 5, we see that in the later stage of SGD optimization the noise indeed dominates the gradient.
These experiments are implemented by TensorFlow 1.5.0.
B.2 THE FIRST 50 ITERATIONS OF FASHIONMNIST EXPERIMENTS IN MAIN PAPER
Figure 6 shows the first 50 iterations of FashionMNIST experiments in main paper. We observe that SGD, GLD 1st eigvec(H), GLD Hessian and GLD leading successfully escape from the sharp minima found by GD, while GLD diag, GLD dynamic, GLD const and GD do not.
These experiments are implemented by TensorFlow 1.5.0.
B.3 ADDITIONAL EXPERIMENTS ON STANDARD CIFAR-10 AND VGG11
Dataset Standard CIFAR-10 dataset without data augmentation.
Model Standard VGG11 network without any regularization, including dropout, batch normalization and weight decay. The total number of parameters of this network is 9,750,922.
Training details The learning rate ηt = 0.05 is fixed for all optimizers, tuned for the best generalization performance of GD. The batch size of SGD is m = 100. The noise standard deviation of GLD constant is σ = 10−3, which is tuned to be the best. Due to computational limitations, we only conduct experiments on GD, GLD const, GLD dynamic, GLD diag and SGD.
Estimation of Sharpness The sharpness is estimated by
$$\frac{1}{M}\sum_{j=1}^{M} L(\theta+\nu_j) - L(\theta), \qquad \nu_j \sim \mathcal{N}(0,\delta^2 I),$$
with M = 100 and δ = 0.01.
Experiments Similar experiments as in the main paper are conducted for CIFAR-10 and VGG11, as shown in Figure 7. The observations and conclusions are consistent with those in the main paper.
These experiments are implemented by PyTorch 0.3.0.
C DETAILED SETUPS FOR EXPERIMENTS IN MAIN PAPER
C.1 TWO-DIMENSIONAL TOY EXAMPLE
Loss Surface The loss surface L(w1, w2) is constructed by
$$s_1 = w_1 - 1 - x_1, \qquad s_2 = w_2 - 1 - x_2,$$
$$\ell(w_1, w_2; x_1, x_2) = \min\big\{\, 10(s_1\cos\theta - s_2\sin\theta)^2 + 100(s_1\cos\theta + s_2\sin\theta)^2,\ (w_1 - x_1 + 1)^2 + (w_2 - x_2 + 1)^2 \,\big\},$$
$$L(w_1, w_2) = \frac{1}{N}\sum_{k=1}^{N} \ell(w_1, w_2; x_1^k, x_2^k),$$
where
$$\theta = \frac{\pi}{4}, \qquad N = 100, \qquad x^k \sim \mathcal{N}(0, \Sigma), \qquad \Sigma = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.$$
Note that Σ is the inverse of the Hessian of the quadratic form generating the sharp minimum. The 3-dimensional plot of the loss surface is shown in Figure 8.
Hyperparameters All learning rates are equal to 0.005. All dynamics concerned are tuned to share the same expected squared noise norm, 0.01. The number of iterations in one run is 500.
These experiments are implemented by PyTorch 0.3.0.
C.2 FASHIONMNIST WITH CORRUPTED LABELS
Dataset Our training set consists of 1200 examples randomly sampled from the original FashionMNIST training set, 200 of which are assigned random (wrong) labels. The test set is the same as the original FashionMNIST test set.
Model Network architecture: input⇒ conv1⇒ max_pool⇒ ReLU⇒ conv2⇒ max_pool
⇒ ReLU⇒ fc1⇒ ReLU⇒ fc2⇒ output. Both convolutional layers use 5 × 5 kernels with 10 channels and no padding. The number of hidden units between the fully connected layers is 50. The total number of parameters of this network is 11,330.
[Figure 8: 3-dimensional plot of the toy loss surface, with axes w1, w2 and loss.]
Training details
Estimation of Sharpness The sharpness is estimated by
$$\frac{1}{M}\sum_{j=1}^{M} L(\theta+\nu_j) - L(\theta), \qquad \nu_j \sim \mathcal{N}(0,\delta^2 I),$$
with M = 1,000 and δ = 0.01.
These experiments are implemented by TensorFlow 1.5.0. | 1. What is the main contribution of the paper regarding anisotropic noise in stochastic optimization algorithms?
2. What are the concerns regarding the novelty of the paper, particularly in its theoretical development?
3. Do you have any questions about the validity of Proposition 4 in the context of non-convex neural networks?
4. How do you assess the quality and relevance of the experimental results presented in the paper, especially in comparison to prior works?
5. Would a more in-depth analysis of SGD with anisotropic noise, building upon previous research on isotropic noise or convergence properties of Lagrange dynamics, strengthen the paper's contributions? | Review | Review
This paper studies the effort of anisotropic noise in stochastic optimization algorithms. The goal is to show that SGD escapes from sharp minima due to such noise. The paper provides preliminary empirical results using different kinds of noise to suggest that anisotropic noise is effective for generalization of deep networks.
Detailed comments:
1. I have concerns about the novelty of the paper: It builds heavily upon previous work on modeling SGD as a stochastic differential equation to understand its noise characteristics. The theoretical development of this manuscript is straightforward until simplistic assumptions such as the Ornstein-Uhlenbeck process (which amounts to a local analysis of SGD near a critical point) and a neural network with one hidden layer. Similar results have also been in the the literature before in a number of places, e.g., https://arxiv.org/abs/1704.04289 and references therein.
2. Proposition 4 looks incorrect. If the neural network is non-convex, how can the positive semi-definite Fisher information matrix F sandwich the Hessian which may have strictly negative eigenvalues at places?
3. Section 5 contains toy experiments on a 2D problem, a one layer neural network and a 1000-image subset of the FashionMNIST dataset. It is hard to validate the claims of the paper using these experiments, they need to be more thorough. The Appendix contains highly preliminary experiments on CIFAR-10 using VGG-11.
4. A rigorous theoretical understanding of SGD with isotropic noise or convergence properties of Lagevin dynamics has been developed in the literature previously, it’d be beneficial to analyze SGD with anisotropic noise in a similar vein. |
ICLR | Title
The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Minima and Regularization Effects
Abstract
Understanding the behavior of stochastic gradient descent (SGD) in the context of deep neural networks has raised lots of concerns recently. Along this line, we theoretically study a general form of gradient based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. Through investigating this general optimization dynamics, we analyze the behavior of SGD on escaping from minima and its regularization effects. A novel indicator is derived to characterize the efficiency of escaping from minima through measuring the alignment of noise covariance and the curvature of loss function. Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in term of escaping efficiency. We further show that the anisotropic noise in SGD satisfies the two conditions, and thus helps to escape from sharp and poor minima effectively, towards more stable and flat minima that typically generalize well. We verify our understanding through comparing this anisotropic diffusion with full gradient descent plus isotropic diffusion (i.e. Langevin dynamics) and other types of position-dependent noise.
1 INTRODUCTION
As a successful learning algorithm, stochastic gradient descent (SGD) was originally adopted for dealing with the computational bottleneck of training neural networks with large-scale datasets (Bottou, 1991). Its empirical efficiency and effectiveness have attracted lots of attention. And thus, SGD and its variants have become standard workhorse for learning deep models. Besides the aspect of empirical efficiency, recently, researchers started to analyze the optimization behaviors of SGD and its impacts on generalization.
The optimization properties of SGD have been studied from various perspectives. The convergence behaviors of SGD for simple one hidden layer neural networks were investigated in (Li & Yuan, 2017; Brutzkus et al., 2017). In non-convex settings, the characterization of how SGD escapes from stationary points, including saddle points and local minima, was analyzed in (Daneshmand et al., 2018; Jin et al., 2017; Hu et al., 2017).
On the other hand, in the context of deep learning, researchers realized that the noise introduced by SGD impacts the generalization, thanks to the research on the phenomenon that training with a large batch could cause a significant drop of test accuracy (Keskar et al., 2017). Particularly, several works attempted to investigate how the magnitude of the noise influences the generalization during the process of SGD optimization, including the batch size and learning rate (Hoffer et al., 2017; Goyal et al., 2017; Chaudhari & Soatto, 2017; Jastrzębski et al., 2017). Another line of research interpreted SGD from a Bayesian perspective. In (Mandt et al., 2017; Chaudhari & Soatto, 2017), SGD was interpreted as performing variational inference, where certain entropic regularization involves to prevent overfitting. And the work (Smith & Le, 2018) tried to provide an understanding based on model evidence. These explanations are compatible with the flat/sharp minima argument (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017), since Bayesian inference tends to targeting the region with large probability mass, corresponding to the flat minima.
However, when analyzing the optimization behavior and regularization effects of SGD, most of existing works only assume the noise covariance of SGD is constant or upper bounded by some
constant, and what role the noise structure of stochastic gradient plays in optimization and generalization was rarely discussed in literature.
In this work, we theoretically study a general form of gradient-based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. By investigating this general dynamics, we analyze how the noise structure of SGD influences the escaping behavior from minima and its regularization effects. Several novel theoretical results and empirical justifications are made.
1. We derive a key indicator to characterize the efficiency of escaping from minima through measuring the alignment of noise covariance and the curvature of loss function. Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in term of escaping efficiency;
2. We further justify that SGD in the context of deep neural networks satisfies these two conditions, and thus provide a plausible explanation why SGD can escape from sharp minima more efficiently, converging to flat minima with a higher probability. Moreover, these flat minima typically generalize well according to various works (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017; Neyshabur et al., 2017; Wu et al., 2017). We also show that Langevin dynamics with well tuned isotropic noise cannot beat SGD, which further confirms the importance of noise structure of SGD;
3. A large number of experiments are designed systematically to justify our understanding on the behavior of the anisotropic diffusion of SGD. We compare SGD with full gradient descent with different types of diffusion noise, including isotropic and positiondependent/independent noise. All these comparisons demonstrate the effectiveness of anisotropic diffusion for good generalization in training deep networks.
The remaining of the paper is organized as follows. In Section 2, we introduce the background of SGD and a general form of optimization dynamics of interest. We then theoretically study the behaviors of escaping from minima in Ornstein-Uhlenbeck process in Section 3, and establish two conditions for characterizing the noise structure that affects the escaping efficiency. In Section 4, we show that the noise of SGD in the context of deep learning meets the two conditions, and thus explains its superior efficiency of escaping from sharp minima over other dynamics with isotropic noise. Various experiments are conducted for verifying our understanding in Section 5, and we conclude the paper in Section 6.
2 BACKGROUND
In general, supervised learning usually involves an optimization process of minimizing an empirical loss over training data, L(θ) := 1/N ∑N i=1 `(f(xi; θ), yi), where {(xi, yi)}Ni=1 denotes the training set with N i.i.d. samples, the prediction function f is often parameterized by θ ∈ RD, such as deep neural networks. And `(·, ·) is the loss function, such as mean squared error and cross entropy, typically corresponding to certain negative log likelihood. Due to the over parameterization and non-convexity of the loss function in deep networks, there exist multiple global minima, exhibiting diverse generalization performance. We call those solutions generalizing well good solutions or minima, and vice versa.
Gradient descent and its stochastic variants A typical approach to minimize the loss function is gradient descent (GD), the dynamics of which in each iteration t is, θt+1 = θt − ηtg0(θt), where g0(θt) = ∇θL(θt) denotes the full gradient and ηt denotes the learning rate. In non-convex optimization, a more useful kind of gradient based optimizers act like GD with an unbiased noise, including gradient Langevin dynamics (GLD), θt+1 = θt − ηtg0(θt) + σt t, t ∼ N (0, I), and stochastic gradient descent (SGD), during each iteration t of which, a minibatch of training samples with size m are randomly selected, with index set Bt ⊂ {1, 2, . . . , N}, and a stochastic gradient is evaluated based on the chosen minibatch, g̃(θt) = ∑ i∈Bt ∇θ`(f(xi; θt), yi)/m, which is an unbiased estimator of the full gradient g0(θt). Then, the parameters are updated with some learning rate ηt as θt+1 = θt − ηtg̃(θt). Denote g(θ) = ∇θ`((f(x; θ), y), the gradient for loss with a single data point (x, y), and assume that the size of minibatch is large enough for the central limit theorem to
hold, and thus g̃(θt) follows a Gaussian distribution (Mandt et al., 2017; Li et al., 2017),
$$\tilde g(\theta_t) \sim \mathcal{N}\!\Big(g_0(\theta_t),\ \tfrac{1}{m}\Sigma(\theta_t)\Big), \quad \text{where}\quad \Sigma(\theta_t) \approx \frac{1}{N}\sum_{i=1}^{N}\big(g(\theta_t;x_i)-g_0(\theta_t)\big)\big(g(\theta_t;x_i)-g_0(\theta_t)\big)^T. \tag{1}$$
Note that the covariance matrix Σ depends on the model architecture, dataset and the current parameter θt. Now we can rewrite the update of SGD as
$$\theta_{t+1} = \theta_t - \eta_t g_0(\theta_t) + \frac{\eta_t}{\sqrt{m}}\,\epsilon_t, \qquad \epsilon_t \sim \mathcal{N}\big(0,\ \Sigma(\theta_t)\big). \tag{2}$$
Inspired by GLD and SGD, we may consider a general kind of optimization dynamics, namely, gradient descent with unbiased noise,
$$\theta_{t+1} = \theta_t - \eta_t g_0(\theta_t) + \sigma_t \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, \Sigma_t). \tag{3}$$
For small enough constant learning rate ηt = η, the above iteration in Eq. (3) can be treated as the numerical discretization of the following stochastic differential equation (Li et al., 2017; Jastrzębski et al., 2017; Chaudhari & Soatto, 2017),
$$d\theta_t = -\nabla_\theta L(\theta_t)\, dt + \sqrt{\eta\sigma_t^2\,\Sigma_t}\; dW_t. \tag{4}$$
Considering √ ησ2tΣt as the coefficient of noise term, existing works (Hoffer et al., 2017; Jastrzębski et al., 2017) studied the influence of noise magnitude of SGD on generalization, i.e. ησ2t = η/m.
In this work, we focus on studying the benefits of anisotropic structure of Σt in SGD helping escape from minima by bridging the covariance matrix with the Hessian of the loss surface, and its implicit regularization effects on generalization, especially in deep learning context. For the purpose of eliminating the influence of the noise magnitude, we constrain it to be a constant when studying different structures of noise covariance. The noise magnitude could be evaluated as the expectation of the squared norm of the noise vector,
$$\mathbb{E}\big[(\sqrt{\eta}\,\sigma_t\epsilon_t)^T(\sqrt{\eta}\,\sigma_t\epsilon_t)\big] = \eta\sigma_t^2\,\mathbb{E}[\epsilon_t^T\epsilon_t] = \eta\sigma_t^2\operatorname{Tr}\mathbb{E}[\epsilon_t\epsilon_t^T] = \eta\sigma_t^2\operatorname{Tr}\Sigma_t. \tag{5}$$
Thus, we introduce the following constraint,
given time t, ησ2t Tr (Σt) is constant. (6)
From the statistical physics point of view, Tr(ησ2tΣt) characterizes the kinetic energy (Gardiner), thus it is natural to force the energy to be unchanging, otherwise it is trivial that the higher the energy is, the less stable the system is.
For simplicity, we absorb ησ2t into Σt, denoting ησ 2 tΣt as Σt. If not pointed out, the subscript t of matrix Σt is omitted to emphasize that we are fixing t and discussing the varying structure of Σ.
3 THE BEHAVIORS OF ESCAPING FROM MINIMA IN ORNSTEIN-UHLENBECK PROCESS
For a general loss function L(θ) = EX`X(θ) (the expectation could be either population or empirical), where X denotes data example and θ denoted parameters to be optimized, under suitable smoothness assumptions, the SDE associated with the gradient variant optimizer as shown in Eq. (4) can be written as follows (Li et al., 2017; Jastrzębski et al., 2017; Chaudhari & Soatto, 2017; Hu et al., 2017), with little abuse of notation,
dθt = −∇θL(θt) dt+ Σ 1 2 t dWt. (7)
Let L0 = L(θ0) be one of the minimal values of L(θ), then for a fixed t small enough (such that Lt−L0 ≥ 0), Eθt [Lt−L0] characterizes the efficiency of θ escaping from the minimum θ0 of L(θ). It is natural to measure the escaping efficiency using E[Lt − L0] since it characterizes the increase of the potential, i.e., the increase of the loss L. And also note that Lt − L0 ≥ 0, for any δ > 0, the escaping probability P (Lt − L0 ≥ δ) can be controlled by the expectation E[Lt − L0] since by Markov’s inequality, we have P (Lt − L0 ≥ δ) ≤ E[Lt−L0]δ .
Proposition 1 (Escaping efficiency for general process). For the process (7), provided mild smoothness assumptions, the escaping efficiency from the minimum θ0 is,
$$\mathbb{E}[L_t - L_0] = -\int_0^t \mathbb{E}\big[\nabla L^T\nabla L\big]\, dt + \int_0^t \tfrac{1}{2}\,\mathbb{E}\operatorname{Tr}(H_t\Sigma_t)\, dt, \tag{8}$$
where Ht denotes the Hessian of L(θt) at θt.
We provide the proof in Appendix, and the same for the other propositions.
The escaping efficiency for general processes is hard to analyze due to the intractability of the integral in Eq. (8). However, we may consider the second-order approximation locally near the minimum θ0, where $L(\theta) \approx L_0 + \frac{1}{2}(\theta-\theta_0)^T H (\theta-\theta_0)$. Without losing generality, we suppose θ0 = 0. Further, suppose that H is a positive definite matrix and the diffusion covariance Σt = Σ is constant in t. Then the SDE (7) becomes an Ornstein-Uhlenbeck process,
$$d\theta_t = -H\theta_t\, dt + \Sigma^{\frac{1}{2}}\, dW_t, \qquad \theta_0 = 0. \tag{9}$$
Proposition 2 (Escaping efficiency of Ornstein-Uhlenbeck process). For Ornstein-Uhlenbeck process (9), with t small enough, the escaping efficiency from minimum θ0 = 0 is,
$$\mathbb{E}[L_t - L_0] = \frac{1}{4}\operatorname{Tr}\big(\big(I - e^{-2Ht}\big)\Sigma\big) \approx \frac{t}{2}\operatorname{Tr}(H\Sigma). \tag{10}$$
Inspired by Proposition 1 and Proposition 2, we propose Tr(HΣ) as an empirical indicator measuring the efficiency with which a stochastic process escapes from minima. Now we turn to analyzing which kind of noise covariance structure Σ will benefit escaping from sharp minima, under the constraint Eq. (6).
Firstly, for an isotropic loss surface, i.e., H = λI, the escaping efficiency is E[L_t − L_0] = (λt/2) Tr Σ, which is invariant under the constraint that Tr Σ is constant (Eq. (6)). Thus it is only nontrivial to study the impact of the noise structure when the Hessian of the loss surface is anisotropic.
Secondly, with H and Σ being semi-positive definite, to achieve the maximum of Tr(HΣ) under constraint (6), Σ should be the rank-one matrix Σ* = (Tr Σ) · u_1 u_1^T, where λ_1, u_1 are the maximal eigenvalue and corresponding unit eigenvector of H. Note that Σ* is highly anisotropic. More generally, the following Proposition 3 characterizes one kind of anisotropic noise that significantly outperforms isotropic noise, by a factor growing with the number of parameters D, given that H is ill-conditioned.
Proposition 3 (The benefits of anisotropic noise). With semi-positive definite H and Σ, assume
(1) H is ill-conditioned. Let λ_1 ≥ λ_2 ≥ · · · ≥ λ_D ≥ 0 be the eigenvalues of H in descending order, and for some constant k ≪ D and d > 1/2,
λ_1 > 0, λ_{k+1}, λ_{k+2}, . . . , λ_D < λ_1 D^{−d}, (11)
(2) Σ is “aligned” with H . Let ui be the corresponding unit eigenvector of eigenvalue λi, for some projection coefficient a > 0,
u_1^T Σ u_1 ≥ a λ_1 (Tr Σ / Tr H), (12)
then the benefit of the anisotropic noise over the isotropic one in terms of escaping efficiency can be characterized by the following ratio,
Tr(HΣ) / Tr(HΣ̄) = O( a D^{2d−1} ), (13)
where Σ̄ = (Tr Σ / D) I denotes the covariance of isotropic noise meeting the constraint Eq. (6).
To give some geometric intuition for the left-hand side of Eq. (12), let the maximal eigenvalue of Σ and its corresponding unit eigenvector be γ_1, v_1; then u_1^T Σ u_1 ≥ u_1^T v_1 γ_1 v_1^T u_1 = γ_1 ⟨u_1, v_1⟩^2. Thus if the maximal eigenvalues of H and Σ are aligned in proportion, γ_1 / Tr Σ ≥ a_1 λ_1 / Tr H, and the angle between their corresponding unit eigenvectors is close to zero, ⟨u_1, v_1⟩ ≥ a_2, then the second condition Eq. (12) in Proposition 3 holds for a = a_1 a_2.
Typically, in the scenario of modern deep neural networks, due to over-parameterization, the Hessian and the gradient covariance are usually ill-conditioned and anisotropic near minima, as shown by (Sagun et al., 2017) and (Chaudhari & Soatto, 2017). Thus the first condition in Eq. (11) usually holds for deep neural networks, and we further justify it by experiments in Section 5.3. Therefore, in the following section, we focus on how the gradient covariance, i.e., the covariance of SGD noise, meets the second condition of Proposition 3 in the context of deep neural networks.
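The scale of the gap promised by Proposition 3 is easy to see numerically. The sketch below (ours; the spectrum is an arbitrary ill-conditioned example satisfying Eq. (11)) compares the indicator Tr(HΣ*) for rank-one noise aligned with u_1 against Tr(HΣ̄) for isotropic noise with the same trace, working directly in the eigenbasis of H:

```python
import numpy as np

D, d = 10000, 1.0
# Ill-conditioned spectrum as in Eq. (11): a few large eigenvalues, the rest below lambda_1 * D^{-d}.
lam = np.full(D, 1e-4 * D ** (-d))
lam[:5] = [1.0, 0.8, 0.5, 0.3, 0.1]
trace_budget = 1.0                        # constraint Eq. (6): Tr(Sigma) is fixed

tr_H_aniso = lam[0] * trace_budget        # Tr(H Sigma*) with Sigma* = (Tr Sigma) u1 u1^T
tr_H_iso = lam.sum() * trace_budget / D   # Tr(H Sigma_bar) with Sigma_bar = (Tr Sigma / D) I
print("anisotropic / isotropic escaping indicator:", tr_H_aniso / tr_H_iso)
```

The printed ratio grows like D^{2d−1}, matching Eq. (13).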
4 THE ANISOTROPIC NOISE OF SGD IN DEEP NETWORKS
In this section, we mainly investigate the anisotropic structure of gradient covariance in SGD, and explore its connection with the Hessian of loss surface.
Around the true parameter. According to classic statistical theory (Pawitan, 2001, Chap. 8), for the population loss L(θ) = E_X ℓ(θ), with ℓ being the negative log-likelihood, when evaluated at the true parameter θ*, there is an exact equivalence between the Hessian H of the population loss and the Fisher information matrix F,
F(θ*) := E_X[∇_θ ℓ(θ*) ∇_θ ℓ(θ*)^T] = E_X[∇_θ^2 ℓ(θ*)] = ∇_θ^2 L(θ*) =: H(θ*). (14)
In practice, with the assumptions that the sample size N is large enough (indicating asymptotic behavior) and suitable smoothness conditions, when the current parameter θ_t is not far from the ground truth, the Fisher information is close to the Hessian. Thus we obtain the following approximate equality between the gradient covariance and the Hessian,
Σ(θ_t) = F(θ_t) − ∇_θ L(θ_t) ∇_θ L(θ_t)^T ≈ F(θ_t) ≈ H(θ_t).
The first approximation is due to the dominance of the noise over the mean of the gradient in the later stage of SGD optimization, which has been shown in (Shwartz-Ziv & Tishby, 2017). A similar experiment to (Shwartz-Ziv & Tishby, 2017) has been conducted to demonstrate this observation; it is left to the Appendix due to space limitations.
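To make the chain Σ(θ_t) ≈ F(θ_t) ≈ H(θ_t) concrete, the following sketch (ours, not from the paper) shows how the SGD noise covariance and the projection coefficient a of Eq. (12) could be estimated from per-example gradients; per_example_grads is an assumed user-supplied (N, D) array produced by the user's autodiff framework.

```python
import numpy as np

def noise_covariance(per_example_grads):
    """Empirical gradient covariance Sigma = E[g g^T] - g_bar g_bar^T from an (N, D) array."""
    g_bar = per_example_grads.mean(axis=0)
    centered = per_example_grads - g_bar
    return centered.T @ centered / len(per_example_grads)

def projection_coefficient(Sigma, H):
    """Estimate a in Eq. (12): a_hat = (u1^T Sigma u1) Tr(H) / (lambda1 Tr(Sigma))."""
    eigvals, eigvecs = np.linalg.eigh(H)
    lam1, u1 = eigvals[-1], eigvecs[:, -1]          # largest eigenvalue and its eigenvector
    return (u1 @ Sigma @ u1) * np.trace(H) / (lam1 * np.trace(Sigma))
```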
In the following, we theoretically characterize the closeness between Σ and H in the context of one-hidden-layer neural networks, and show that the gradient covariance induced by SGD indeed has more benefits than an isotropic one in terms of escaping from minima, under some assumptions.
One hidden layer neural network with fixed output layer parameters For binary classification neural network with one hidden layer in classic setups (with softmax and cross-entropy loss), we have following results to globally bound Fisher and Hessian with each other. Proposition 4 (The relationship between Fisher and Hessian in one hidden layer neural network). Consider the binary classification problem with data {(xi, yi)}i∈I , y ∈ {0, 1}, and typical (either population or empirical) loss as L(θ) = E[φ ◦ f(x; θ)], where f denotes the output of neural network, and φ denotes the cross-entropy loss with softmax,
φ(f(x), y) = −( y log( e^{f(x)} / (1 + e^{f(x)}) ) + (1 − y) log( 1 / (1 + e^{f(x)}) ) ), y ∈ {0, 1}.
If: (1) the neural network f has one hidden layer and piece-wise linear activations, and the parameters of the output layer are fixed during training; (2) the optimization happens on a set U such that f(x; θ) ∈ (−C, C), ∀θ ∈ U, ∀x, i.e., the output of the classifier is bounded during optimization. Then we have the following relationship between the (either population or empirical) Fisher F and Hessian H almost everywhere:
e^{−C} F(θ) ⪯ H(θ) ⪯ e^{C} F(θ).
Here A ⪯ B means that (B − A) is semi-positive definite.
There are a few remarks on Proposition 4. Firstly, as shown in (Brutzkus et al., 2017), the considered neural networks in Proposition 4 are non-convex and have multiple minima, and thus it is still nontrivial to consider the escaping from minima. Secondly, the Proposition 4 holds in both population and empirical sense, since the proof does not distinguish the two circumstances. Thirdly, the bound
between F and H holds "globally" in the set U where the output f is bounded, rather than merely around the true global minima as discussed previously.
By Proposition 4, the following relationship between gradient covariance and Hessian could be derived. Proposition 5 (The relationship between gradient covariance and Hessian in one hidden layer neural network). Assume the conditions in Proposition 4 hold, then for some small δ > 0 and for θ close enough to minima θ∗ (local or global),
u^T Σ u ≥ e^{−2(C+δ)} λ (Tr Σ / Tr H) (15)
holds for any positive eigenvalue λ and its corresponding unit eigenvector u of Hessian H .
As a direct corollary of Proposition 5, for such neural networks, the second condition Eq. (12) in Proposition 3 holds in a very loose sense.
Therefore, based on the discussion of the population loss around the true parameters and of one-hidden-layer neural networks with fixed output layer parameters, and given the ill-conditioning of H due to the over-parameterization of modern deep networks, Proposition 3 implies that the noise structure of SGD helps escape from sharp minima much faster than dynamics with isotropic noise, and converge to flatter solutions with high probability. These flat minima typically generalize well (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017; Neyshabur et al., 2017; Wu et al., 2017). Thus, we attribute the better generalization performance of SGD compared to GD, GLD and other dynamics with isotropic noise to these properties (Hoffer et al., 2017; Goyal et al., 2017; Keskar et al., 2017).
In the following, we conduct a series of experiments to systematically verify our understanding of the escaping behavior and its regularization effects for different optimization dynamics.
5 EXPERIMENTS
To better understand how the behavior of anisotropic noise differs from that of isotropic noise, we introduce dynamics with different kinds of noise structure for our empirical study, as shown in Table 1.
Table 1: Compared dynamics defined in Eq. (3). For GLD dynamic, GLD diagonal, GLD Hessian and GLD 1st eigvec(H), σ_t is adjusted to make σ_t ε_t share the same expected norm as that of SGD. For GLD leading, σ_t is the same as in SGD. Note that GLD 1st eigvec(H) achieves the best escaping efficiency, as our indicator suggests.
- SGD: ε_t ∼ N(0, Σ_t^{sgd}); Σ_t^{sgd} is defined as in Eq. (1), and σ_t = η_t / √m.
- GLD constant: ε_t ∼ N(0, I); σ_t is a tunable constant.
- GLD dynamic: ε_t ∼ N(0, I); σ_t is adjusted to make σ_t ε_t share the same expected norm as that of SGD.
- GLD diagonal: ε_t ∼ N(0, diag(Σ_t)); the covariance diag(Σ_t) is the diagonal of the covariance of the SGD noise.
- GLD leading: ε_t ∼ N(0, Σ̃_t); Σ̃_t = Σ_{i=1}^k γ_i v_i v_i^T, where γ_i, v_i are the first k leading eigenvalues and corresponding eigenvectors of the covariance of the SGD noise (a low-rank approximation of Σ_t^{sgd}).
- GLD Hessian: ε_t ∼ N(0, H̃_t); H̃_t is a low-rank approximation of the Hessian of the loss L(θ) by its first k leading eigenvalues and corresponding eigenvectors.
- GLD 1st eigvec(H): ε_t ∼ N(0, λ_1 u_1 u_1^T); λ_1, u_1 are the maximal eigenvalue and its corresponding unit eigenvector of the Hessian of the loss L(θ_t).
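For readers who wish to reproduce the comparison, the following minimal sketch (ours) shows one way to draw the noise vector ε_t for each dynamics in Table 1, given an estimated SGD noise covariance Sigma_sgd and a Hessian H; the scaling σ_t needed to satisfy Eq. (6) is left to the caller.

```python
import numpy as np

def sample_noise(kind, Sigma_sgd, H, k=20, rng=None):
    """Draw one noise vector eps_t for a dynamics in Table 1 (the scaling sigma_t is omitted)."""
    rng = rng or np.random.default_rng()
    D = Sigma_sgd.shape[0]
    if kind in ("GLD constant", "GLD dynamic"):
        return rng.standard_normal(D)                               # N(0, I)
    if kind == "GLD diagonal":
        return np.sqrt(np.diag(Sigma_sgd)) * rng.standard_normal(D)
    if kind == "GLD leading":                                       # rank-k approximation of Sigma_sgd
        vals, vecs = np.linalg.eigh(Sigma_sgd)
        vals, vecs = np.clip(vals[-k:], 0, None), vecs[:, -k:]
        return vecs @ (np.sqrt(vals) * rng.standard_normal(k))
    if kind == "GLD Hessian":                                       # rank-k approximation of H
        vals, vecs = np.linalg.eigh(H)
        vals, vecs = np.clip(vals[-k:], 0, None), vecs[:, -k:]
        return vecs @ (np.sqrt(vals) * rng.standard_normal(k))
    if kind == "GLD 1st eigvec(H)":                                 # N(0, lambda1 u1 u1^T)
        vals, vecs = np.linalg.eigh(H)
        return np.sqrt(max(vals[-1], 0.0)) * vecs[:, -1] * rng.standard_normal()
    raise ValueError(kind)
```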
5.1 TWO-DIMENSIONAL TOY EXAMPLE
We design a 2-D toy example L(w1, w2) with two basins, a small one and a large one, corresponding to a sharp and flat minima, (1, 1) and (−1,−1), respectively, both of which are global minima.
Please refer to the Appendix for the detailed construction. We initialize the dynamics of interest at the sharp minimum (w_1, w_2) = (1, 1), and run them to study their behavior escaping from this sharp minimum.
To explicitly control the noise magnitude, we only conduct experiments on GD, GLD const, GLD diag, GLD leading (with k = 2 = D in Table 1, i.e., the exact covariance of the SGD noise), GLD Hessian (k = 2) and GLD 1st eigvec(H). We adjust σ_t in each dynamics to force their noise to share the same expected squared norm as defined in Eq. (6). Figure 1(a) shows the trajectories of the dynamics escaping from the sharp minimum (1, 1) towards the flat one (−1, −1), while Figure 1(b) presents the success rate of escaping for each dynamics over 100 repeated experiments.
As shown in Figure 1, GLD 1st eigvec(H) achieves the highest success rate, indicating the fastest escaping speed from the sharp minimum. The dynamics with anisotropic noise well aligned with the Hessian, including GLD 1st eigvec(H), GLD Hessian and GLD leading, greatly outperform GD and GLD const with isotropic noise, as well as GLD diag, whose noise is poorly aligned with the Hessian. These experiments are consistent with our theoretical analysis of the Ornstein-Uhlenbeck process in Propositions 2 and 3, demonstrating the benefits of anisotropic noise for escaping from sharp minima.
5.2 ONE HIDDEN LAYER NEURAL NETWORK WITH FIXED OUTPUT LAYER PARAMETERS
We empirically show that in one-hidden-layer neural networks with fixed output layer parameters, the anisotropic noise induced by SGD indeed helps escape from sharp minima more efficiently than isotropic noise. Three networks are trained to binary-classify 1,000 linearly separable two-dimensional points. The number of hidden nodes for each network varies in {20, 200, 2000}. We plot the empirical indicator Tr(HΣ) in Figure 2. We can easily observe that as the number of hidden nodes increases, the ratio Tr(HΣ)/Tr(HΣ̄) grows significantly, which is consistent with Eq. (13) in Proposition 3.
5.3 PRACTICAL DATASETS
In this part, we conduct a series of experiments in real deep learning scenarios to demonstrate the behavior of SGD noise and its implicit regularization effects. We construct a noisy training set based on the FashionMNIST dataset1. Concretely, the training set consists of 1000 images with correct labels and another 200 images with random labels. All the test data have clean labels. A small LeNet-like network is used so that the spectrum decomposition of the
1https://github.com/zalandoresearch/fashion-mnist
gradient covariance matrix and the Hessian matrix is computationally feasible. The network consists of two convolutional layers and two fully-connected layers, with 11,330 parameters in total.
We first run standard gradient descent for 3000 iterations to arrive at parameters θ∗_GD near the global minima with near-zero training loss and 100% training accuracy, which are typically sharp minima that generalize poorly (Neyshabur et al., 2017). All other compared methods are then initialized with θ∗_GD and run with the same learning rate η_t = 0.07 and the same batch size m = 20 (if needed) for a fair comparison2.
Verification that SGD noise satisfies the conditions in Proposition 3. To see whether the noise of SGD in a real deep learning setting satisfies the two conditions in Proposition 3, we run the SGD optimizer initialized from θ∗_GD, i.e., the sharp minimum found by GD. Figure 3(a) shows the first 400 eigenvalues of the Hessian at θ∗_GD, from which we see that the 140th eigenvalue has already decayed to about 1% of the first eigenvalue. Note that the Hessian H ∈ R^{D×D}, D = 11330; thus H around θ∗_GD approximately meets the ill-conditioning requirement in Proposition 3. Figure 3(b) shows the projection coefficient estimated by â = (u_1^T Σ u_1 · Tr H) / (λ_1 Tr Σ) along the trajectory of SGD. The plot indicates that the projection coefficient is of a reasonable scale compared to D^{2d−1}, thus satisfying the second condition in Proposition 3. Therefore, Proposition 3 ensures that SGD escapes from the minimum θ∗_GD faster than GLD by a factor of order O(D^{2d−1}), as shown in Figure 3(c). An interesting observation is that in the later stage of SGD optimization, Tr(HΣ) becomes significantly (10^7 times) smaller than in the beginning stage, implying that SGD has already converged to minima that are almost impossible to escape from. This phenomenon demonstrates the reasonability of employing Tr(HΣ) as an empirical indicator of escaping efficiency.
[Figure 3: (b) the estimated projection coefficient â = (u_1^T Σ u_1 · Tr H) / (λ_1 Tr Σ) along the SGD trajectory, as shown in Proposition 3; (c) Tr(H_t Σ_t) versus Tr(H_t Σ̄_t) during SGD optimization initialized from θ∗_GD, where Σ̄_t = (Tr Σ_t / D) I denotes the isotropic noise with the same expected squared norm as the SGD noise.]
Behaviors of different dynamics escaping from minima and the generalization effects. To compare the escaping behaviors and generalization performance of the different dynamics, we run each of them initialized from the sharp minimum θ∗_GD found by GD. The settings for each compared method are as follows. The hyperparameter σ^2 for GLD const has been tuned to be optimal (σ = 0.001) by grid search. For GLD leading, we set k = 20 to balance computational cost and approximation accuracy. As for GLD Hessian, to reduce the expensive evaluation of such a huge Hessian in each iteration, we set k = 20 and update the Hessian every 10 iterations. We adjust σ_t in GLD dynamic, GLD Hessian and GLD 1st eigvec(H) to guarantee that they share the same expected squared noise norm defined in Eq. (6) as that of SGD. We measure the expected sharpness of different minima as E_{ν∼N(0, δ^2 I)}[L(θ + ν)] − L(θ), as defined in (Neyshabur et al., 2017, Eq. (7)). The results are shown in Figure 4.
As shown in Figure 4, SGD, GLD 1st eigvec(H), GLD leading and GLD Hessian successfully escape from the sharp minima found by GD, while GLD, GLD dynamic and GLD diag are trapped in the minima. This demonstrates that the methods with anisotropic noise “aligned” with loss curvature can help to find flatter minima that generalize well.
We also provide experiments on standard CIFAR-10 with VGG11 in Appendix.
2In fact, in our experiments, we tested equally spaced learning rates in the range [0.01, 0.1], and the final results are consistent with each other.
6 CONCLUSION
We theoretically investigate a general optimization dynamics with unbiased noise, which unifies various existing optimization methods, including SGD. We provide novel results on the behavior of escaping from minima and its regularization effects. A novel indicator is derived for characterizing the escaping efficiency. Based on this indicator, two conditions are established for showing which type of noise structure is superior to isotropic noise in terms of escaping. We then analyze the noise structure of SGD in deep learning and find that it indeed satisfies the two conditions, thus explaining the widely known observation that SGD can escape from sharp minima efficiently toward flat minima that generalize well. Various experimental evidence supports our arguments on the behavior of SGD and its effects on generalization. Our study also shows that isotropic noise helps little for escaping from sharp minima, due to the highly anisotropic nature of the landscape. This indicates that it is not sufficient to analyze SGD by treating it as an isotropic diffusion over the landscape (Zhang et al., 2017; Mou et al., 2017). A better understanding of this out-of-equilibrium behavior (Chaudhari & Soatto, 2017) is needed.
A PROOFS OF PROPOSITIONS IN MAIN PAPER
A.1 PROOF OF PROPOSITION 1
Proof. The "mild smoothness assumptions" refers that Lt = L(θt) ∈ C2. Then the Ito’s lemma holds (Øksendal, 2003).
By Itô's lemma, the SDE of L_t is
dL_t = ( −∇L^T ∇L + (1/2) Tr( Σ_t^{1/2} H_t Σ_t^{1/2} ) ) dt + ∇L^T Σ_t^{1/2} dW_t
     = ( −∇L^T ∇L + (1/2) Tr( H_t Σ_t ) ) dt + ∇L^T Σ_t^{1/2} dW_t.
Taking expectation with respect to the distribution of θt,
dE L_t = E( −∇L^T ∇L + (1/2) Tr(H_t Σ_t) ) dt, (16)
since the expectation of the Brownian motion term is zero. Thus the solution for E L_t is
E L_t = L_0 − ∫_0^t E( ∇L^T ∇L ) dt + ∫_0^t (1/2) E Tr(H_t Σ_t) dt.
A.2 PROOF OF PROPOSITION 2
Proof. Without loss of generality, we assume that L_0 = 0.
For a multivariate Ornstein-Uhlenbeck process, when θ_0 = 0 is a constant, θ_t follows a multivariate Gaussian distribution (Øksendal, 2003).
Consider the change of variables θ → φ(θ, t) = e^{Ht} θ_t. Here, for a symmetric matrix A,
e^A := U diag(e^{λ_1}, . . . , e^{λ_n}) U^T,
where λ_1, . . . , λ_n and U are the eigenvalues and eigenvector matrix of A. Note that with this notation,
d e^{Ht} / dt = H e^{Ht}.
Applying Itô's lemma, we have
dφ(θ_t, t) = e^{Ht} Σ^{1/2} dW_t,
which we can integrate from 0 to t to get
θ_t = 0 + ∫_0^t e^{H(s−t)} Σ^{1/2} dW_s.
The expectation of θ_t is zero, and by Itô's isometry (Øksendal, 2003), the covariance of θ_t is
E θ_t θ_t^T = E[ ∫_0^t e^{H(s−t)} Σ^{1/2} dW_s ( ∫_0^t e^{H(r−t)} Σ^{1/2} dW_r )^T ] = E[ ∫_0^t e^{H(s−t)} Σ^{1/2} Σ^{1/2} e^{H(s−t)} ds ]
= E[ ∫_0^t e^{H(s−t)} Σ e^{H(s−t)} ds ] = ∫_0^t e^{H(s−t)} Σ e^{H(s−t)} ds   (since H and Σ are both constant).
Thus,
E L(θ_t) = (1/2) E Tr( θ_t^T H θ_t ) = (1/2) Tr( H E θ_t θ_t^T )
= (1/2) ∫_0^t Tr( H e^{H(s−t)} Σ e^{H(s−t)} ) ds
= (1/2) ∫_0^t Tr( e^{H(s−t)} H Σ e^{H(s−t)} ) ds   (since H is symmetric)
= (1/2) ∫_0^t Tr( e^{2H(s−t)} H Σ ) ds
= (1/2) Tr( (1/2) H^{−1} ( I − e^{−2Ht} ) H Σ )
= (1/4) Tr( ( I − e^{−2Ht} ) Σ ).
The approximation in Eq. (10) then follows by a Taylor expansion of e^{−2Ht}.
A.3 PROOF OF PROPOSITION 3
Proof. Firstly, Tr(HΣ) has the decomposition Tr(HΣ) = Σ_{i=1}^D λ_i u_i^T Σ u_i.
Secondly, compute Tr(HΣ) and Tr(HΣ̄) respectively,
Tr(HΣ) ≥ λ_1 u_1^T Σ u_1 ≥ a λ_1^2 (Tr Σ / Tr H),
Tr(HΣ̄) = (Tr Σ / D) Tr H,
and bound their quotient,
Tr(HΣ) / Tr(HΣ̄) ≥ a λ_1^2 D / (Tr H)^2 ≥ a λ_1^2 D / ( k λ_1 + (D − k) D^{−d} λ_1 )^2 = O( a D^{2d−1} ). (17)
The proof is finished.
A.4 PROOF OF PROPOSITION 4
Proof. Firstly compute the gradients and Hessian of φ,
∂φ/∂f = e^f / (1 + e^f) − y = { e^f / (1 + e^f) > 0, if y = 0;  −1 / (1 + e^f) < 0, if y = 1 },
∂^2φ/∂f^2 = e^f / (1 + e^f)^2.
And note the Gauss-Newton decomposition for functions of the form L = φ ∘ f,
H = E_{(x,y)} ∂^2 ℓ((x,y); θ)/∂θ^2 = E_{(x,y)} (∂^2φ/∂f^2) (∂f/∂θ)(∂f/∂θ)^T + E_{(x,y)} (∂φ/∂f) ∂^2 f/∂θ^2.
Since the output layer parameters of f are fixed and the activation functions are piece-wise linear, f(x; θ) is a piece-wise linear function of its parameters θ. Therefore ∂^2 f/∂θ^2 = 0 a.e., and H = E_{(x,y)} (∂^2φ/∂f^2) (∂f/∂θ)(∂f/∂θ)^T.
It is easy to check that e^{−C} (∂φ/∂f)^2 ≤ ∂^2φ/∂f^2 ≤ e^{C} (∂φ/∂f)^2. Thus,
H = E_{(x,y)} (∂^2φ/∂f^2) (∂f/∂θ)(∂f/∂θ)^T ⪯ E_{(x,y)} e^{C} (∂φ/∂f)^2 (∂f/∂θ)(∂f/∂θ)^T = E_{(x,y)} e^{C} ( (∂φ/∂f)(∂f/∂θ) ) ( (∂φ/∂f)(∂f/∂θ) )^T = e^{C} F,
H = E_{(x,y)} (∂^2φ/∂f^2) (∂f/∂θ)(∂f/∂θ)^T ⪰ E_{(x,y)} e^{−C} (∂φ/∂f)^2 (∂f/∂θ)(∂f/∂θ)^T = e^{−C} F.
A.5 PROOF OF PROPOSITION 5
Proof. For simplicity, we define g := ∇ℓ, g_0 := ∇L = E∇ℓ. The gradient covariance and the Fisher information have the following relationship,
F = E g g^T = E (g_0 + ε)(g_0 + ε)^T = g_0 g_0^T + E ε ε^T = g_0 g_0^T + Σ.
Applying Taylor's expansion to g_0(θ), g_0(θ) = g_0(θ*) + H(θ*)(θ − θ*) + o(θ − θ*) = H(θ*)(θ − θ*) + o(θ − θ*). Hence,
‖g_0(θ)‖_2^2 ≤ ‖H‖_2^2 ‖θ − θ*‖_2^2 + o(‖θ − θ*‖_2^2).
Therefore, with the condition ‖θ − θ*‖_2 ≤ √(δ u^T F u) / ‖H‖_2, we have
‖g_0(θ)‖_2^2 ≤ δ u^T F u + o(|δ|). Thus,
u^T Σ u / Tr Σ = ( u^T F u − u^T g_0 g_0^T u ) / ( Tr F − Tr(g_0 g_0^T) ) ≥ ( u^T F u − ‖g_0‖_2^2 ) / ( Tr F − ‖g_0‖_2^2 ) ≥ ( u^T F u − ‖g_0‖_2^2 ) / Tr F
= ( u^T F u / Tr F ) ( 1 − ‖g_0‖_2^2 / (u^T F u) ) ≥ ( u^T F u / Tr F ) ( 1 − δ − o(|δ|) ) ≥ ( u^T F u / Tr F ) e^{−2δ},
for δ small enough.
On the other hand, Proposition 4 indicates that e^{−C} F ⪯ H ⪯ e^{C} F, which means, ∀u, u^T( e^{C} F − H )u ≥ 0
and Tr( H − e^{−C} F ) ≥ 0.
Thus u^T F u / Tr F ≥ u^T ( e^{−C} H ) u / Tr( e^{C} H ).
Therefore, for λ, u being a positive eigenvalue and the corresponding unit eigenvector of H , we have
u^T F u / Tr F ≥ e^{−2C} λ / Tr H,   and hence   u^T Σ u / Tr Σ ≥ ( u^T F u / Tr F ) e^{−2δ} ≥ e^{−2(C+δ)} λ / Tr H.
B ADDITIONAL EXPERIMENTS
B.1 DOMINANCE OF NOISE OVER GRADIENT
Figure 5 shows the comparison between the gradient mean and the expected norm of the noise during training with SGD. The dataset and model are the same as in the FashionMNIST experiments in the main paper, i.e., as in Section C.2. From Figure 5, we see that in the later stage of SGD optimization, the noise indeed dominates the gradient.
These experiments are implemented by TensorFlow 1.5.0.
B.2 THE FIRST 50 ITERATIONS OF FASHIONMNIST EXPERIMENTS IN MAIN PAPER
Figure 6 shows the first 50 iterations of the FashionMNIST experiments in the main paper. We observe that SGD, GLD 1st eigvec(H), GLD Hessian and GLD leading successfully escape from the sharp minima found by GD, while GLD diag, GLD dynamic, GLD const and GD do not.
These experiments are implemented by TensorFlow 1.5.0.
B.3 ADDITIONAL EXPERIMENTS ON STANDARD CIFAR-10 AND VGG11
Dataset Standard CIFAR-10 dataset without data augmentation.
Model Standard VGG11 network without any regularizations including dropout, batch normalization, weight decay, etc. The total number of parameters of this network is 9, 750, 922.
Training details Learning rates η_t = 0.05 are fixed for all optimizers, tuned for the best generalization performance of GD. The batch size of SGD is m = 100. The noise std of GLD constant is σ = 10^{−3}, tuned to be the best. Due to computational limitations, we only conduct experiments on GD, GLD const, GLD dynamic, GLD diag and SGD.
Estimation of Sharpness The sharpness is estimated by
(1/M) Σ_{j=1}^M L(θ + ν_j) − L(θ), ν_j ∼ N(0, δ^2 I),
with M = 100 and δ = 0.01.
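A minimal sketch of this Monte-Carlo sharpness estimate (ours; loss_fn and theta are assumed to come from the user's model and are not defined in the paper):

```python
import numpy as np

def expected_sharpness(loss_fn, theta, delta=0.01, M=100, rng=None):
    """Estimate E_{nu ~ N(0, delta^2 I)}[L(theta + nu)] - L(theta) by Monte Carlo."""
    rng = rng or np.random.default_rng()
    base = loss_fn(theta)
    perturbed = [loss_fn(theta + delta * rng.standard_normal(theta.shape)) for _ in range(M)]
    return float(np.mean(perturbed) - base)
```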
Experiments Similar experiments to those in the main paper are conducted for CIFAR-10 and VGG11, as shown in Figure 7. The observations and conclusions are consistent with those in the main paper.
These experiments are implemented by PyTorch 0.3.0.
C DETAILED SETUPS FOR EXPERIMENTS IN MAIN PAPER
C.1 TWO-DIMENSIONAL TOY EXAMPLE
Loss Surface The loss surface L(w_1, w_2) is constructed by
s_1 = w_1 − 1 − x_1, s_2 = w_2 − 1 − x_2,
ℓ(w_1, w_2; x_1, x_2) = min{ 10 (s_1 cos θ − s_2 sin θ)^2 + 100 (s_1 cos θ + s_2 sin θ)^2, (w_1 − x_1 + 1)^2 + (w_2 − x_2 + 1)^2 },
L(w_1, w_2) = (1/N) Σ_{k=1}^N ℓ(w_1, w_2; x_1^k, x_2^k),
where
θ = π/4, N = 100, x^k ∼ N(0, Σ), Σ = ( cos θ, sin θ; −sin θ, cos θ ).
Note that Σ is the inverse of the Hessian of the quadratic form generating the sharp minimum, and the 3-dimensional plot of the loss surface is shown in Figure 8.
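For completeness, a direct NumPy transcription of this construction (our sketch; it follows the formulas above, with the data points X assumed to be drawn as described):

```python
import numpy as np

def toy_loss(w1, w2, X, theta=np.pi / 4):
    """Average loss L(w1, w2) over data points X of shape (N, 2), following Section C.1."""
    s1 = w1 - 1.0 - X[:, 0]
    s2 = w2 - 1.0 - X[:, 1]
    sharp = 10 * (s1 * np.cos(theta) - s2 * np.sin(theta)) ** 2 \
          + 100 * (s1 * np.cos(theta) + s2 * np.sin(theta)) ** 2   # sharp basin near (1, 1)
    flat = (w1 - X[:, 0] + 1) ** 2 + (w2 - X[:, 1] + 1) ** 2        # flat basin near (-1, -1)
    return np.mean(np.minimum(sharp, flat))
```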
Hyperparameters All learning rates are equal to 0.005. All dynamics concerned are tuned to share the same expected square norm, 0.01. The number of iteration during one run is 500.
These experiments are implemented by PyTorch 0.3.0.
C.2 FASHIONMNIST WITH CORRUPTED LABELS
Dataset Our training set consists of 1200 examples randomly sampled from the original FashionMNIST training set, and we further assign random (wrong) labels to 200 of them. The test set is the same as the original FashionMNIST test set.
Model Network architecture: input ⇒ conv1 ⇒ max_pool ⇒ ReLU ⇒ conv2 ⇒ max_pool ⇒ ReLU ⇒ fc1 ⇒ ReLU ⇒ fc2 ⇒ output. Both convolutional layers use 5 × 5 kernels with 10 channels and no padding. The number of hidden units between the fully connected layers is 50. The total number of parameters of this network is 11,330.
[Figure 8: 3-D surface plot of the toy loss L(w_1, w_2) over (w_1, w_2) ∈ [−1.5, 1.5]^2; the vertical axis is the loss value (roughly 2 to 12).]
Training details
Estimation of Sharpness The sharpness is estimated by
(1/M) Σ_{j=1}^M L(θ + ν_j) − L(θ), ν_j ∼ N(0, δ^2 I),
with M = 1,000 and δ = 0.01.
These experiments are implemented by TensorFlow 1.5.0. | 1. What is the focus of the paper regarding the effect of anisotropic noise on SGD's ability to escape local optima?
2. What is the novel aspect of the proposed approach compared to prior works?
3. Do you have any concerns or questions about the assumption in Proposition 3 (2)?
4. How does the reviewer assess the significance and quality of the paper's contribution? | Review | Review
The authors studied the effect of the anisotropic noise of SGD on the algorithm’s ability to escape from local optima. To this end, the authors depart from the established approximation of SGD in the vicinity of an optimum as a continuous-time Ornstein-Uhlenbeck process. Furthermore, the authors argue that in certain deep learning models, the anisotropic noise indeed leads to a good escaping from local optima.
Proposition 3 (2) seems to assume that the eigenvectors of the noise-covariance of SGD are aligned with the eigenvectors of the Hessian. Did I understand this correctly and is this sufficient? Maybe this is actually not even necessary, since the stationary distribution for the multivariate Ornstein-Uhlenbeck process can always be calculated (Gardiner; Mandt, Hoffman, and Blei 2015–2017)
I think this is a decent contribution. |
ICLR | Title
A2BCD: Asynchronous Acceleration with Optimal Complexity
Abstract
In this paper, we propose the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD). We prove A2BCD converges linearly to a solution of the convex minimization problem at the same rate as NU_ACDM, so long as the maximum delay is not too large. This is the first asynchronous Nesterov-accelerated algorithm that attains any provable speedup. Moreover, we then prove that these algorithms both have optimal complexity. Asynchronous algorithms complete much faster iterations, and A2BCD has optimal complexity. Hence we observe in experiments that A2BCD is the top-performing coordinate descent algorithm, converging up to 4 − 5× faster than NU_ACDM on some data sets in terms of wall-clock time. To motivate our theory and proof techniques, we also derive and analyze a continuous-time analogue of our algorithm and prove it converges at the same rate.
1 Introduction
In this paper, we propose and prove the convergence of the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD), the first asynchronous Nesterovaccelerated algorithm that achieves optimal complexity. No previous attempts have been able to prove a speedup for asynchronous Nesterov acceleration. We aim to find the minimizer x∗ of the unconstrained minimization problem:
min x∈Rd
f(x) = f ( x(1), . . . , x(n) ) (1.1)
where f is σ-strongly convex for σ > 0 with L-Lipschitz gradient ∇f = (∇1f, . . . ,∇nf). x ∈ Rd is composed of coordinate blocks x(1), . . . , x(n). The coordinate blocks of the gradient ∇if are assumed Li-Lipschitz with respect to the ith block. That is, ∀x, h ∈ Rd:
‖∇_i f(x + P_i h) − ∇_i f(x)‖ ≤ L_i ‖h‖ (1.2) where P_i is the projection onto the ith block of R^d. Let L̄ := (1/n) Σ_{i=1}^n L_i be the average block Lipschitz constant. These conditions on f are assumed throughout this whole paper. Our algorithm can also be applied to non-strongly convex objectives (σ = 0) or non-smooth objectives using the black box reduction techniques proposed in Allen-Zhu & Hazan (2016). Hence we consider only
the coordinate smooth, strongly-convex case. Our algorithm can also be applied to the convex regularized ERM problem via the standard dual transformation (see for instance Lin et al. (2014)):
f(x) = 1 n n∑ i=1 fi(〈ai, x〉) + λ 2 ‖x‖ 2 (1.3)
Hence A2BCD can be used as an asynchronous Nesterov-accelerated finite-sum algorithm. Coordinate descent methods, in which a chosen coordinate block ik is updated at every iteration, are a popular way to solve equation 1.1. Randomized block coordinate descent (RBCD, Nesterov (2012)) updates a uniformly randomly chosen coordinate block ik with a gradient-descent-like step: xk+1 = xk − (1/Lik )∇ikf(xk). The complexity K( ) of an algorithm is defined as the number of iterations required to decrease the error E(f(xk)−f(x∗)) to less than (f(x0)− f(x∗)). Randomized coordinate descent has a complexity of K( ) = O(n(L̄/σ) ln(1/ )). Using a series of averaging and extrapolation steps, accelerated RBCD Nesterov (2012) improves RBCD’s iteration complexity K( ) to O(n √ L̄/σ ln(1/ )), which leads to much faster convergence
when L̄σ is large. This rate is optimal when all Li are equal Lan & Zhou (2015). Finally, using a special probability distribution for the random block index ik, the non-uniform accelerated coordinate descent method Allen-Zhu et al. (2015) (NU_ACDM) can further decrease the complexity to O( ∑n i=1 √ Li/σ ln(1/ )), which can be up to √ n times faster than accelerated RBCD, since some Li can be significantly smaller than L. NU_ACDM is the current state-of-the-art coordinate descent algorithm for solving equation 1.1. Our A2BCD algorithm generalizes NU_ACDM to the asynchronous-parallel case. We solve equation 1.1 with a collection of p computing nodes that continually read a shared-access solution vector y into local memory then compute a block gradient ∇if , which is used to update shared solution vectors (x, y, v). Proving convergence in the asynchronous case requires extensive new technical machinery. A traditional synchronous-parallel implementation is organized into rounds of computation: Every computing node must complete an update in order for the next iteration to begin. However, this synchronization process can be extremely costly, since the lateness of a single node can halt the entire system. This becomes increasingly problematic with scale, as differences in node computing speeds, load balancing, random network delays, and bandwidth constraints mean that a synchronous-parallel solver may spend more time waiting than computing a solution. Computing nodes in an asynchronous solver do not wait for others to complete and share their updates before starting the next iteration. They simply continue to update the solution vectors with the most recent information available, without any central coordination. This eliminates costly idle time, meaning that asynchronous algorithms can be much faster than traditional ones, since they have much faster iterations. For instance, random network delays cause asynchronous algorithms to complete iterations Ω(ln(p)) time faster than synchronous algorithms at scale. This and other factors that influence the speed of iterations are discussed in Hannah & Yin (2017a). However, since many iterations may occur between the time that a node reads the solution vector, and the time that its computed update is applied, effectively the solution vector is being updated with outdated information. At iteration k, the block gradient ∇ikf is computed at a delayed iterate ŷk defined as1:
ŷ_k = ( (y_{k−j(k,1)})_{(1)}, . . . , (y_{k−j(k,n)})_{(n)} ) (1.4)
1Every coordinate can be outdated by a different amount without significantly changing the proofs.
for delay parameters j(k, 1), . . . , j(k, n) ∈ N. Here j(k, i) denotes how many iterations out of date coordinate block i is at iteration k. Different blocks may be out of date by different amounts, which is known as an inconsistent read. We assume2 that j(k, i) ≤ τ for some constant τ <∞. Asynchronous algorithms were proposed in Chazan & Miranker (1969) to solve linear systems. General convergence results and theory were developed later in Bertsekas (1983); Bertsekas & Tsitsiklis (1997); Tseng et al. (1990); Luo & Tseng (1992; 1993); Tseng (1991) for partially and totally asynchronous systems, with essentially-cyclic block sequence ik. More recently, there has been renewed interest in asynchronous algorithms with random block coordinate updates. Linear and sublinear convergence results were proven for asynchronous RBCD Liu & Wright (2015); Liu et al. (2014); Avron et al. (2014), and similar was proven for asynchronous SGD Recht et al. (2011), and variance reduction algorithms Reddi et al. (2015); Leblond et al. (2017); Mania et al. (2015); Huo & Huang (2016), and primal-dual algorithms Combettes & Eckstein (2018). There is also a rich body of work on asynchronous SGD. In the distributed setting, Zhou et al. (2018) showed global convergence for stochastic variationally coherent problems even when the delays grow at a polynomial rate. In Lian et al. (2018), an asynchronous decentralized SGD was proposed with the same optimal sublinear convergence rate as SGD and linear speedup with respect to the number of workers. In Liu et al. (2018), authors obtained an asymptotic rate of convergence for asynchronous momentum SGD on streaming PCA, which provides insight into the tradeoff between asynchrony and momentum. In Dutta et al. (2018), authors prove convergence results for asynchronous SGD that highlight the tradeoff between faster iterations and iteration complexity. Further related work is discussed in Section 4.
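To make the inconsistent-read model of Eq. (1.4) concrete, here is a minimal sketch (ours, not the authors' implementation) of how ŷ_k can be assembled from a bounded history of past iterates, given per-block delays j(k, i) ≤ τ:

```python
import numpy as np

def inconsistent_read(history, delays):
    """Assemble y_hat_k from per-block delays, as in Eq. (1.4).

    history: list of past iterates [y_{k-tau}, ..., y_{k-1}, y_k], each an (n, block_dim) array
    delays:  length-n integer sequence with 0 <= j(k, i) <= tau
    """
    y_hat = np.empty_like(history[-1])
    for i, j in enumerate(delays):
        y_hat[i] = history[-1 - j][i]      # block i is j(k, i) iterations out of date
    return y_hat
```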
1.1 Summary of Contributions
In this paper, we prove that A2BCD attains NU_ACDM’s state-of-the-art iteration complexity to highest order for solving equation 1.1, so long as delays are not too large (see Section 2). The proof is very different from that of Allen-Zhu et al. (2015), and involves significant technical innovations and complexity related to the analysis of asynchronicity. We also prove that A2BCD (and hence NU_ACDM) has optimal complexity to within a constant factor over a fairly general class of randomized block coordinate descent algorithms (see Section 2.1). This extends results in Lan & Zhou (2015) to asynchronous algorithms with Li not all equal. Since asynchronous algorithms complete faster iterations, and A2BCD has optimal complexity, we expect A2BCD to be faster than all existing coordinate descent algorithms. We confirm with numerical experiments that A2BCD is the current fastest coordinate descent algorithm (see Section 5). We are only aware of one previous and one contemporaneous attempt at proving convergence results for asynchronous Nesterov-accelerated algorithms. However, the first is not accelerated and relies on extreme assumptions, and the second obtains no speedup. Therefore, we claim that our results are the first-ever analysis of asynchronous Nesterov-accelerated algorithms that attains a speedup. Moreover, our speedup is optimal for delays not too large3. The work of Meng et al. claims to obtain square-root speedup for an asynchronous accelerated SVRG. In the case where all component functions have the same Lipschitz constant L, the complexity they obtain reduces to (n+ κ) ln(1/ ) for κ = O ( τn2 ) (Corollary 4.4). Hence authors do not even obtain accelerated rates. Their convergence condition is τ < 14∆1/8 for sparsity parameter ∆. Since the dimension d satisfies d ≥ 1∆ , they require d ≥ 2
16τ8. So τ = 20 requires dimension d > 1015. 2This condition can be relaxed however by techniques in Hannah & Yin (2017b); Sun et al. (2017); Peng
et al. (2016c); Hannah & Yin (2017a) 3Speedup is defined precisely in Section 2
In a contemporaneous preprint, authors in Fang et al. (2018) skillfully devised accelerated schemes for asynchronous coordinate descent and SVRG using momentum compensation techniques. Although their complexity results have the improved √ κ dependence on the condition number, they do not prove any speedup. Their complexity is τ times larger than the serial complexity. Since τ is necessarily greater than p, their results imply that adding more computing nodes will increase running time. The authors claim that they can extend their results to linear speedup for asynchronous, accelerated SVRG under sparsity assumptions. And while we think this is quite likely, they have not yet provided proof. We also derive a second-order ordinary differential equation (ODE), which is the continuous-time limit of A2BCD (see Section 3). This extends the ODE found in Su et al. (2014) to an asynchronous accelerated algorithm minimizing a strongly convex function. We prove this ODE linearly converges to a solution with the same rate as A2BCD’s, without needing to resort to the restarting techniques. The ODE analysis motivates and clarifies the our proof strategy of the main result.
2 Main results
We should consider functions f where it is efficient to calculate blocks of the gradient, so that coordinate-wise parallelization is efficient. That is, the function should be “coordinate friendly” Peng et al. (2016b). This is a very wide class that includes regularized linear regression, logistic regression, etc. The L2-regularized empirical risk minimization problem is not coordinate friendly in general, however the equivalent dual problem is, and hence can be solved efficiently by A2BCD (see Lin et al. (2014), and Section 5). To calculate the k + 1’th iteration of the algorithm from iteration k, we use only one block of the gradient ∇ikf . We assume that the delays j(k, i) are independent of the block sequence ik, but otherwise arbitrary (This is a standard assumption found in the vast majority of papers, but can be relaxed Sun et al. (2017); Leblond et al. (2017); Cannelli et al. (2017)). Definition 1. Asynchronous Accelerated Randomized Block Coordinate Descent (A2BCD). Let f be σ-strongly convex, and let its gradient ∇f be L-Lipschitz with block coordinate Lipschitz parameters Li as in equation 1.2. We define the condition number κ = L/σ, and let L = mini Li. Using these parameters, we sample ik in an independent and identically distributed (IID) fashion according to
P[i_k = j] = L_j^{1/2} / S, j ∈ {1, . . . , n}, for S = Σ_{i=1}^n L_i^{1/2}. (2.1)
Let τ be the maximum asynchronous delay. We define the dimensionless asynchronicity parameter ψ, which is proportional to τ , and quantifies how strongly asynchronicity will affect convergence:
ψ = 9 ( S−1/2L−1/2L3/4κ1/4 ) × τ (2.2)
We use the above system parameters and ψ to define the coefficients α, β, and γ via eqs. (2.3) to (2.5). Hence A2BCD algorithm is defined via the iterations: eqs. (2.6) to (2.8).
α := ( 1 + (1 + ψ) σ^{−1/2} S )^{−1} (2.3)
β := 1 − (1 − ψ) σ^{1/2} S^{−1} (2.4)
h := 1 − (1/2) σ^{1/2} L^{−1/2} ψ. (2.5)
y_k = α v_k + (1 − α) x_k, (2.6)   x_{k+1} = y_k − h L_{i_k}^{−1} ∇_{i_k} f(ŷ_k), (2.7)   v_{k+1} = β v_k + (1 − β) y_k − σ^{−1/2} L_{i_k}^{−1/2} ∇_{i_k} f(ŷ_k). (2.8)
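For reference, a serial single-thread sketch (ours; it illustrates the update rule only, not the authors' shared-memory implementation) of the coefficients in eqs. (2.3)-(2.5) and one iteration of eqs. (2.6)-(2.8). We read the L carrying the exponent −1/2 in Eq. (2.2) as min_i L_i; this reading is an assumption.

```python
import numpy as np

def a2bcd_coefficients(sigma, L, L_blocks, tau):
    """Coefficients of Definition 1 (reading the first L in Eq. (2.2) as min_i L_i, an assumption)."""
    L_blocks = np.asarray(L_blocks, dtype=float)
    S = np.sum(np.sqrt(L_blocks))
    psi = 9.0 * S ** -0.5 * L_blocks.min() ** -0.5 * L ** 0.75 * (L / sigma) ** 0.25 * tau  # Eq. (2.2)
    alpha = 1.0 / (1.0 + (1.0 + psi) * S / np.sqrt(sigma))   # Eq. (2.3)
    beta = 1.0 - (1.0 - psi) * np.sqrt(sigma) / S            # Eq. (2.4)
    h = 1.0 - 0.5 * np.sqrt(sigma / L) * psi                 # Eq. (2.5)
    return alpha, beta, h

def a2bcd_step(x, v, block_grad, sigma, L_blocks, alpha, beta, h, rng):
    """One iteration of eqs. (2.6)-(2.8); block_grad(i, y) returns nabla_i f at a (possibly delayed) y."""
    L_blocks = np.asarray(L_blocks, dtype=float)
    probs = np.sqrt(L_blocks) / np.sum(np.sqrt(L_blocks))    # sampling rule Eq. (2.1)
    i = rng.choice(len(L_blocks), p=probs)
    y = alpha * v + (1.0 - alpha) * x                        # Eq. (2.6)
    g = block_grad(i, y)
    x_new = y.copy()
    x_new[i] -= (h / L_blocks[i]) * g                        # Eq. (2.7), only block i changes
    v_new = beta * v + (1.0 - beta) * y
    v_new[i] -= g / np.sqrt(sigma * L_blocks[i])             # Eq. (2.8)
    return x_new, v_new
```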
See Section A for a discussion of why it is practical and natural to have the gradient ∇ikf(ŷk) to be outdated, while the actual variables xk, yk, vk can be efficiently kept up to date. Essentially it is
because most of the computation lies in computing ∇ikf(ŷk). After this is computed, xk, yk, vk can be updated more-or-less atomically with minimal overhead, meaning that they will always be up to date. However our main results still hold for more general asynchronicity. A natural quantity to consider in asynchronous convergence analysis is the asynchronicity error, a powerful tool for analyzing asynchronous algorithms used in several recent works Peng et al. (2016a); Hannah & Yin (2017b); Sun et al. (2017); Hannah & Yin (2017a). We adapt it and use a weighted sum of the history of the algorithm with decreasing weight as you go further back in time. Definition 2. Asynchronicity error. Using the above parameters, we define:
Ak = τ∑ j=1 cj‖yk+1−j − yk−j‖2 (2.9) for ci = 6 S L1/2κ3/2τ τ∑ j=i ( 1− σ1/2S−1 )i−j−1 ψ−1. (2.10)
Here we define yk = y0 for all k < 0. The determination of the coefficients ci is in general a very involved process of trial and error, intuition, and balancing competing requirements. The algorithm doesn’t depend on the coefficients, however; they are only an analytical tool. We define Ek[X] as the expectation of X conditioned on (x0, . . . , xk), (y0, . . . , yk), (v0, . . . , vk), and (i0, . . . , ik−1). To simplify notation4, we assume that the minimizer x∗ = 0, and that f(x∗) = 0 with no loss in generality. We define the Lyapunov function:
ρk = ‖vk‖2 +Ak + cf(xk) (2.11) for c = 2σ−1/2S−1 ( βα−1(1− α) + 1 ) . (2.12)
We now present this paper’s first main contribution. Theorem 1. Let f be σ-strongly convex with a gradient∇f that is L-Lipschitz with block Lipschitz constants {Li}ni=1. Let ψ defined in equation 2.2 satisfy ψ ≤ 3 7 (i.e. τ ≤ 1 21S
1/2L1/2L−3/4κ−1/4). Then for A2BCD we have:
Ek[ρk+1] ≤ ( 1− (1− ψ)σ1/2S−1 ) ρk.
To obtain E[ρk] ≤ ρ0, it takes KA2BCD( ) iterations for:
KA2BCD( ) = ( σ−1/2S +O(1) ) ln(1/ ) 1− ψ , (2.13)
where O(·) is asymptotic with respect to σ−1/2S →∞, and uniformly bounded.
This result is proven in Section B. A stronger result for Li ≡ L can be proven, but this adds to the complexity of the proof; see Section E for a discussion. In practice, asynchronous algorithms are far more resilient to delays than the theory predicts. τ can be much larger without negatively affecting the convergence rate and complexity. This is perhaps because we are limited to a worst-case analysis, which is not representative of the average-case performance. Allen-Zhu et al. (2015) (Theorem 5.1) shows a linear convergence rate of 1 − 2/ ( 1 + 2σ−1/2S
) for NU_ACDM, which leads to the corresponding iteration complexity of KNU_ACDM( ) =( σ−1/2S +O(1) ) ln(1/ ). Hence, we have:
KA2BCD( ) = 1
1− ψ (1 + o(1))KNU_ACDM( )
4We can assume x∗ = 0 with no loss in generality since we may translate the coordinate system so that x∗ is at the origin. We can assume f(x∗) = 0 with no loss in generality, since we can replace f(x) with f(x)−f(x∗). Without this assumption, the Lyapunov function simply becomes: ‖vk − x∗‖2 +Ak + c(f(xk)− f(x∗)).
When 0 ≤ ψ 1, or equivalently, when τ S1/2L1/2L−3/4κ−1/4, the complexity of A2BCD asymptotically matches that of NU_ACDM. Hence A2BCD combines state-of-the-art complexity with the faster iterations and superior scaling that asynchronous iterations allow. We now present some special cases of the conditions on the maximum delay τ required for good complexity. Corollary 3. Let the conditions of Theorem 1 hold. If all coordinate-wise Lipschitz constants Li are equal (i.e. Li = L1, ∀i), then we have KA2BCD( ) ∼ KNU_ACDM( ) when τ n1/2κ−1/4(L1/L)3/4. If we further assume all coordinate-wise Lipschitz constants Li equal L. Then KA2BCD( ) ∼ KNU_ACDM( ) = KACDM( ), when τ n1/2κ−1/4. Remark 1. Reduction to synchronous case. Notice that when τ = 0, we have ψ = 0, ci ≡ 0 and hence Ak ≡ 0. Thus A2BCD becomes equivalent to NU_ACDM, the Lyapunov function5 ρk becomes equivalent to one found in Allen-Zhu et al. (2015)(pg. 9), and Theorem 1 yields the same complexity.
The maximum delay τ will be a function τ(p) of p, number of computing nodes. Clearly τ ≥ p, and experimentally it has been observed that τ = O(p) Leblond et al. (2017). Let gradient complexity K( , τ) be the number of gradients required for an asynchronous algorithm with maximum delay τ to attain suboptimality . τ(1) = 0, since with only 1 computing node there can be no delay. This corresponds to the serial complexity. We say that an asynchronous algorithm attains a complexity speedup if pK( ,τ(0))K( ,τ(p) is increasing in p. We say it attains linear complexity speedup if pK( ,τ(0)) K( ,τ(p) = Ω(p). In Theorem 1, we obtain a linear complexity speedup (for p not too large), whereas no other prior attempt can attain even a complexity speedup with Nesterov acceleration. In the ideal scenario where the rate at which gradients are calculated increases linearly with p, algorithms that have linear complexity speedup will have a linear decrease in wall-clock time. However in practice, when the number of computing nodes is sufficiently large, the rate at which gradients are calculated will no longer be linear. This is due to many parallel overhead factors including too many nodes sharing the same memory read/write bandwidth, and network bandwidth. However we note that even with these issues, we obtain much faster convergence than the synchronous counterpart experimentally.
2.1 Optimality
NU_ACDM and hence A2BCD are in fact optimal in some sense. That is, among a fairly wide class of coordinate descent algorithms A, they have the best-possible worst-case complexity to highest order. We extend the work in Lan & Zhou (2015) to encompass algorithms are asynchronous and have unequal Li. For a subset S ∈ Rd, we let IC(S) (inconsistent read) denote the set of vectors v whose components are a combination of components of vectors in the set S. That is, v = (v1,1, v2,2, . . . , vd,d) for some vectors v1, v2, . . . , vd ∈ S. Here vi,j denotes the jth component of vector vi. Definition 4. Asynchronous Randomized Incremental Algorithms. Consider the unconstrained minimization problem equation 1.1 for function f satisfying the conditions stated in Section 1. We define the class A as algorithms G on this problem such that: 1. For each parameter set (σ, L1, . . . , Ln, n), G has an associated IID random variable ik with some fixed distribution P[ik] = pi for ∑n i=1 pi = 1.
2. The iterates of A satisfy: xk+1 ∈ span{IC(Xk),∇i0f(IC(X0)),∇i1f(IC(X1)), . . . ,∇ikf(IC(Xk))}
This is a rather general class: xk+1 can be constructed from any inconsistent reading of past iterates IC(Xk), and any past gradient of an inconsistent read ∇ijf(IC(Xj)).
5Their Lyapunov function is in fact a generalization of the one found in Nesterov (2012).
Theorem 2. For any algorithm G ∈ A that solves eq. (1.1), and parameter set (σ, L1, . . . , Ln, n), there is a dimension d, a corresponding function f on Rd, and a starting point x0, such that
E‖xk − x∗‖2/‖x0 − x∗‖2 ≥ 1 2 ( 1− 4/ (∑n j=1 √ Li/σ + 2n ))k Hence A has a complexity lower bound: K( ) ≥ 14 (1 + o(1)) (∑n j=1 √ Li/σ + 2n ) ln(1/2 )
Our proof in Section D follows very similar lines to Lan & Zhou (2015); Nesterov (2013).
3 ODE Analysis
In this section we present and analyze an ODE which is the continuous-time limit of A2BCD. This ODE is a strongly convex, and asynchronous version of the ODE found in Su et al. (2014). For simplicity, assume Li = L, ∀i. We rescale (I.e. we replace f(x) with 1σf .) f so that σ = 1, and hence κ = L/σ = L. Taking the discrete limit of synchronous A2BCD (i.e. accelerated RBCD), we can derive the following ODE6 (see Section equation C.1):
Ÿ + 2n−1κ−1/2Ẏ + 2n−2κ−1∇f(Y ) = 0 (3.1)
We define the parameter η := n κ^{1/2}, and the energy: E(t) = e^{n^{−1} κ^{−1/2} t} ( f(Y) + (1/4) ‖Y + η Ẏ‖^2 ). This
is very similar to the Lyapunov function discussed in equation 2.11, with 14 ∥∥Y (t) + ηẎ (t)∥∥2 fulfilling the role of ‖vk‖2, and Ak = 0 (since there is no delay yet). Much like the traditional analysis in the proof of Theorem 1, we can derive a linear convergence result with a similar rate. See Section C.2. Lemma 5. If Y satisfies equation 3.1, the energy satisfies E′(t) ≤ 0, E(t) ≤ E(0), and hence:
f(Y (t)) + 14 ∥∥∥Y (t) + nκ1/2Ẏ (t)∥∥∥2 ≤(f(Y (0)) + 14∥∥Y (0) + ηẎ (0)∥∥2 ) e−n −1κ−1/2t
We may also analyze an asynchronous version of equation 3.1 to motivate the proof of our main theorem. Here Ŷ (t) is a delayed version of Y (t) with the delay bounded by τ .
Ÿ + 2n−1κ−1/2Ẏ + 2n−2κ−1∇f ( Ŷ ) = 0, (3.2)
Unfortunately, this energy satisfies (see Section equation C.4, equation C.7):
e−η −1tE′(t) ≤ −18η ∥∥Ẏ ∥∥2 + 3κ2η−1τD(t), for D(t) , ∫ t t−τ ∥∥Ẏ (s)∥∥2ds. Hence this energy E(t) may not be decreasing in general. But, we may add a continuous-time asynchronicity error (see Sun et al. (2017)), much like in Definition 2, to create a decreasing energy. Let c0 ≥ 0 and r > 0 be arbitrary constants that will be set later. Define:
A(t) = ∫ t t−τ c(t− s) ∥∥Ẏ (s)∥∥2ds, for c(t) , c0(e−rt + e−rτ1− e−rτ (e−rt − 1) ) .
Lemma 6. When rτ ≤ 12 , the asynchronicity error A(t) satisfies:
e−rt d
dt
( ertA(t) ) ≤ c0 ∥∥Ẏ (t)∥∥2 − 12τ−1c0D(t). 6For compactness, we have omitted the (t) from time-varying functions Y (t), Ẏ (t), ∇Y (t), etc.
See Section C.3 for the proof. Adding this error to the Lyapunov function serves a similar purpose in the continuous-time case as in the proof of Theorem 1 (see Lemma 11). It allows us to negate 1 2τ −1c0 units of D(t) for the cost of creating c0 units of ∥∥Ẏ (t)∥∥2. This restores monotonicity. Theorem 3. Let c0 = 6κ2η−1τ2, and r = η−1. If τ ≤ 1√48nκ −1/2 then we have:
e−η −1t d
dt
( E(t) + eη −1tA(t) ) ≤ 0. (3.3)
Hence f(Y (t)) convergence linearly to f(x∗) with rate O ( exp ( −t/(nκ1/2) )) Notice how this convergence condition is similar to Corollary 3, but a little looser. The convergence condition in Theorem 1 can actually be improved to approximately match this (see Section E).
Proof. e−η
−1t d
dt
( E(t) + eη −1tA(t) ) ≤ ( c0 − 1 8η )∥∥Ẏ ∥∥2 + (3κ2η−1τ − 12τ−1c0 ) D(t)
= 6η−1κ2 ( τ2 − 148n 2κ−1 )∥∥Ẏ ∥∥2 ≤ 0
The preceding should hopefully elucidate the logic and general strategy of the proof of Theorem 1.
4 Related work
We now discuss related work that was not addressed in Section 1. Nesterov acceleration is a method for improving an algorithm’s iteration complexity’s dependence the condition number κ. Nesterov-accelerated methods have been proposed and discovered in many settings Nesterov (1983); Tseng (2008); Nesterov (2012); Lin et al. (2014); Lu & Xiao (2014); Shalev-Shwartz & Zhang (2016); Allen-Zhu (2017), including for coordinate descent algorithms (algorithms that use 1 gradient block ∇if or minimize with respect to 1 coordinate block per iteration), and incremental algorithms (algorithms for finite sum problems 1n ∑n i=1 fi(x) that use 1 function gradient ∇fi(x) per iteration). Such algorithms can often be augmented to solve composite minimization problems (minimization for objective of the form f(x) + g(x), especially for nonsomooth g), or include constraints. In Peng et al. (2016a), authors proposed and analyzed an asynchronous fixed-point algorithm called ARock, that takes proximal algorithms, forward-backward, ADMM, etc. as special cases. Work has also been done on asynchronous algorithms for finite sums in the operator setting Davis (2016); Johnstone & Eckstein (2018). In Hannah & Yin (2017b); Sun et al. (2017); Peng et al. (2016c); Cannelli et al. (2017) showed that many of the assumptions used in prior work (such as bounded delay τ <∞) were unrealistic and unnecessary in general. In Hannah & Yin (2017a) the authors showed that asynchronous iterations will complete far more iterations per second, and that a wide class of asynchronous algorithms, including asynchronous RBCD, have the same iteration complexity as their synchronous counterparts. Hence certain asynchronous algorithms can be expected to significantly outperform traditional ones. In Xiao et al. (2017) authors propose a novel asynchronous catalyst-accelerated Lin et al. (2015) primal-dual algorithmic framework to solve regularized ERM problems. They structure the parallel updates so that the data that an update depends on is up to date (though the rest of the data may not be). However catalyst acceleration incurs a log(κ) penalty over Nesterov acceleration in general. In Allen-Zhu (2017), the author argues that the inner iterations of catalyst acceleration are hard to tune, making it less practical than Nesterov acceleration.
5 Numerical experiments
To investigate the performance of A2BCD, we solve the ridge regression problem. Consider the following primal and corresponding dual objective (see for instance Lin et al. (2014)):
min_{w∈R^d} P(w) = (1/(2n)) ‖A^T w − l‖^2 + (λ/2) ‖w‖^2,   min_{α∈R^n} D(α) = (1/(2d^2 λ)) ‖Aα‖^2 + (1/(2d)) ‖α + l‖^2 (5.1)
where A ∈ Rd×n is a matrix of n samples and d features, and l is a label vector. We let A = [A1, . . . , Am] where Ai are the column blocks of A. We compare A2BCD (which is asynchronous accelerated), synchronous NU_ACDM (which is synchronous accelerated), and asynchronous RBCD (which is asynchronous non-accelerated). Nodes randomly select a coordinate block according to equation 2.1, calculate the corresponding block gradient, and use it to apply an update to the shared solution vectors. synchronous NU_ACDM is implemented in a batch fashion, with batch size p (1 block per computing node). Nodes in synchronous NU_ACDM implementation must wait until all nodes apply their computed gradients before they can start the next iteration, but the asynchronous algorithms simply compute with the most up-to-date information available. We use the datasets w1a (47272 samples, 300 features), wxa which combines the data from from w1a to w8a (293201 samples, 300 features), and aloi (108000 samples, 128 features) from LIBSVM Chang & Lin (2011). The algorithm is implemented in a multi-threaded fashion using C++11 and GNU Scientific Library with a shared memory architecture. We use 40 threads on two 2.5GHz 10-core Intel Xeon E5-2670v2 processors. See Section A.1 for a discussion of parameter tuning and estimation. The parameters for each algorithm are tuned to give the fastest performance, so that a fair comparison is possible. A critical ingredient in the efficient implementation of A2BCD and NU_ACDM for this problem is the efficient update scheme discussed in Lee & Sidford (2013b;a). In linear regression applications such as this, it is essential to be able to efficiently maintain or recover Ay. This is because calculating block gradients requires the vector ATi Ay, and without an efficient way to recover Ay, block gradient evaluations are essentially 50% as expensive as full-gradient calculations. Unfortunately, every accelerated iteration results in dense updates to yk because of the averaging step in equation 2.6. Hence Ay must be recalculated from scratch. However Lee & Sidford (2013a) introduces a linear transformation that allows for an equivalent iteration that results in sparse updates to new iteration variables p and q. The original purpose of this transformation was to ensure that the averaging steps (e.g. equation 2.6) do not dominate the computational cost for sparse problems. However we find a more important secondary use which applies to both sparse and dense problems. Since the updates to p and q are sparse coordinate-block updates, the vectors Ap, and Aq can be efficiently maintained, and therefore block gradients can be efficiently calculated. The specifics of this efficient implementation are discussed in Section A.2. In Table 5, we plot the sub-optimality vs. time for decreasing values of λ, which corresponds to increasingly large condition numbers κ. When κ is small, acceleration doesn’t result in a significantly better convergence rate, and hence A2BCD and async-RBCD both outperform sync-NU_ACDM since they complete faster iterations at similar complexity. Acceleration for low κ has unnecessary overhead, which means async-RBCD can be quite competitive. When κ becomes large, async-RBCD is no longer competitive, since it has a poor convergence rate. We observe that A2BCD and sync-NU_ACDM have essentially the same convergence rate, but A2BCD is up to 4 − 5× faster than sync-NU_ACDM because it completes much faster iterations. 
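For concreteness, a small sketch (ours) of the dual objective in Eq. (5.1) and the block gradient each computing node evaluates; it reuses a maintained product A @ alpha, in the spirit of the efficient implementation discussed in Section A.2.

```python
import numpy as np

def dual_objective(alpha, A, l, lam):
    """D(alpha) = ||A alpha||^2 / (2 d^2 lam) + ||alpha + l||^2 / (2 d), as in Eq. (5.1)."""
    d = A.shape[0]
    return np.sum((A @ alpha) ** 2) / (2 * d ** 2 * lam) + np.sum((alpha + l) ** 2) / (2 * d)

def block_gradient(alpha, A_alpha, A_i, idx, l, lam):
    """Gradient of D w.r.t. coordinate block idx, reusing a maintained A @ alpha.

    A_i: the columns of A for block idx; idx: the index slice of the block.
    """
    d = A_i.shape[0]
    return (A_i.T @ A_alpha) / (d ** 2 * lam) + (alpha[idx] + l[idx]) / d
```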
We observe this advantage despite the fact that we are in an ideal environment for synchronous computation: A small, homogeneous, high-bandwidth, low-latency cluster. In large-scale heterogeneous systems with greater synchronization overhead, bandwidth constraints, and latency, we expect A2BCD’s advantage to be much larger.
6 Acknowledgement
The authors would like to thank the reviewers for their helpful comments. The research presented in this paper was supported in part by AFOSR MURI FA9550-18-10502, NSF DMS-1720237, and ONR N0001417121.
A Efficient Implementation
An efficient implementation will have coordinate blocks of size greater than 1. This to ensure the efficiency of linear algebra subroutines. Especially because of this, the bulk of the computation for each iteration is computing ∇ikf(ŷk), and not the averaging steps. Hence the computing nodes only need a local copy of yk in order to do the bulk of an iteration’s computation. Given this gradient ∇ikf(ŷk), updating yk and vk is extremely fast (xk can simply be eliminated). Hence it is natural to simply store yk and vk centrally, and update them when the delayed gradients ∇ikf(ŷk). Given the above, a write mutex over (y, v) has minuscule overhead (which we confirm with experiments), and makes the labeling of iterates unambiguous. This also ensures that vk and yk are always up to date when (y, v) are being updated. Whereas the gradient ∇ikf(ŷk) may at the same time be out of date, since it has been calculated with an outdated version of yk. However a write mutex is not necessary in practice, and does not appear to affect convergence rates or computation time. Also it is possible to prove convergence under more general asynchronicity.
A.1 Parameter selection and tuning
When defining the coefficients, σ may be underestimated, and L,L1, . . . , Ln may be overestimated if exact values are unavailable. Notice that xk can be eliminated from the above iteration, and the block gradient ∇ikf(ŷk) only needs to be calculated once per iteration. A larger (or overestimated) maximum delay τ will cause a larger asynchronicity parameter ψ, which leads to more conservative step sizes to compensate. To estimate ψ, one can first performed a dry run with all coefficient set to 0 to estimate τ . All function parameters can be calculated exactly for this problem in terms of the data matrix and λ. We can then use these parameters and this tau to calculate ψ. ψ and τ merely change the parameters, and do not change execution patterns of the processors. Hence their parameter specification doesn’t affect the observed delay. Through simple tuning though, we found that ψ = 0.25 resulted in good performance. In tuning for general problems, there are theoretical reasons why it is difficult to attain acceleration without some prior knowledge of σ, the strong convexity modulus Arjevani (2017). Ideally σ is pre-specified for instance in a regularization term. If the Lipschitz constants Li cannot be calculated directly (which is rarely the case for the classic dual problem of empirical risk minimization objectives), the line-search method discussed in Roux et al. (2012) Section 4 can be used.
A.2 Sparse update formulation
As mentioned in Section 5, the authors in Lee & Sidford (2013a) proposed a linear transformation of an accelerated RBCD scheme that results in sparse coordinate updates. Our proposed algorithm can be given a similar efficient implementation. We may eliminate x_k from A2BCD and derive the equivalent iteration below:

\begin{pmatrix} y_{k+1} \\ v_{k+1} \end{pmatrix}
= \begin{pmatrix} 1-\alpha\beta & \alpha\beta \\ 1-\beta & \beta \end{pmatrix}
\begin{pmatrix} y_k \\ v_k \end{pmatrix}
- \begin{pmatrix} \big(\alpha\sigma^{-1/2}L_{i_k}^{-1/2} + h(1-\alpha)L_{i_k}^{-1}\big)\nabla_{i_k}f(\hat{y}_k) \\ \sigma^{-1/2}L_{i_k}^{-1/2}\nabla_{i_k}f(\hat{y}_k) \end{pmatrix}
\triangleq C\begin{pmatrix} y_k \\ v_k \end{pmatrix} - Q_k

where C and Q_k are defined in the obvious way. Hence we define auxiliary variables p_k, q_k via:

\begin{pmatrix} y_k \\ v_k \end{pmatrix} = C^k \begin{pmatrix} p_k \\ q_k \end{pmatrix} \qquad (A.1)

These clearly follow the iteration:

\begin{pmatrix} p_{k+1} \\ q_{k+1} \end{pmatrix} = \begin{pmatrix} p_k \\ q_k \end{pmatrix} - C^{-(k+1)}Q_k \qquad (A.2)
Since the vector Qk is sparse, we can evolve variables pk, and qk in a sparse manner, and recover the original iteration variables at the end of the algorithm via A.1. The gradient of the dual function is given by:
\nabla D(y) = \frac{1}{\lambda d}\Big(\frac{1}{d}A^T A y + \lambda(y + l)\Big)

As mentioned before, it is necessary to maintain or recover Ay_k to calculate block gradients. Since Ay_k can be recovered via the linear relation in equation A.1, and the gradient is an affine function, we maintain the auxiliary vectors Ap_k and Aq_k instead. Hence we propose the efficient implementation in Algorithm 1, which we used to generate the results in Table 5. We also note that it can improve performance to periodically recover v_k and y_k, reset the values of p_k, q_k, and C to v_k, y_k, and I respectively, and restart the scheme (which can be done cheaply in time O(d)). We let B ∈ R^{2×2} represent C^k, and b represent B^{-1}. ⊗ is the Kronecker product. Each computing node has local outdated versions of p, q, Ap, Aq, which we denote p̂, q̂, Âp, Âq respectively. We also find it convenient to define:

\begin{bmatrix} D_{k1} \\ D_{k2} \end{bmatrix} = \begin{bmatrix} \alpha\sigma^{-1/2}L_{i_k}^{-1/2} + h(1-\alpha)L_{i_k}^{-1} \\ \sigma^{-1/2}L_{i_k}^{-1/2} \end{bmatrix} \qquad (A.3)
Algorithm 1 Shared-memory implementation of A2BCD
1: Inputs: Function parameters A, λ, L, {L_i}_{i=1}^n, n, d. Delay τ (obtained in a dry run). Starting vectors y, v.
2: Shared data: Solution vectors p, q; auxiliary vectors Ap, Aq; sparsifying matrix B.
3: Node local data: Solution vectors p̂, q̂; auxiliary vectors Âp, Âq; sparsifying matrix B̂.
4: Calculate parameters ψ, α, β, h via Definition 1. Set k = 0.
5: Initializations: p ← y, q ← v, Ap ← Ay, Aq ← Av, B ← I.
6: while not converged, each computing node asynchronously do
7:   Randomly select block i via equation 2.1.
8:   Read shared data into local memory: p̂ ← p, q̂ ← q, Âp ← Ap, Âq ← Aq, B̂ ← B.
9:   Compute block gradient: ∇_i f(ŷ) = \frac{1}{nλ}\big(\frac{1}{n}A_i^T(B̂_{1,1}Âp + B̂_{1,2}Âq) + λ(B̂_{1,1}p̂ + B̂_{1,2}q̂)\big)
10:  Compute the quantity g_i = A_i^T ∇_i f(ŷ)
     Shared memory updates:
11:  Update B ← \begin{bmatrix} 1-αβ & αβ \\ 1-β & β \end{bmatrix} B; calculate the inverse b ← B^{-1}.
12:  \begin{bmatrix} p \\ q \end{bmatrix} -= b\begin{bmatrix} D_{k1} \\ D_{k2} \end{bmatrix} ⊗ ∇_i f(ŷ),  \begin{bmatrix} Ap \\ Aq \end{bmatrix} -= b\begin{bmatrix} D_{k1} \\ D_{k2} \end{bmatrix} ⊗ g_i
13:  Increase iteration count: k ← k + 1
14: end while
15: Recover original iteration variables: \begin{bmatrix} y \\ v \end{bmatrix} ← B\begin{bmatrix} p \\ q \end{bmatrix}. Output y.
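The following is a minimal single-process sketch of the bookkeeping in Algorithm 1: the dense averaging is folded into the 2×2 matrix B, while p, q, Ap, Aq receive only sparse block updates. It omits threading and the mutex discussion above, uses assumed shapes and the delay-free (ψ = 0) coefficient values, and forms g_i as A_i times the block gradient so that Ap, Aq stay d-dimensional; it illustrates the update algebra rather than reproducing the C++ implementation.

```python
import numpy as np

# Illustrative serial sketch of the sparse (p, q) bookkeeping from Algorithm 1.
rng = np.random.default_rng(1)
d, n, blk, lam = 40, 400, 20, 1e-2
A = rng.standard_normal((d, n))
l = rng.standard_normal(n)
blocks = [np.arange(s, s + blk) for s in range(0, n, blk)]
L_i = np.array([np.linalg.norm(A[:, b], 2)**2 / (lam * d * d) + 1.0 / d for b in blocks])
sigma = 1.0 / d                                   # strong convexity of the dual (from the 1/(2d) term)
S = np.sqrt(L_i).sum()
alpha = 1.0 / (1.0 + S / np.sqrt(sigma))          # delay-free versions of equations 2.3-2.5
beta, h = 1.0 - np.sqrt(sigma) / S, 1.0
probs = np.sqrt(L_i) / S                          # nonuniform sampling, equation 2.1

p, q = np.zeros(n), np.zeros(n)
Ap, Aq = A @ p, A @ q
B = np.eye(2)
C = np.array([[1 - alpha * beta, alpha * beta], [1 - beta, beta]])

for k in range(2000):
    i = rng.choice(len(blocks), p=probs)
    b_idx = blocks[i]
    # Block gradient of the dual at y = B[0,0] p + B[0,1] q  (line 9).
    Ay = B[0, 0] * Ap + B[0, 1] * Aq
    y_blk = B[0, 0] * p[b_idx] + B[0, 1] * q[b_idx]
    grad = (A[:, b_idx].T @ Ay) / (lam * d * d) + (y_blk + l[b_idx]) / d
    g_i = A[:, b_idx] @ grad                      # line 10 (A_i times the block gradient)
    # Shared-memory updates (lines 11-12).
    B = C @ B
    binv = np.linalg.inv(B)
    D = np.array([alpha / np.sqrt(sigma * L_i[i]) + h * (1 - alpha) / L_i[i],
                  1.0 / np.sqrt(sigma * L_i[i])])
    coeff = binv @ D                              # 2-vector of scalars
    p[b_idx] -= coeff[0] * grad;  q[b_idx] -= coeff[1] * grad
    Ap -= coeff[0] * g_i;         Aq -= coeff[1] * g_i
    # In a long run one would periodically reset B to I and fold it into p, q, as noted above.

y = B[0, 0] * p + B[0, 1] * q                     # line 15
print((y @ (A.T @ (A @ y))) / (2 * lam * d * d) + np.sum((y + l)**2) / (2 * d))
```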
B Proof of the main result
We first recall a couple of inequalities for convex functions. Lemma 7. Let f be σ-strongly convex with L-Lipschitz gradient. Then we have:
f(y) \le f(x) + \langle y - x, \nabla f(x)\rangle + \tfrac{L}{2}\|y - x\|^2, \quad \forall x, y \qquad (B.1)
f(y) \ge f(x) + \langle y - x, \nabla f(x)\rangle + \tfrac{\sigma}{2}\|y - x\|^2, \quad \forall x, y \qquad (B.2)
We also find it convenient to define the norm:
\|s\|_* = \sqrt{\sum_{i=1}^n L_i^{-1/2}\|s_i\|^2} \qquad (B.3)
B.1 Starting point
First notice that using the definition equation 2.8 of vk+1 we have:
\|v_{k+1}\|^2 = \|\beta v_k + (1-\beta)y_k\|^2 - 2\sigma^{-1/2}L_{i_k}^{-1/2}\langle \beta v_k + (1-\beta)y_k, \nabla_{i_k}f(\hat{y}_k)\rangle + \sigma^{-1}L_{i_k}^{-1}\|\nabla_{i_k}f(\hat{y}_k)\|^2

Taking the conditional expectation over the block index i_k gives:

E_k\|v_{k+1}\|^2 = \underbrace{\|\beta v_k + (1-\beta)y_k\|^2}_{A} - 2\sigma^{-1/2}S^{-1}\underbrace{\langle \beta v_k + (1-\beta)y_k, \nabla f(\hat{y}_k)\rangle}_{B} + S^{-1}\sigma^{-1}\underbrace{\sum_{i=1}^n L_i^{-1/2}\|\nabla_i f(\hat{y}_k)\|^2}_{C} \qquad (B.4)
We have the following general identity:
‖βx+ (1− β)y‖2 = β‖x‖2 + (1− β)‖y‖2 − β(1− β)‖x− y‖2, ∀x, y (B.5) It can also easily be verified from equation 2.6 that we have:
vk = yk + α−1(1− α)(yk − xk) (B.6) Using equation B.5 on term A, equation B.6 on term B, and recalling the definition equation B.3 on term C, we have from equation B.4:
E_k\|v_{k+1}\|^2 = \beta\|v_k\|^2 + (1-\beta)\|y_k\|^2 - \beta(1-\beta)\|v_k - y_k\|^2 + S^{-1}\sigma^{-1}\|\nabla f(\hat{y}_k)\|_*^2 - 2\sigma^{-1/2}S^{-1}\beta\alpha^{-1}(1-\alpha)\langle y_k - x_k, \nabla f(\hat{y}_k)\rangle - 2\sigma^{-1/2}S^{-1}\langle y_k, \nabla f(\hat{y}_k)\rangle \qquad (B.7)
This inequality is our starting point. We analyze the terms on the second line in the next section.
B.2 The Cross Term
To analyze these terms, we need a small lemma. This lemma is fundamental in allowing us to deal with asynchronicity. Lemma 8. Let χ,A > 0. Let the delay be bounded by τ . Then:
A\|\hat{y}_k - y_k\| \le \tfrac{1}{2}\chi^{-1}A^2 + \tfrac{1}{2}\chi\tau\sum_{j=1}^{\tau}\|y_{k+1-j} - y_{k-j}\|^2
Proof. See Hannah & Yin (2017a).
Lemma 9. We have:
-\langle\nabla f(\hat{y}_k), y_k\rangle \le -f(y_k) - \tfrac{\sigma}{2}(1-\psi)\|y_k\|^2 + \tfrac{1}{2}L\kappa\psi^{-1}\tau\sum_{j=1}^{\tau}\|y_{k+1-j} - y_{k-j}\|^2 \qquad (B.8)

\langle\nabla f(\hat{y}_k), x_k - y_k\rangle \le f(x_k) - f(y_k) + \tfrac{1}{2}L\alpha(1-\alpha)^{-1}\Big(\kappa^{-1}\psi\beta\|v_k - y_k\|^2 + \kappa\psi^{-1}\beta^{-1}\tau\sum_{j=1}^{\tau}\|y_{k+1-j} - y_{k-j}\|^2\Big) \qquad (B.9)
The terms in bold in equation B.8 and equation B.9 are a result of the asynchronicity, and are identically 0 in its absence.
Proof. Our strategy is to separately analyze terms that appear in the traditional analysis of Nesterov (2012), and the terms that result from asynchronicity. We first prove equation B.8:
-\langle\nabla f(\hat{y}_k), y_k\rangle = -\langle\nabla f(y_k), y_k\rangle - \langle\nabla f(\hat{y}_k) - \nabla f(y_k), y_k\rangle \le -f(y_k) - \tfrac{\sigma}{2}\|y_k\|^2 + L\|\hat{y}_k - y_k\|\|y_k\| \qquad (B.10)
equation B.10 follows from strong convexity (equation B.2 with x = yk and y = x∗), and the fact that ∇f is L-Lipschitz. The term due to asynchronicity becomes:
L\|\hat{y}_k - y_k\|\|y_k\| \le \tfrac{1}{2}L\kappa^{-1}\psi\|y_k\|^2 + \tfrac{1}{2}L\kappa\psi^{-1}\tau\sum_{j=1}^{\tau}\|y_{k+1-j} - y_{k-j}\|^2
using Lemma 8 with χ = κψ−1, A = ‖yk‖. Combining this with equation B.10 completes the proof of equation B.8. We now prove equation B.9:
\langle\nabla f(\hat{y}_k), x_k - y_k\rangle = \langle\nabla f(y_k), x_k - y_k\rangle + \langle\nabla f(\hat{y}_k) - \nabla f(y_k), x_k - y_k\rangle
\le f(x_k) - f(y_k) + L\|\hat{y}_k - y_k\|\|x_k - y_k\|
\le f(x_k) - f(y_k) + \tfrac{1}{2}L\Big(\kappa^{-1}\psi\beta\alpha^{-1}(1-\alpha)\|x_k - y_k\|^2 + \kappa\psi^{-1}\beta^{-1}\alpha(1-\alpha)^{-1}\tau\sum_{j=1}^{\tau}\|y_{k+1-j} - y_{k-j}\|^2\Big)

Here the last line follows from Lemma 8 with \chi = \kappa\psi^{-1}\beta^{-1}\alpha(1-\alpha)^{-1} and A = \|x_k - y_k\|. We can complete the proof using the following identity, which is easily obtained from equation 2.6:

y_k - x_k = \alpha(1-\alpha)^{-1}(v_k - y_k)
B.3 Function-value term
Much like Nesterov (2012), we need a f(xk) term in the Lyapunov function (see the middle of page 357). However we additionally need to consider asynchronicity when analyzing the growth of this term. Again terms due to asynchronicity are emboldened. Lemma 10. We have:
E_k f(x_{k+1}) \le f(y_k) - \tfrac{1}{2}h\Big(2 - h\big(1 + \tfrac{1}{2}\sigma^{1/2}L^{-1/2}\psi\big)\Big)S^{-1}\|\nabla f(\hat{y}_k)\|_*^2 + S^{-1}L\sigma^{1/2}\kappa\psi^{-1}\tau\sum_{j=1}^{\tau}\|y_{k+1-j} - y_{k-j}\|^2
Proof. From the definition equation 2.7 of xk+1, we can see that xk+1 − yk is supported on block ik. Since each gradient block ∇if is Li Lipschitz with respect to changes to block i, we can use
equation B.1 to obtain:
f(xk+1) ≤ f(yk) + 〈∇f(yk), xk+1 − yk〉+ 1 2Lik‖xk+1 − yk‖ 2
(from equation 2.7) = f(yk)− hL−1ik 〈∇ikf(yk),∇ikf(ŷk)〉+ 1 2h 2L−1ik ‖∇ikf(ŷk)‖ 2
= f(yk)− hL−1ik 〈∇ikf(yk)−∇ikf(ŷk),∇ikf(ŷk)〉 − 1 2h(2− h)L −1 ik ‖∇ikf(ŷk)‖ 2
Ekf(xk+1) ≤ f(yk)− hS−1 n∑ i=1 L −1/2 i 〈∇if(yk)−∇if(ŷk),∇if(ŷk)〉 − 1 2h(2− h)S −1‖∇f(ŷk)‖2∗
(B.11) Here the last line followed from the definition equation B.3 of the norm ‖·‖∗1/2. We now analyze the middle term:
− n∑ i=1 L −1/2 i 〈∇if(yk)−∇if(ŷk),∇if(ŷk)〉
= − 〈
n∑ i=1 L −1/4 i (∇if(yk)−∇if(ŷk)), n∑ i=1 L −1/4 i ∇if(ŷk)
〉
(Cauchy Schwarz) ≤ ∥∥∥∥∥ n∑ i=1 L −1/4 i (∇if(yk)−∇if(ŷk)) ∥∥∥∥∥ ∥∥∥∥∥ n∑ i=1 L −1/4 i ∇if(ŷk) ∥∥∥∥∥ = (
n∑ i=1 L −1/2 i ‖∇if(yk)−∇if(ŷk)‖ 2 )1/2( n∑ i=1 L −1/2 i ‖∇if(ŷk)‖ 2 )1/2 (L ≤ Li,∀i and equation B.3) ≤ L−1/4‖∇f(yk)−∇f(ŷk)‖‖∇f(ŷk)‖∗
(∇f is L-Lipschitz) ≤ L−1/4L‖yk − ŷk‖‖∇f(ŷk)‖∗ We then apply Lemma 8 to this with χ = 2h−1σ1/2L1/4κψ−1, A = ‖∇f(ŷk)‖∗ to yield:
− n∑ i=1 L −1/2 i 〈∇if(yk)−∇if(ŷk),∇if(ŷk)〉 ≤ h −1Lσ1/2κψ−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2 (B.12)
+ 14hσ 1/2L−1/2ψ‖∇f(ŷk)‖2∗
Finally to complete the proof, we combine equation B.11, with equation B.12.
B.4 Asynchronicity error
The previous inequalities produced difference terms of the form ‖yk+1−j − yk−j‖2. The following lemma shows how these errors can be incorporated into a Lyapunov function. Lemma 11. Let 0 < r < 1 and consider the asynchronicity error and corresponding coefficients:
A_k = \sum_{j=1}^{\infty} c_j\|y_{k+1-j} - y_{k-j}\|^2, \qquad c_i = \sum_{j=i}^{\infty} r^{i-j-1}s_j

This sum satisfies:

E_k[A_{k+1} - rA_k] = c_1 E_k\|y_{k+1} - y_k\|^2 - \sum_{j=1}^{\infty} s_j\|y_{k+1-j} - y_{k-j}\|^2
Remark 2. Interpretation. This result means that an asynchronicity error term A_k can negate a series of difference terms -\sum_{j=1}^{\infty} s_j\|y_{k+1-j} - y_{k-j}\|^2 at the cost of producing an additional error c_1 E_k\|y_{k+1} - y_k\|^2, while maintaining a convergence rate of r. This essentially converts difference terms, which are hard to deal with, into a \|y_{k+1} - y_k\|^2 term, which can be negated by other terms in the Lyapunov function. The proof is straightforward.
Proof.
E_k[A_{k+1} - rA_k] = E_k\sum_{j=0}^{\infty} c_{j+1}\|y_{k+1-j} - y_{k-j}\|^2 - r\,E_k\sum_{j=1}^{\infty} c_j\|y_{k+1-j} - y_{k-j}\|^2
= c_1 E_k\|y_{k+1} - y_k\|^2 + E_k\sum_{j=1}^{\infty}(c_{j+1} - rc_j)\|y_{k+1-j} - y_{k-j}\|^2

Noting the following completes the proof:

c_{i+1} - rc_i = \sum_{j=i+1}^{\infty} r^{i+1-j-1}s_j - r\sum_{j=i}^{\infty} r^{i-j-1}s_j = -s_i
Given that Ak allows us to negate difference terms, we now analyze the cost c1Ek‖yk+1 − yk‖2 of this negation. Lemma 12. We have:
E_k\|y_{k+1} - y_k\|^2 \le 2\alpha^2\beta^2\|v_k - y_k\|^2 + 2S^{-1}L^{-1}\|\nabla f(\hat{y}_k)\|_*^2
Proof.
yk+1 − yk = (αvk+1 + (1− α)xk+1)− yk = α ( βvk + (1− β)yk − σ−1/2L−1/2ik ∇ikf(ŷk) ) + (1− α) ( yk − hL−1ik ∇ikf(ŷk) ) − yk (B.13)
= αβvk + α(1− β)yk − ασ−1/2L−1/2ik ∇ikf(ŷk)− αyk − (1− α)hL −1 ik ∇ikf(ŷk) = αβ(vk − yk)− ( ασ−1/2L −1/2 ik + h(1− α)L−1ik ) ∇ikf(ŷk)
‖yk+1 − yk‖2 ≤ 2α2β2‖vk − yk‖2 + 2 ( ασ−1/2L −1/2 ik + h(1− α)L−1ik )2 ‖∇ikf(ŷk)‖ 2 (B.14)
Here equation B.13 following from equation 2.8, the definition of vk+1. equation B.14 follows from the inequality ‖x+ y‖2 ≤ 2‖x‖2 + 2‖y‖2. The rest is simple algebraic manipulation.
‖yk+1 − yk‖2 ≤ 2α2β2‖vk − yk‖2 + 2L−1ik ( ασ−1/2 + h(1− α)L−1/2ik )2 ‖∇ikf(ŷk)‖ 2
(L ≤ Li,∀i) ≤ 2α2β2‖vk − yk‖2 + 2L−1ik ( ασ−1/2 + h(1− α)L−1/2 )2 ‖∇ikf(ŷk)‖ 2
= 2α2β2‖vk − yk‖2 + 2L−1ik L −1 ( L1/2σ−1/2α+ h(1− α) )2 ‖∇ikf(ŷk)‖ 2
E‖yk+1 − yk‖2 ≤ 2α2β2‖vk − yk‖2 + 2S−1L−1 ( L1/2σ−1/2α+ h(1− α) )2 ‖∇f(ŷk)‖2∗
Finally, to complete the proof, we prove L1/2σ−1/2α+ h(1− α) ≤ 1. L1/2σ−1/2α+ h(1− α) = h+ α ( L1/2σ−1/2 − h ) (definitions of h and α: equation 2.3, and equation 2.5) = 1− 12σ 1/2L−1/2ψ + σ1/2S−1 ( L1/2σ−1/2
) ≤ 1− σ1/2L−1/2 ( 1 2ψ − σ
−1/2S−1L1 ) (B.15)
Rearranging the definition of ψ, we have:
S−1 = 192ψ 2L1L−3/2κ−1/2τ−2
(τ ≥1 and ψ ≤ 12 ) ≤ 1 182L 1L−3/2κ−1/2
Using this on equation B.15, we have: L1/2ασ−1/2 + h(1− α) ≤ 1− σ1/2L−1/2 (
1 2ψ − 1 182L
1L−3/2κ−1/2σ−1/2L1 )
= 1− σ1/2L−1/2 (
1 2ψ − 1 182 (L/L)
2 )
(ψ ≤ 12 ) = 1− σ 1/2L−1/2 ( 1 24 − 1 182 ) ≤ 1.
This completes the proof.
B.5 Master inequality
We are finally in a position to bring all the previous results together into a master inequality for the Lyapunov function ρ_k (defined in equation 2.11). After this lemma is proven, we will prove that the right hand side is negative, which will imply that ρ_k linearly converges to 0 with rate β.
Lemma 13. Master inequality. We have:
E_k[\rho_{k+1} - \beta\rho_k] \le
\ \|y_k\|^2 \times \big(1 - \beta - \sigma^{-1/2}S^{-1}\sigma(1-\psi)\big)
+ \|v_k - y_k\|^2 \times \beta\big(2\alpha^2\beta c_1 + S^{-1}\beta L^{1/2}\kappa^{-1/2}\psi - (1-\beta)\big)
+ f(y_k) \times \big(c - 2\sigma^{-1/2}S^{-1}(\beta\alpha^{-1}(1-\alpha) + 1)\big)
+ f(x_k) \times \beta\big(2\sigma^{-1/2}S^{-1}\alpha^{-1}(1-\alpha) - c\big)
+ \sum_{j=1}^{\tau}\|y_{k+1-j} - y_{k-j}\|^2 \times \Big(S^{-1}L\kappa\psi^{-1}\tau\sigma^{1/2}(2\sigma^{-1} + c) - s\Big)
+ \|\nabla f(\hat{y}_k)\|_*^2 \times S^{-1}\Big(\sigma^{-1} + 2L^{-1}c_1 - \tfrac{1}{2}ch\big(2 - h(1 + \tfrac{1}{2}\sigma^{1/2}L^{-1/2}\psi)\big)\Big) \qquad (B.16)
Proof.
Ek‖vk+1‖2 − β‖vk‖2
(B.7) = (1− β)‖yk‖2 − β(1− β)‖vk − yk‖2 + S−1σ−1‖∇f(ŷk)‖2∗ − 2σ−1/2S−1〈yk,∇f(ŷk)〉 − 2σ−1/2S−1βα−1(1− α)〈yk − xk,∇f(ŷk)〉 ≤ (1− β)‖yk‖2 − β(1− β)‖vk − yk‖2 + S−1σ−1‖∇f(ŷk)‖2∗ (B.17)
(B.8) + 2σ−1/2S−1 −f(yk)− 12σ(1− ψ)‖yk‖2 + 12Lκψ−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2 (B.9)− 2σ−1/2S−1βα−1(1− α)(f(xk)− f(yk))
+ σ−1/2S−1βL κ−1ψβ‖vk − yk‖2 + κψ−1β−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2 We now collect and organize the similar terms of this inequality.
≤+ ‖yk‖2 × ( 1− β − σ−1/2S−1σ(1− ψ) )
+ ‖vk − yk‖2 ×β ( σ−1/2S−1βLκ−1ψ − (1− β) ) − f(yk) ×2σ−1/2S−1 ( βα−1(1− α) + 1
) + f(xk) ×2σ−1/2S−1βα−1(1− α)
+ τ∑ j=1 ‖yk+1−j − yk−j‖2 ×2σ−1/2S−1Lκψ−1τ
+ ‖∇f(ŷk)‖2∗ ×σ −1S−1
Now finally, we add the function-value and asynchronicity terms to our analysis. We use Lemma 11 with r = 1 - \sigma^{1/2}S^{-1}, and
si = { s = 6S−1L1/2κ3/2ψ−1τ, 1 ≤ i ≤ τ 0, i > τ (B.18)
Notice that this choice of si will recover the coefficient formula given in equation 2.9. Hence we have:
Ek[cf(xk+1) +Ak+1 − β(cf(xk) +Ak)]
(Lemma 10) ≤ cf(yk)− 1 2ch
( 2− h ( 1 + 12σ 1/2L−1/2ψ )) S−1‖∇f(ŷk)‖2∗ − βcf(xk)
(B.19)
+ S−1Lσ1/2κψ−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2
(Lemmas 11 and 12) + c1 ( 2α2β2‖vk − yk‖2 + 2S−1L−1‖∇f(ŷk)‖2 )
(B.20)
− ∞∑ j=1 sj‖yk+1−j − yk−j‖2 +Ak(r − β)
Notice Ak(r − β) ≤ 0. Finally, combining equation B.17 and equation B.19 completes the proof.
In the next section, we will prove that every coefficient on the right hand side of equation B.16 is 0 or less, which will complete the proof of Theorem 1.
B.6 Proof of main theorem
Lemma 14. The coefficients of ‖yk‖2, f(yk), and ∑τ j=1‖yk+1−j − yk−j‖
2 in Lemma 13 are non-positive.
Proof. The coefficient 1− (1− ψ)σ1/2S−1−β of ‖yk‖2 is identically 0 via the definition equation 2.4 of β. The coefficient c − 2σ−1/2S−1 ( βα−1(1− α) + 1 ) of f(yk) is identically 0 via the definition equation 2.12 of c. First notice from the definition equation 2.12 of c:
c = 2σ−1/2S−1 ( βα−1(1− α) + 1 ) (definitions of α, β) = 2σ−1/2S−1 (( 1− σ1/2S−1(1− ψ) ) (1 + ψ)σ−1/2S + 1
) = 2σ−1/2S−1 ( (1 + ψ)σ−1/2S + ψ2
) = 2σ−1 ( (1 + ψ) + ψ2σ1/2S−1 ) (B.21)
c ≤ 4σ−1 (B.22)
Here the last line followed since ψ ≤ 12 and σ 1/2S−1 ≤ 1. We now analyze the coefficient of∑τ
j=1‖yk+1−j − yk−j‖ 2.
S−1Lκψ−1τσ1/2 ( 2σ−1 + c ) − s
(B.22) ≤ 6L1/2κ3/2ψ−1τ − s (definition equation B.18 of s) ≤ 0
Lemma 15. The coefficient β ( 2σ−1/2S−1α−1(1− α)− c ) of f(xk) in Lemma 13 is non-positive.
Proof.
2σ−1/2S−1α−1(1− α)− c (B.21) = 2σ−1/2S−1(1 + ψ)σ−1/2S − 2σ−1 ( (1 + ψ) + ψ2σ1/2S−1 )
= 2σ−1 ( (1 + ψ)− ( (1 + ψ) + ψ2σ1/2S−1 ))
= −2ψ2σ−1/2S−1 ≤ 0
Lemma 16. The coefficient S−1 ( σ−1 + 2L−1c1 − 12ch ( 2− h ( 1 + 12σ 1/2L−1/2ψ )))
of ‖∇f(ŷk)‖2∗ in Lemma 13 is non-positive.
Proof. We first need to bound c1.
(equation B.18 and equation 2.9) c1 = s τ∑ j=1 ( 1− σ1/2S−1 )−j equation B.18 ≤ 6S−1L1/2κ3/2ψ−1τ
τ∑ j=1 ( 1− σ1/2S−1 )−j ≤ 6S−1L1/2κ3/2ψ−1τ2 ( 1− σ1/2S−1
)−τ It can be easily verified that if x ≤ 12 and y ≥ 0, then (1− x)
−y ≤ exp(2xy). Using this fact with x = σ1/2S−1 and y = τ , we have:
≤ 6S−1L1/2κ3/2ψ−1τ2 exp ( τσ1/2S−1 ) (since ψ ≤ 3/7 and hence τσ1/2S−1 ≤ 17 ) ≤ S −1L1/2κ3/2ψ−1τ2 × 6 exp ( 1 7
) c1 ≤ 7S−1L1/2κ3/2ψ−1τ2 (B.23)
We now analyze the coefficient of ‖∇f(ŷk)‖2∗
σ−1 + 2L−1c1 − 1 2ch
( 2− h ( 1 + 12σ 1/2L−1/2ψ ))
(B.23 and 2.5) ≤ σ−1 + 14S−1L−1L1/2κ3/2ψ−1τ2 − 12ch ( 1 + 14σ 1L−1ψ2 ) ≤ σ−1 + 14S−1L−1L1/2κ3/2ψ−1τ2 − 12ch
(definition 2.2 of ψ) = σ−1 + 1481σ −1ψ − 12ch
(B.21, definition 2.5 of h) = σ−1 ( 1 + 1481ψ − ( (1 + ψ) + ψ2σ1/2S−1 )( 1− 12σ 1/2L−1/2ψ )) (σ1/2L−1/2 ≤ 0 and σ1/2S−1 ≤ 1) ≤ σ−1 ( 1 + 1481ψ − (1 + ψ) ( 1− 12ψ
)) = σ−1ψ ( 14 81 + 1 2ψ − 1 2
) (ψ ≤ 12 ) ≤ 0
Lemma 17. The coefficient β ( 2α2βc1 + S−1βL1/2κ−1/2ψ − (1− β) ) of ‖vk − yk‖2 in 13 is nonpositive.
Proof. 2α2βc1 + σ1/2S−1βψ − (1− ψ)σ1/2S−1
(B.23) ≤ 14α2βS−1L1/2κ3/2ψ−1τ2 + σ1/2S−1βψ − (1− ψ)σ1/2S−1
≤ 14σS−3L1/2κ3/2ψ−1τ2 + σ1/2S−1ψ − (1− ψ)σ1/2S−1 = σ1/2S−1 ( 14S−2Lκτ2ψ−1 + 2ψ − 1 ) Here the last inequality follows since β ≤ 1 and α ≤ σ1/2S−1. We now rearrange the definition of ψ to yield the identity:
S−2κ = 194L 2L−3τ−4ψ4
Using this, we have: 14S−2Lκτ2ψ−1 + 2ψ − 1
= 1494 L 2L−2ψ3τ−2 + 2ψ − 1
≤ 1494 ( 3 7 )3 1−2 + 67 − 1 ≤ 0
Here the last line followed since L ≤ L, ψ ≤ 37 , and τ ≥ 1. Hence the proof is complete.
Proof of Theorem 1. Using the master inequality 13 in combination with the previous Lemmas 14, 15, 16, and 17, we have:
Ek[ρk+1] ≤ βρk = ( 1− (1− ψ)σ1/2S−1 ) ρk
When we have:

\big(1 - (1-\psi)\sigma^{1/2}S^{-1}\big)^k \le \epsilon

then the Lyapunov function \rho_k has decreased below \epsilon\rho_0 in expectation. Hence the complexity K(\epsilon) satisfies:

K(\epsilon)\ln\big(1 - (1-\psi)\sigma^{1/2}S^{-1}\big) = \ln(\epsilon)

K(\epsilon) = \frac{-1}{\ln\big(1 - (1-\psi)\sigma^{1/2}S^{-1}\big)}\ln(1/\epsilon)

Now it can be shown that for 0 < x \le \tfrac{1}{2}, we have:

\frac{1}{x} - 1 \le \frac{-1}{\ln(1-x)} \le \frac{1}{x} - \frac{1}{2}, \qquad \frac{-1}{\ln(1-x)} = \frac{1}{x} + O(1)

Since n \ge 2, we have \sigma^{1/2}S^{-1} \le \tfrac{1}{2}. Hence:

K(\epsilon) = \frac{1}{1-\psi}\big(\sigma^{-1/2}S + O(1)\big)\ln(1/\epsilon)

An expression for K_{NU\_ACDM}(\epsilon), the complexity of NU_ACDM, follows by similar reasoning:

K_{NU\_ACDM}(\epsilon) = \big(\sigma^{-1/2}S + O(1)\big)\ln(1/\epsilon) \qquad (B.24)

Finally we have:

K(\epsilon) = \frac{1}{1-\psi}\left(\frac{\sigma^{-1/2}S + O(1)}{\sigma^{-1/2}S + O(1)}\right)K_{NU\_ACDM}(\epsilon) = \frac{1}{1-\psi}(1 + o(1))K_{NU\_ACDM}(\epsilon)

which completes the proof.
C Ordinary Differential Equation Analysis
C.1 Derivation of ODE for synchronous A2BCD
If we take expectations E_k, then synchronous (no-delay) A2BCD becomes:

y_k = \alpha v_k + (1-\alpha)x_k
E_k x_{k+1} = y_k - n^{-1}\kappa^{-1}\nabla f(y_k)
E_k v_{k+1} = \beta v_k + (1-\beta)y_k - n^{-1}\kappa^{-1/2}\nabla f(y_k)

We find it convenient to define \eta = n\kappa^{1/2}. Inspired by this, we consider the following iteration:

y_k = \alpha v_k + (1-\alpha)x_k \qquad (C.1)
x_{k+1} = y_k - s^{1/2}\kappa^{-1/2}\eta^{-1}\nabla f(y_k) \qquad (C.2)
v_{k+1} = \beta v_k + (1-\beta)y_k - s^{1/2}\eta^{-1}\nabla f(y_k) \qquad (C.3)

for coefficients:

\alpha = \big(1 + s^{-1/2}\eta\big)^{-1}, \qquad \beta = 1 - s^{1/2}\eta^{-1}

s is a discretization scale parameter that will be sent to 0 to obtain an ODE analogue of synchronous A2BCD. We first use equation B.6 to eliminate v_k from equation C.3.
0 = −vk+1 + βvk + (1− β)yk − s1/2η−1∇f(yk) 0 = −α−1yk+1 + α−1(1− α)xk+1
+ β ( α−1yk − α−1(1− α)xk ) + (1− β)yk − s1/2η−1∇f(yk)
(times by α) 0 = −yk+1 + (1− α)xk+1 + β(yk − (1− α)xk) + α(1− β)yk − αs1/2η−1∇f(yk) = −yk+1 + yk(β + α(1− β)) + (1− α)xk+1 − xkβ(1− α)− αs1/2η−1∇f(yk)
We now eliminate xk using equation C.1:
0 = −yk+1 + yk(β + α(1− β)) + (1− α) ( yk − s1/2η−1κ−1/2∇f(yk) ) − ( yk−1 − s1/2η−1κ−1/2∇f(yk−1) ) β(1− α)
− αs1/2η−1∇f(yk) = −yk+1 + yk(β + α(1− β) + (1− α))− β(1− α)yk−1 + s1/2η−1∇f(yk−1)(β − 1)(1− α) − αs1/2η−1∇f(yk) = (yk − yk+1) + β(1− α)(yk − yk−1) + s1/2η−1(∇f(yk−1)(β − 1)(1− α)− α∇f(yk))
Now to derive an ODE, we let yk = Y ( ks1/2 ) . Then ∇f(yk−1) = ∇f(yk) + O ( s1/2 ) . Hence the above becomes:
0 = (yk − yk+1) + β(1− α)(yk − yk−1) + s1/2η−1((β − 1)(1− α)− α)∇f(yk) +O ( s3/2 ) 0 = ( −s1/2Ẏ − 12sŸ ) + β(1− α) ( s1/2Ẏ − 12sŸ ) (C.4)
+ s1/2η−1((β − 1)(1− α)− α)∇f(yk) +O ( s3/2 )
We now look at some of the terms in this equation to find the highest-order dependence on s.
β(1− α) = ( 1− s1/2η−1 )(
1− 1 1 + s−1/2η ) = ( 1− s1/2η−1 ) s−1/2η
1 + s−1/2η
= s −1/2η − 1 s−1/2η + 1
= 1− s 1/2η−1
1 + s1/2η−1
= 1− 2s1/2η−1 +O(s)
We also have:
(β − 1)(1− α)− α = β(1− α)− 1 = −2s1/2η−1 +O(s)
Hence using these facts on equation C.4, we have: 0 = ( −s1/2Ẏ − 12sŸ ) + ( 1− 2s1/2η−1 +O(s) )( s1/2Ẏ − 12sŸ )
+ s1/2η−1 ( −2s1/2η−1 +O(s) ) ∇f(yk) +O ( s3/2 ) 0 = −s1/2Ẏ − 12sŸ + ( s1/2Ẏ − 12sŸ − 2s 1η−1Ẏ +O ( s3/2
)) ( −2s1η−2 +O ( s3/2 )) ∇f(yk) +O ( s3/2
) 0 = −sŸ − 2sη−1Ẏ − 2sη−2∇f(yk) +O ( s3/2
) 0 = −Ÿ − 2η−1Ẏ − 2η−2∇f(yk) +O ( s1/2
) Taking the limit as s→ 0, we obtain the ODE | 1. What is the focus of the paper regarding accelerated asynchronous block coordinate descent?
2. What are the strengths of the proposed approach, particularly in its iteration complexity and lower bound analysis?
3. What are the reviewer's concerns or confusions about the guarantee of the algorithm, specifically regarding the theorem and corollary?
4. How does the reviewer assess the surprise or unexpectedness of the algorithm's performance in certain regimes, such as the delay proportionate to the minimum smoothness parameter?
5. Are there any questions or areas of confusion regarding the ODE analysis motivating the approach or any other aspects of the paper? | Review | Review
The authors design an accelerated, asynchronous block coordinate descent algorithm, which, for sufficiently small delays attains the iteration complexity of the current state of the art algorithm (which is not parallel/asynchronous). The authors prove a lower bound on the iteration complexity in order to show that their algorithm is near optimal. They also analyze an ODE which is the continuous time limit of A2BCD, which they use to motivate their approach.
I am a little bit confused about the guarantee of the algorithm, as it does not agree with my intuition. Perhaps I am simply mistaken in my intuition, but I am concerned that there may need to be additional premises to the Theorem.
My main confusion is with Theorem 1, which says that for $\psi < 3/7$ the iteration complexity is approximately the iteration complexity of NU_ACDM times a factor of $(1 + o(1))/(1-\psi)$, i.e. within that factor of the optimal *non-asynchronous/parallel* algorithm. In particular, since $\psi < 3/7$ this means that the algorithm is within a $7/4 + o(1)$ factor. As mentioned in Corollary 3, this applies for instance when $L_i = L$ for all i and $\tau = \Theta( n^{1/2}\kappa^{-1/4} )$. Therefore, in a regime where $n \approx \kappa$, and $n$ very large, this would indicate that the algorithm would be almost as good as the best synchronous algorithm even for delays $\tau \approx n^{1/4}$. Perhaps I am missing something, but this seems very surprising to me, in particular, I would expect more significant slowdown due to $\tau$.
I am also a little bit surprised that the maximum tolerable delay is proportional to the *minimum* smoothness parameter $\underbar{L}$. It seems like decreasing $\underbar{L}$ should make optimization easier and therefore more delay should be tolerated. Perhaps this is simply an artifact of the analysis. |
ICLR | Title
A2BCD: Asynchronous Acceleration with Optimal Complexity
Abstract
In this paper, we propose the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD). We prove A2BCD converges linearly to a solution of the convex minimization problem at the same rate as NU_ACDM, so long as the maximum delay is not too large. This is the first asynchronous Nesterov-accelerated algorithm that attains any provable speedup. Moreover, we then prove that these algorithms both have optimal complexity. Asynchronous algorithms complete much faster iterations, and A2BCD has optimal complexity. Hence we observe in experiments that A2BCD is the top-performing coordinate descent algorithm, converging up to 4 − 5× faster than NU_ACDM on some data sets in terms of wall-clock time. To motivate our theory and proof techniques, we also derive and analyze a continuous-time analogue of our algorithm and prove it converges at the same rate.
1 Introduction
In this paper, we propose and prove the convergence of the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD), the first asynchronous Nesterov-accelerated algorithm that achieves optimal complexity. No previous attempts have been able to prove a speedup for asynchronous Nesterov acceleration. We aim to find the minimizer x∗ of the unconstrained minimization problem:
min x∈Rd
f(x) = f ( x(1), . . . , x(n) ) (1.1)
where f is σ-strongly convex for σ > 0 with L-Lipschitz gradient ∇f = (∇1f, . . . ,∇nf). x ∈ Rd is composed of coordinate blocks x(1), . . . , x(n). The coordinate blocks of the gradient ∇if are assumed Li-Lipschitz with respect to the ith block. That is, ∀x, h ∈ Rd:
\|\nabla_i f(x + P_i h) - \nabla_i f(x)\| \le L_i\|h\| \qquad (1.2)
where P_i is the projection onto the ith block of R^d. Let \bar{L} \triangleq \tfrac{1}{n}\sum_{i=1}^n L_i be the average block Lipschitz constant. These conditions on f are assumed throughout this whole paper. Our algorithm can also be applied to non-strongly convex objectives (σ = 0) or non-smooth objectives using the black box reduction techniques proposed in Allen-Zhu & Hazan (2016). (∗Corresponding author: [email protected] †[email protected] ‡[email protected]) Hence we consider only
the coordinate smooth, strongly-convex case. Our algorithm can also be applied to the convex regularized ERM problem via the standard dual transformation (see for instance Lin et al. (2014)):
f(x) = \frac{1}{n}\sum_{i=1}^n f_i(\langle a_i, x\rangle) + \frac{\lambda}{2}\|x\|^2 \qquad (1.3)
Hence A2BCD can be used as an asynchronous Nesterov-accelerated finite-sum algorithm. Coordinate descent methods, in which a chosen coordinate block i_k is updated at every iteration, are a popular way to solve equation 1.1. Randomized block coordinate descent (RBCD, Nesterov (2012)) updates a uniformly randomly chosen coordinate block i_k with a gradient-descent-like step: x_{k+1} = x_k - (1/L_{i_k})\nabla_{i_k}f(x_k). The complexity K(\epsilon) of an algorithm is defined as the number of iterations required to decrease the error E(f(x_k) - f(x^*)) to less than \epsilon(f(x_0) - f(x^*)). Randomized coordinate descent has a complexity of K(\epsilon) = O(n(\bar{L}/\sigma)\ln(1/\epsilon)). Using a series of averaging and extrapolation steps, accelerated RBCD Nesterov (2012) improves RBCD's iteration complexity K(\epsilon) to O(n\sqrt{\bar{L}/\sigma}\,\ln(1/\epsilon)), which leads to much faster convergence
when L̄σ is large. This rate is optimal when all Li are equal Lan & Zhou (2015). Finally, using a special probability distribution for the random block index ik, the non-uniform accelerated coordinate descent method Allen-Zhu et al. (2015) (NU_ACDM) can further decrease the complexity to O( ∑n i=1 √ Li/σ ln(1/ )), which can be up to √ n times faster than accelerated RBCD, since some Li can be significantly smaller than L. NU_ACDM is the current state-of-the-art coordinate descent algorithm for solving equation 1.1. Our A2BCD algorithm generalizes NU_ACDM to the asynchronous-parallel case. We solve equation 1.1 with a collection of p computing nodes that continually read a shared-access solution vector y into local memory then compute a block gradient ∇if , which is used to update shared solution vectors (x, y, v). Proving convergence in the asynchronous case requires extensive new technical machinery. A traditional synchronous-parallel implementation is organized into rounds of computation: Every computing node must complete an update in order for the next iteration to begin. However, this synchronization process can be extremely costly, since the lateness of a single node can halt the entire system. This becomes increasingly problematic with scale, as differences in node computing speeds, load balancing, random network delays, and bandwidth constraints mean that a synchronous-parallel solver may spend more time waiting than computing a solution. Computing nodes in an asynchronous solver do not wait for others to complete and share their updates before starting the next iteration. They simply continue to update the solution vectors with the most recent information available, without any central coordination. This eliminates costly idle time, meaning that asynchronous algorithms can be much faster than traditional ones, since they have much faster iterations. For instance, random network delays cause asynchronous algorithms to complete iterations Ω(ln(p)) time faster than synchronous algorithms at scale. This and other factors that influence the speed of iterations are discussed in Hannah & Yin (2017a). However, since many iterations may occur between the time that a node reads the solution vector, and the time that its computed update is applied, effectively the solution vector is being updated with outdated information. At iteration k, the block gradient ∇ikf is computed at a delayed iterate ŷk defined as1:
\hat{y}_k = \big((y_{k-j(k,1)})_{(1)}, \ldots, (y_{k-j(k,n)})_{(n)}\big) \qquad (1.4)
1Every coordinate can be outdated by a different amount without significantly changing the proofs.
for delay parameters j(k, 1), . . . , j(k, n) ∈ N. Here j(k, i) denotes how many iterations out of date coordinate block i is at iteration k. Different blocks may be out of date by different amounts, which is known as an inconsistent read. We assume2 that j(k, i) ≤ τ for some constant τ <∞. Asynchronous algorithms were proposed in Chazan & Miranker (1969) to solve linear systems. General convergence results and theory were developed later in Bertsekas (1983); Bertsekas & Tsitsiklis (1997); Tseng et al. (1990); Luo & Tseng (1992; 1993); Tseng (1991) for partially and totally asynchronous systems, with essentially-cyclic block sequence ik. More recently, there has been renewed interest in asynchronous algorithms with random block coordinate updates. Linear and sublinear convergence results were proven for asynchronous RBCD Liu & Wright (2015); Liu et al. (2014); Avron et al. (2014), and similar was proven for asynchronous SGD Recht et al. (2011), and variance reduction algorithms Reddi et al. (2015); Leblond et al. (2017); Mania et al. (2015); Huo & Huang (2016), and primal-dual algorithms Combettes & Eckstein (2018). There is also a rich body of work on asynchronous SGD. In the distributed setting, Zhou et al. (2018) showed global convergence for stochastic variationally coherent problems even when the delays grow at a polynomial rate. In Lian et al. (2018), an asynchronous decentralized SGD was proposed with the same optimal sublinear convergence rate as SGD and linear speedup with respect to the number of workers. In Liu et al. (2018), authors obtained an asymptotic rate of convergence for asynchronous momentum SGD on streaming PCA, which provides insight into the tradeoff between asynchrony and momentum. In Dutta et al. (2018), authors prove convergence results for asynchronous SGD that highlight the tradeoff between faster iterations and iteration complexity. Further related work is discussed in Section 4.
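Before summarizing contributions, a small sketch may make the inconsistent read ŷ_k of equation 1.4 concrete: each coordinate block of ŷ_k is taken from a possibly different past iterate, with every block at most τ iterations stale. The buffer length, block size, and delays below are illustrative assumptions.

```python
import numpy as np

# Illustrative construction of an inconsistent read (equation 1.4):
# block (i) of y_hat comes from iterate y_{k - j(k, i)}, with j(k, i) <= tau.
rng = np.random.default_rng(2)
n_blocks, blk, tau = 5, 3, 4
history = [rng.standard_normal(n_blocks * blk) for _ in range(tau + 1)]
y_k = history[-1]                             # most recent iterate y_k

j = rng.integers(0, tau + 1, size=n_blocks)   # per-block delays j(k, i)
y_hat = np.empty_like(y_k)
for i in range(n_blocks):
    stale = history[-1 - j[i]]                # iterate y_{k - j(k, i)}
    y_hat[i * blk:(i + 1) * blk] = stale[i * blk:(i + 1) * blk]
# y_hat mixes blocks from different past iterates: an "inconsistent read".
```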
1.1 Summary of Contributions
In this paper, we prove that A2BCD attains NU_ACDM’s state-of-the-art iteration complexity to highest order for solving equation 1.1, so long as delays are not too large (see Section 2). The proof is very different from that of Allen-Zhu et al. (2015), and involves significant technical innovations and complexity related to the analysis of asynchronicity. We also prove that A2BCD (and hence NU_ACDM) has optimal complexity to within a constant factor over a fairly general class of randomized block coordinate descent algorithms (see Section 2.1). This extends results in Lan & Zhou (2015) to asynchronous algorithms with Li not all equal. Since asynchronous algorithms complete faster iterations, and A2BCD has optimal complexity, we expect A2BCD to be faster than all existing coordinate descent algorithms. We confirm with numerical experiments that A2BCD is the current fastest coordinate descent algorithm (see Section 5). We are only aware of one previous and one contemporaneous attempt at proving convergence results for asynchronous Nesterov-accelerated algorithms. However, the first is not accelerated and relies on extreme assumptions, and the second obtains no speedup. Therefore, we claim that our results are the first-ever analysis of asynchronous Nesterov-accelerated algorithms that attains a speedup. Moreover, our speedup is optimal for delays not too large3. The work of Meng et al. claims to obtain square-root speedup for an asynchronous accelerated SVRG. In the case where all component functions have the same Lipschitz constant L, the complexity they obtain reduces to (n+ κ) ln(1/ ) for κ = O ( τn2 ) (Corollary 4.4). Hence authors do not even obtain accelerated rates. Their convergence condition is τ < 14∆1/8 for sparsity parameter ∆. Since the dimension d satisfies d ≥ 1∆ , they require d ≥ 2
^{16}τ^8. So τ = 20 requires dimension d > 10^{15}. 2This condition can be relaxed however by techniques in Hannah & Yin (2017b); Sun et al. (2017); Peng
et al. (2016c); Hannah & Yin (2017a) 3Speedup is defined precisely in Section 2
In a contemporaneous preprint, authors in Fang et al. (2018) skillfully devised accelerated schemes for asynchronous coordinate descent and SVRG using momentum compensation techniques. Although their complexity results have the improved √ κ dependence on the condition number, they do not prove any speedup. Their complexity is τ times larger than the serial complexity. Since τ is necessarily greater than p, their results imply that adding more computing nodes will increase running time. The authors claim that they can extend their results to linear speedup for asynchronous, accelerated SVRG under sparsity assumptions. And while we think this is quite likely, they have not yet provided proof. We also derive a second-order ordinary differential equation (ODE), which is the continuous-time limit of A2BCD (see Section 3). This extends the ODE found in Su et al. (2014) to an asynchronous accelerated algorithm minimizing a strongly convex function. We prove this ODE linearly converges to a solution with the same rate as A2BCD’s, without needing to resort to the restarting techniques. The ODE analysis motivates and clarifies the our proof strategy of the main result.
2 Main results
We should consider functions f where it is efficient to calculate blocks of the gradient, so that coordinate-wise parallelization is efficient. That is, the function should be “coordinate friendly” Peng et al. (2016b). This is a very wide class that includes regularized linear regression, logistic regression, etc. The L2-regularized empirical risk minimization problem is not coordinate friendly in general, however the equivalent dual problem is, and hence can be solved efficiently by A2BCD (see Lin et al. (2014), and Section 5). To calculate the k + 1’th iteration of the algorithm from iteration k, we use only one block of the gradient ∇ikf . We assume that the delays j(k, i) are independent of the block sequence ik, but otherwise arbitrary (This is a standard assumption found in the vast majority of papers, but can be relaxed Sun et al. (2017); Leblond et al. (2017); Cannelli et al. (2017)). Definition 1. Asynchronous Accelerated Randomized Block Coordinate Descent (A2BCD). Let f be σ-strongly convex, and let its gradient ∇f be L-Lipschitz with block coordinate Lipschitz parameters Li as in equation 1.2. We define the condition number κ = L/σ, and let L = mini Li. Using these parameters, we sample ik in an independent and identically distributed (IID) fashion according to
P[i_k = j] = L_j^{1/2}/S, \quad j \in \{1, \ldots, n\}, \qquad \text{for } S = \sum_{i=1}^n L_i^{1/2}. \qquad (2.1)
Let τ be the maximum asynchronous delay. We define the dimensionless asynchronicity parameter ψ, which is proportional to τ , and quantifies how strongly asynchronicity will affect convergence:
ψ = 9 ( S−1/2L−1/2L3/4κ1/4 ) × τ (2.2)
We use the above system parameters and ψ to define the coefficients α, β, and γ via eqs. (2.3) to (2.5). Hence A2BCD algorithm is defined via the iterations: eqs. (2.6) to (2.8).
\alpha \triangleq \big(1 + (1+\psi)\sigma^{-1/2}S\big)^{-1} \qquad (2.3)
\beta \triangleq 1 - (1-\psi)\sigma^{1/2}S^{-1} \qquad (2.4)
h \triangleq 1 - \tfrac{1}{2}\sigma^{1/2}L^{-1/2}\psi \qquad (2.5)

y_k = \alpha v_k + (1-\alpha)x_k \qquad (2.6)
x_{k+1} = y_k - hL_{i_k}^{-1}\nabla_{i_k}f(\hat{y}_k) \qquad (2.7)
v_{k+1} = \beta v_k + (1-\beta)y_k - \sigma^{-1/2}L_{i_k}^{-1/2}\nabla_{i_k}f(\hat{y}_k) \qquad (2.8)
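Before discussing implementation details, here is a minimal serial (delay-free) sketch of the update equations 2.6–2.8 on a toy problem; the quadratic test function, block sizes, and the ψ = 0 coefficient values are placeholders, and the asynchronous delayed read ŷ_k is not modeled.

```python
import numpy as np

# Minimal serial sketch of equations 2.6-2.8 on a toy strongly convex quadratic.
rng = np.random.default_rng(3)
n_blocks, blk = 10, 5
m = n_blocks * blk
Q = rng.standard_normal((m, m))
Q = Q.T @ Q / m + 0.1 * np.eye(m)                      # f(z) = 0.5 z^T Q z, minimizer 0
grad = lambda z: Q @ z
L_i = np.array([np.linalg.norm(Q[i*blk:(i+1)*blk, i*blk:(i+1)*blk], 2) for i in range(n_blocks)])
sigma = np.linalg.eigvalsh(Q).min()
S = np.sqrt(L_i).sum()
probs = np.sqrt(L_i) / S                               # nonuniform sampling, equation 2.1
alpha = 1 / (1 + S / np.sqrt(sigma))                   # psi = 0 versions of equations 2.3-2.5
beta, h = 1 - np.sqrt(sigma) / S, 1.0

x = rng.standard_normal(m); v = x.copy()
for k in range(3000):
    y = alpha * v + (1 - alpha) * x                    # equation 2.6
    i = rng.choice(n_blocks, p=probs)
    b = slice(i * blk, (i + 1) * blk)
    g_b = grad(y)[b]                                   # block gradient at y (full gradient sliced, for clarity)
    x_new = y.copy(); x_new[b] -= h / L_i[i] * g_b     # equation 2.7
    v_new = beta * v + (1 - beta) * y
    v_new[b] -= g_b / np.sqrt(sigma * L_i[i])          # equation 2.8
    x, v = x_new, v_new
print(0.5 * x @ Q @ x)                                 # objective value, should be near 0
```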
See Section A for a discussion of why it is practical and natural to have the gradient ∇ikf(ŷk) to be outdated, while the actual variables xk, yk, vk can be efficiently kept up to date. Essentially it is
because most of the computation lies in computing ∇ikf(ŷk). After this is computed, xk, yk, vk can be updated more-or-less atomically with minimal overhead, meaning that they will always be up to date. However our main results still hold for more general asynchronicity. A natural quantity to consider in asynchronous convergence analysis is the asynchronicity error, a powerful tool for analyzing asynchronous algorithms used in several recent works Peng et al. (2016a); Hannah & Yin (2017b); Sun et al. (2017); Hannah & Yin (2017a). We adapt it and use a weighted sum of the history of the algorithm with decreasing weight as you go further back in time. Definition 2. Asynchronicity error. Using the above parameters, we define:
A_k = \sum_{j=1}^{\tau} c_j\|y_{k+1-j} - y_{k-j}\|^2 \qquad (2.9) \qquad \text{for } c_i = \frac{6}{S}L^{1/2}\kappa^{3/2}\tau\psi^{-1}\sum_{j=i}^{\tau}\big(1 - \sigma^{1/2}S^{-1}\big)^{i-j-1}. \qquad (2.10)

Here we define y_k = y_0 for all k < 0. The determination of the coefficients c_i is in general a very involved process of trial and error, intuition, and balancing competing requirements. The algorithm doesn't depend on the coefficients, however; they are only an analytical tool. We define E_k[X] as the expectation of X conditioned on (x_0, . . . , x_k), (y_0, . . . , y_k), (v_0, . . . , v_k), and (i_0, . . . , i_{k-1}). To simplify notation4, we assume that the minimizer x^* = 0, and that f(x^*) = 0 with no loss in generality. We define the Lyapunov function:

\rho_k = \|v_k\|^2 + A_k + cf(x_k) \qquad (2.11) \qquad \text{for } c = 2\sigma^{-1/2}S^{-1}\big(\beta\alpha^{-1}(1-\alpha) + 1\big). \qquad (2.12)
We now present this paper's first main contribution. Theorem 1. Let f be σ-strongly convex with a gradient ∇f that is L-Lipschitz with block Lipschitz constants {L_i}_{i=1}^n. Let ψ defined in equation 2.2 satisfy \psi \le \tfrac{3}{7} (i.e. \tau \le \tfrac{1}{21}S^{1/2}L^{1/2}L^{-3/4}\kappa^{-1/4}). Then for A2BCD we have:

E_k[\rho_{k+1}] \le \big(1 - (1-\psi)\sigma^{1/2}S^{-1}\big)\rho_k.

To obtain E[\rho_k] \le \epsilon\rho_0, it takes K_{A2BCD}(\epsilon) iterations for:

K_{A2BCD}(\epsilon) = \frac{\big(\sigma^{-1/2}S + O(1)\big)\ln(1/\epsilon)}{1-\psi}, \qquad (2.13)

where O(·) is asymptotic with respect to \sigma^{-1/2}S \to \infty, and uniformly bounded.
This result is proven in Section B. A stronger result for L_i ≡ L can be proven, but this adds to the complexity of the proof; see Section E for a discussion. In practice, asynchronous algorithms are far more resilient to delays than the theory predicts: τ can be much larger without negatively affecting the convergence rate and complexity. This is perhaps because we are limited to a worst-case analysis, which is not representative of the average-case performance. Allen-Zhu et al. (2015) (Theorem 5.1) shows a linear convergence rate of 1 - 2/\big(1 + 2\sigma^{-1/2}S\big) for NU_ACDM, which leads to the corresponding iteration complexity of K_{NU\_ACDM}(\epsilon) = \big(\sigma^{-1/2}S + O(1)\big)\ln(1/\epsilon). Hence, we have:

K_{A2BCD}(\epsilon) = \frac{1}{1-\psi}(1 + o(1))K_{NU\_ACDM}(\epsilon)
4We can assume x∗ = 0 with no loss in generality since we may translate the coordinate system so that x∗ is at the origin. We can assume f(x∗) = 0 with no loss in generality, since we can replace f(x) with f(x)−f(x∗). Without this assumption, the Lyapunov function simply becomes: ‖vk − x∗‖2 +Ak + c(f(xk)− f(x∗)).
When 0 ≤ ψ ≪ 1, or equivalently, when τ ≪ S^{1/2}L^{1/2}L^{-3/4}κ^{-1/4}, the complexity of A2BCD asymptotically matches that of NU_ACDM. Hence A2BCD combines state-of-the-art complexity with the faster iterations and superior scaling that asynchronous iterations allow. We now present some special cases of the conditions on the maximum delay τ required for good complexity. Corollary 3. Let the conditions of Theorem 1 hold. If all coordinate-wise Lipschitz constants L_i are equal (i.e. L_i = L_1, ∀i), then we have K_{A2BCD}(ε) ∼ K_{NU_ACDM}(ε) when τ ≪ n^{1/2}κ^{-1/4}(L_1/L)^{3/4}. If we further assume all coordinate-wise Lipschitz constants L_i equal L, then K_{A2BCD}(ε) ∼ K_{NU_ACDM}(ε) = K_{ACDM}(ε) when τ ≪ n^{1/2}κ^{-1/4}. Remark 1. Reduction to synchronous case. Notice that when τ = 0, we have ψ = 0, c_i ≡ 0 and hence A_k ≡ 0. Thus A2BCD becomes equivalent to NU_ACDM, the Lyapunov function5 ρ_k becomes equivalent to one found in Allen-Zhu et al. (2015) (pg. 9), and Theorem 1 yields the same complexity.
The maximum delay τ will be a function τ(p) of p, the number of computing nodes. Clearly τ ≥ p, and experimentally it has been observed that τ = O(p) Leblond et al. (2017). Let the gradient complexity K(ε, τ) be the number of gradients required for an asynchronous algorithm with maximum delay τ to attain suboptimality ε. τ(1) = 0, since with only 1 computing node there can be no delay; this corresponds to the serial complexity. We say that an asynchronous algorithm attains a complexity speedup if pK(ε, τ(1))/K(ε, τ(p)) is increasing in p. We say it attains linear complexity speedup if pK(ε, τ(1))/K(ε, τ(p)) = Ω(p). In Theorem 1, we obtain a linear complexity speedup (for p not too large), whereas no other prior attempt can attain even a complexity speedup with Nesterov acceleration. In the ideal scenario where the rate at which gradients are calculated increases linearly with p, algorithms that have linear complexity speedup will have a linear decrease in wall-clock time. However in practice, when the number of computing nodes is sufficiently large, the rate at which gradients are calculated will no longer be linear. This is due to many parallel overhead factors, including too many nodes sharing the same memory read/write bandwidth, and network bandwidth. However, we note that even with these issues, we obtain much faster convergence than the synchronous counterpart experimentally.
2.1 Optimality
NU_ACDM and hence A2BCD are in fact optimal in some sense. That is, among a fairly wide class of coordinate descent algorithms A, they have the best-possible worst-case complexity to highest order. We extend the work in Lan & Zhou (2015) to encompass algorithms that are asynchronous and have unequal L_i. For a subset S ⊂ R^d, we let IC(S) (inconsistent read) denote the set of vectors v whose components are a combination of components of vectors in the set S. That is, v = (v_{1,1}, v_{2,2}, . . . , v_{d,d}) for some vectors v_1, v_2, . . . , v_d ∈ S. Here v_{i,j} denotes the jth component of vector v_i. Definition 4. Asynchronous Randomized Incremental Algorithms. Consider the unconstrained minimization problem equation 1.1 for a function f satisfying the conditions stated in Section 1. We define the class A as the algorithms G on this problem such that: 1. For each parameter set (σ, L_1, . . . , L_n, n), G has an associated IID random variable i_k with some fixed distribution P[i_k] = p_i with ∑_{i=1}^n p_i = 1.
2. The iterates of A satisfy: xk+1 ∈ span{IC(Xk),∇i0f(IC(X0)),∇i1f(IC(X1)), . . . ,∇ikf(IC(Xk))}
This is a rather general class: xk+1 can be constructed from any inconsistent reading of past iterates IC(Xk), and any past gradient of an inconsistent read ∇ijf(IC(Xj)).
5Their Lyapunov function is in fact a generalization of the one found in Nesterov (2012).
Theorem 2. For any algorithm G ∈ A that solves eq. (1.1), and parameter set (σ, L1, . . . , Ln, n), there is a dimension d, a corresponding function f on Rd, and a starting point x0, such that
E\|x_k - x^*\|^2/\|x_0 - x^*\|^2 \ge \frac{1}{2}\Big(1 - 4\Big/\Big(\sum_{j=1}^n\sqrt{L_j/\sigma} + 2n\Big)\Big)^k
Hence A has a complexity lower bound: K(\epsilon) \ge \frac{1}{4}(1 + o(1))\Big(\sum_{j=1}^n\sqrt{L_j/\sigma} + 2n\Big)\ln(1/(2\epsilon))
Our proof in Section D follows very similar lines to Lan & Zhou (2015); Nesterov (2013).
3 ODE Analysis
In this section we present and analyze an ODE which is the continuous-time limit of A2BCD. This ODE is a strongly convex and asynchronous version of the ODE found in Su et al. (2014). For simplicity, assume L_i = L, ∀i. We rescale f (i.e., we replace f(x) with (1/σ)f) so that σ = 1, and hence κ = L/σ = L. Taking the discrete limit of synchronous A2BCD (i.e. accelerated RBCD), we can derive the following ODE6 (see Section C.1):
Ÿ + 2n−1κ−1/2Ẏ + 2n−2κ−1∇f(Y ) = 0 (3.1)
We define the parameter \eta \triangleq n\kappa^{1/2}, and the energy E(t) = e^{n^{-1}\kappa^{-1/2}t}\big(f(Y) + \tfrac{1}{4}\|Y + \eta\dot{Y}\|^2\big). This is very similar to the Lyapunov function discussed in equation 2.11, with \tfrac{1}{4}\|Y(t) + \eta\dot{Y}(t)\|^2 fulfilling the role of \|v_k\|^2, and A_k = 0 (since there is no delay yet). Much like the traditional analysis in the proof of Theorem 1, we can derive a linear convergence result with a similar rate. See Section C.2. Lemma 5. If Y satisfies equation 3.1, the energy satisfies E'(t) \le 0, E(t) \le E(0), and hence:

f(Y(t)) + \tfrac{1}{4}\big\|Y(t) + n\kappa^{1/2}\dot{Y}(t)\big\|^2 \le \Big(f(Y(0)) + \tfrac{1}{4}\big\|Y(0) + \eta\dot{Y}(0)\big\|^2\Big)e^{-n^{-1}\kappa^{-1/2}t}
We may also analyze an asynchronous version of equation 3.1 to motivate the proof of our main theorem. Here Ŷ (t) is a delayed version of Y (t) with the delay bounded by τ .
Ÿ + 2n−1κ−1/2Ẏ + 2n−2κ−1∇f ( Ŷ ) = 0, (3.2)
Unfortunately, this energy satisfies (see equations C.4 and C.7):

e^{-\eta^{-1}t}E'(t) \le -\tfrac{1}{8}\eta\|\dot{Y}\|^2 + 3\kappa^2\eta^{-1}\tau D(t), \qquad \text{for } D(t) \triangleq \int_{t-\tau}^{t}\|\dot{Y}(s)\|^2\,ds.

Hence this energy E(t) may not be decreasing in general. But we may add a continuous-time asynchronicity error (see Sun et al. (2017)), much like in Definition 2, to create a decreasing energy. Let c_0 \ge 0 and r > 0 be arbitrary constants that will be set later. Define:

A(t) = \int_{t-\tau}^{t} c(t-s)\|\dot{Y}(s)\|^2\,ds, \qquad \text{for } c(t) \triangleq c_0\Big(e^{-rt} + \frac{e^{-r\tau}}{1 - e^{-r\tau}}\big(e^{-rt} - 1\big)\Big).
Lemma 6. When r\tau \le \tfrac{1}{2}, the asynchronicity error A(t) satisfies:

e^{-rt}\frac{d}{dt}\big(e^{rt}A(t)\big) \le c_0\|\dot{Y}(t)\|^2 - \tfrac{1}{2}\tau^{-1}c_0 D(t).

6For compactness, we have omitted the (t) from time-varying functions Y(t), \dot{Y}(t), \nabla Y(t), etc.
See Section C.3 for the proof. Adding this error to the Lyapunov function serves a similar purpose in the continuous-time case as in the proof of Theorem 1 (see Lemma 11). It allows us to negate \tfrac{1}{2}\tau^{-1}c_0 units of D(t) for the cost of creating c_0 units of \|\dot{Y}(t)\|^2. This restores monotonicity. Theorem 3. Let c_0 = 6\kappa^2\eta^{-1}\tau^2, and r = \eta^{-1}. If \tau \le \tfrac{1}{\sqrt{48}}n\kappa^{-1/2} then we have:

e^{-\eta^{-1}t}\frac{d}{dt}\big(E(t) + e^{\eta^{-1}t}A(t)\big) \le 0. \qquad (3.3)

Hence f(Y(t)) converges linearly to f(x^*) with rate O\big(\exp\big(-t/(n\kappa^{1/2})\big)\big). Notice how this convergence condition is similar to Corollary 3, but a little looser. The convergence condition in Theorem 1 can actually be improved to approximately match this (see Section E).
Proof.

e^{-\eta^{-1}t}\frac{d}{dt}\big(E(t) + e^{\eta^{-1}t}A(t)\big) \le \big(c_0 - \tfrac{1}{8}\eta\big)\|\dot{Y}\|^2 + \big(3\kappa^2\eta^{-1}\tau - \tfrac{1}{2}\tau^{-1}c_0\big)D(t) = 6\eta^{-1}\kappa^2\big(\tau^2 - \tfrac{1}{48}n^2\kappa^{-1}\big)\|\dot{Y}\|^2 \le 0
The preceding should hopefully elucidate the logic and general strategy of the proof of Theorem 1.
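As a quick sanity check of this continuous-time picture, the sketch below numerically integrates the synchronous ODE of equation 3.1 for a toy strongly convex quadratic and compares the objective decay with the rate predicted by Lemma 5; the test function, the values of n and κ, and the step size are illustrative assumptions.

```python
import numpy as np

# Forward integration of equation 3.1 for f(Y) = 0.5*kappa*Y_1^2 + 0.5*Y_2^2,
# so that sigma = 1 and L = kappa, matching the rescaling used in this section.
n, kappa, dt, T = 4.0, 25.0, 1e-3, 200.0
grad = lambda Y: np.array([kappa * Y[0], Y[1]])
f = lambda Y: 0.5 * kappa * Y[0]**2 + 0.5 * Y[1]**2

Y = np.array([1.0, 1.0]); V = np.zeros(2)        # V = dY/dt
for _ in range(int(T / dt)):
    acc = -2.0 / (n * np.sqrt(kappa)) * V - 2.0 / (n**2 * kappa) * grad(Y)
    V += dt * acc
    Y += dt * V
# Lemma 5 predicts f(Y(t)) = O(exp(-t / (n * sqrt(kappa)))).
print(f(Y), np.exp(-T / (n * np.sqrt(kappa))))
```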
4 Related work
We now discuss related work that was not addressed in Section 1. Nesterov acceleration is a method for improving an algorithm’s iteration complexity’s dependence the condition number κ. Nesterov-accelerated methods have been proposed and discovered in many settings Nesterov (1983); Tseng (2008); Nesterov (2012); Lin et al. (2014); Lu & Xiao (2014); Shalev-Shwartz & Zhang (2016); Allen-Zhu (2017), including for coordinate descent algorithms (algorithms that use 1 gradient block ∇if or minimize with respect to 1 coordinate block per iteration), and incremental algorithms (algorithms for finite sum problems 1n ∑n i=1 fi(x) that use 1 function gradient ∇fi(x) per iteration). Such algorithms can often be augmented to solve composite minimization problems (minimization for objective of the form f(x) + g(x), especially for nonsomooth g), or include constraints. In Peng et al. (2016a), authors proposed and analyzed an asynchronous fixed-point algorithm called ARock, that takes proximal algorithms, forward-backward, ADMM, etc. as special cases. Work has also been done on asynchronous algorithms for finite sums in the operator setting Davis (2016); Johnstone & Eckstein (2018). In Hannah & Yin (2017b); Sun et al. (2017); Peng et al. (2016c); Cannelli et al. (2017) showed that many of the assumptions used in prior work (such as bounded delay τ <∞) were unrealistic and unnecessary in general. In Hannah & Yin (2017a) the authors showed that asynchronous iterations will complete far more iterations per second, and that a wide class of asynchronous algorithms, including asynchronous RBCD, have the same iteration complexity as their synchronous counterparts. Hence certain asynchronous algorithms can be expected to significantly outperform traditional ones. In Xiao et al. (2017) authors propose a novel asynchronous catalyst-accelerated Lin et al. (2015) primal-dual algorithmic framework to solve regularized ERM problems. They structure the parallel updates so that the data that an update depends on is up to date (though the rest of the data may not be). However catalyst acceleration incurs a log(κ) penalty over Nesterov acceleration in general. In Allen-Zhu (2017), the author argues that the inner iterations of catalyst acceleration are hard to tune, making it less practical than Nesterov acceleration.
5 Numerical experiments
To investigate the performance of A2BCD, we solve the ridge regression problem. Consider the following primal and corresponding dual objective (see for instance Lin et al. (2014)):
\min_{w\in\mathbb{R}^d} P(w) = \frac{1}{2n}\|A^Tw - l\|^2 + \frac{\lambda}{2}\|w\|^2, \qquad \min_{\alpha\in\mathbb{R}^n} D(\alpha) = \frac{1}{2d^2\lambda}\|A\alpha\|^2 + \frac{1}{2d}\|\alpha + l\|^2 \qquad (5.1)
where A ∈ R^{d×n} is a matrix of n samples and d features, and l is a label vector. We let A = [A_1, . . . , A_m], where the A_i are the column blocks of A. We compare A2BCD (asynchronous, accelerated), synchronous NU_ACDM (synchronous, accelerated), and asynchronous RBCD (asynchronous, non-accelerated). Nodes randomly select a coordinate block according to equation 2.1, calculate the corresponding block gradient, and use it to apply an update to the shared solution vectors. Synchronous NU_ACDM is implemented in a batch fashion, with batch size p (1 block per computing node). Nodes in the synchronous NU_ACDM implementation must wait until all nodes apply their computed gradients before they can start the next iteration, whereas the asynchronous algorithms simply compute with the most up-to-date information available.
We use the datasets w1a (47272 samples, 300 features), wxa, which combines the data from w1a to w8a (293201 samples, 300 features), and aloi (108000 samples, 128 features) from LIBSVM Chang & Lin (2011). The algorithm is implemented in a multi-threaded fashion using C++11 and the GNU Scientific Library with a shared memory architecture. We use 40 threads on two 2.5GHz 10-core Intel Xeon E5-2670v2 processors. See Section A.1 for a discussion of parameter tuning and estimation. The parameters for each algorithm are tuned to give the fastest performance, so that a fair comparison is possible.
A critical ingredient in the efficient implementation of A2BCD and NU_ACDM for this problem is the efficient update scheme discussed in Lee & Sidford (2013b;a). In linear regression applications such as this, it is essential to be able to efficiently maintain or recover Ay. This is because calculating block gradients requires the vector A_i^T Ay, and without an efficient way to recover Ay, block gradient evaluations are essentially 50% as expensive as full-gradient calculations. Unfortunately, every accelerated iteration results in dense updates to y_k because of the averaging step in equation 2.6, so Ay must be recalculated from scratch. However, Lee & Sidford (2013a) introduces a linear transformation that allows for an equivalent iteration that results in sparse updates to new iteration variables p and q. The original purpose of this transformation was to ensure that the averaging steps (e.g. equation 2.6) do not dominate the computational cost for sparse problems. However, we find a more important secondary use which applies to both sparse and dense problems: since the updates to p and q are sparse coordinate-block updates, the vectors Ap and Aq can be efficiently maintained, and therefore block gradients can be efficiently calculated. The specifics of this efficient implementation are discussed in Section A.2.
In Table 5, we plot the sub-optimality vs. time for decreasing values of λ, which corresponds to increasingly large condition numbers κ. When κ is small, acceleration doesn't result in a significantly better convergence rate, and hence A2BCD and async-RBCD both outperform sync-NU_ACDM, since they complete faster iterations at similar complexity. Acceleration for low κ has unnecessary overhead, which means async-RBCD can be quite competitive. When κ becomes large, async-RBCD is no longer competitive, since it has a poor convergence rate. We observe that A2BCD and sync-NU_ACDM have essentially the same convergence rate, but A2BCD is up to 4−5× faster than sync-NU_ACDM because it completes much faster iterations.
We observe this advantage despite the fact that we are in an ideal environment for synchronous computation: A small, homogeneous, high-bandwidth, low-latency cluster. In large-scale heterogeneous systems with greater synchronization overhead, bandwidth constraints, and latency, we expect A2BCD’s advantage to be much larger.
6 Acknowledgement
The authors would like to thank the reviewers for their helpful comments. The research presented in this paper was supported in part by AFOSR MURI FA9550-18-10502, NSF DMS-1720237, and ONR N0001417121.
A Efficient Implementation
An efficient implementation will have coordinate blocks of size greater than 1. This is to ensure the efficiency of linear algebra subroutines. Because of this, the bulk of the computation for each iteration is computing ∇ikf(ŷk), and not the averaging steps. Hence the computing nodes only need a local copy of yk in order to do the bulk of an iteration's computation. Given this gradient ∇ikf(ŷk), updating yk and vk is extremely fast (xk can simply be eliminated). Hence it is natural to simply store yk and vk centrally, and update them when the delayed gradients ∇ikf(ŷk) arrive. Given the above, a write mutex over (y, v) has minuscule overhead (which we confirm with experiments), and makes the labeling of iterates unambiguous. This also ensures that vk and yk are always up to date when (y, v) are being updated, whereas the gradient ∇ikf(ŷk) may at the same time be out of date, since it has been calculated with an outdated version of yk. However, a write mutex is not necessary in practice, and does not appear to affect convergence rates or computation time. It is also possible to prove convergence under more general asynchronicity.
A.1 Parameter selection and tuning
When defining the coefficients, σ may be underestimated, and L, L_1, . . . , L_n may be overestimated if exact values are unavailable. Notice that xk can be eliminated from the above iteration, and the block gradient ∇ikf(ŷk) only needs to be calculated once per iteration. A larger (or overestimated) maximum delay τ will cause a larger asynchronicity parameter ψ, which leads to more conservative step sizes to compensate. To estimate ψ, one can first perform a dry run with all coefficients set to 0 to estimate τ. All function parameters can be calculated exactly for this problem in terms of the data matrix and λ. We can then use these parameters and this τ to calculate ψ. ψ and τ merely change the coefficients, and do not change the execution patterns of the processors; hence their specification doesn't affect the observed delay. Through simple tuning, we found that ψ = 0.25 resulted in good performance. In tuning for general problems, there are theoretical reasons why it is difficult to attain acceleration without some prior knowledge of σ, the strong convexity modulus Arjevani (2017). Ideally σ is pre-specified, for instance in a regularization term. If the Lipschitz constants Li cannot be calculated directly (which is rarely the case for the classic dual problem of empirical risk minimization objectives), the line-search method discussed in Roux et al. (2012) Section 4 can be used.
A.2 Sparse update formulation
As mentioned in Section 5, the authors in Lee & Sidford (2013a) proposed a linear transformation of an accelerated RBCD scheme that results in sparse coordinate updates. Our proposed algorithm can be given a similar efficient implementation. We may eliminate x_k from A2BCD and derive the equivalent iteration below:

\begin{pmatrix} y_{k+1} \\ v_{k+1} \end{pmatrix}
= \begin{pmatrix} 1-\alpha\beta & \alpha\beta \\ 1-\beta & \beta \end{pmatrix}
\begin{pmatrix} y_k \\ v_k \end{pmatrix}
- \begin{pmatrix} \big(\alpha\sigma^{-1/2}L_{i_k}^{-1/2} + h(1-\alpha)L_{i_k}^{-1}\big)\nabla_{i_k}f(\hat{y}_k) \\ \sigma^{-1/2}L_{i_k}^{-1/2}\nabla_{i_k}f(\hat{y}_k) \end{pmatrix}
\triangleq C\begin{pmatrix} y_k \\ v_k \end{pmatrix} - Q_k

where C and Q_k are defined in the obvious way. Hence we define auxiliary variables p_k, q_k via:

\begin{pmatrix} y_k \\ v_k \end{pmatrix} = C^k \begin{pmatrix} p_k \\ q_k \end{pmatrix} \qquad (A.1)

These clearly follow the iteration:

\begin{pmatrix} p_{k+1} \\ q_{k+1} \end{pmatrix} = \begin{pmatrix} p_k \\ q_k \end{pmatrix} - C^{-(k+1)}Q_k \qquad (A.2)
Since the vector Qk is sparse, we can evolve variables pk, and qk in a sparse manner, and recover the original iteration variables at the end of the algorithm via A.1. The gradient of the dual function is given by:
\nabla D(y) = \frac{1}{\lambda d}\Big(\frac{1}{d}A^T A y + \lambda(y + l)\Big)

As mentioned before, it is necessary to maintain or recover Ay_k to calculate block gradients. Since Ay_k can be recovered via the linear relation in equation A.1, and the gradient is an affine function, we maintain the auxiliary vectors Ap_k and Aq_k instead. Hence we propose the efficient implementation in Algorithm 1, which we used to generate the results in Table 5. We also note that it can improve performance to periodically recover v_k and y_k, reset the values of p_k, q_k, and C to v_k, y_k, and I respectively, and restart the scheme (which can be done cheaply in time O(d)). We let B ∈ R^{2×2} represent C^k, and b represent B^{-1}. ⊗ is the Kronecker product. Each computing node has local outdated versions of p, q, Ap, Aq, which we denote p̂, q̂, Âp, Âq respectively. We also find it convenient to define:

\begin{bmatrix} D_{k1} \\ D_{k2} \end{bmatrix} = \begin{bmatrix} \alpha\sigma^{-1/2}L_{i_k}^{-1/2} + h(1-\alpha)L_{i_k}^{-1} \\ \sigma^{-1/2}L_{i_k}^{-1/2} \end{bmatrix} \qquad (A.3)
Algorithm 1 Shared-memory implementation of A2BCD
1: Inputs: Function parameters A, λ, L, {L_i}_{i=1}^n, n, d. Delay τ (obtained in a dry run). Starting vectors y, v.
2: Shared data: Solution vectors p, q; auxiliary vectors Ap, Aq; sparsifying matrix B.
3: Node local data: Solution vectors p̂, q̂; auxiliary vectors Âp, Âq; sparsifying matrix B̂.
4: Calculate parameters ψ, α, β, h via Definition 1. Set k = 0.
5: Initializations: p ← y, q ← v, Ap ← Ay, Aq ← Av, B ← I.
6: while not converged, each computing node asynchronously do
7:   Randomly select block i via equation 2.1.
8:   Read shared data into local memory: p̂ ← p, q̂ ← q, Âp ← Ap, Âq ← Aq, B̂ ← B.
9:   Compute block gradient: ∇_i f(ŷ) = \frac{1}{nλ}\big(\frac{1}{n}A_i^T(B̂_{1,1}Âp + B̂_{1,2}Âq) + λ(B̂_{1,1}p̂ + B̂_{1,2}q̂)\big)
10:  Compute the quantity g_i = A_i^T ∇_i f(ŷ)
     Shared memory updates:
11:  Update B ← \begin{bmatrix} 1-αβ & αβ \\ 1-β & β \end{bmatrix} B; calculate the inverse b ← B^{-1}.
12:  \begin{bmatrix} p \\ q \end{bmatrix} -= b\begin{bmatrix} D_{k1} \\ D_{k2} \end{bmatrix} ⊗ ∇_i f(ŷ),  \begin{bmatrix} Ap \\ Aq \end{bmatrix} -= b\begin{bmatrix} D_{k1} \\ D_{k2} \end{bmatrix} ⊗ g_i
13:  Increase iteration count: k ← k + 1
14: end while
15: Recover original iteration variables: \begin{bmatrix} y \\ v \end{bmatrix} ← B\begin{bmatrix} p \\ q \end{bmatrix}. Output y.
B Proof of the main result
We first recall a couple of inequalities for convex functions. Lemma 7. Let f be σ-strongly convex with L-Lipschitz gradient. Then we have:
f(y) \le f(x) + \langle y - x, \nabla f(x)\rangle + \tfrac{L}{2}\|y - x\|^2, \quad \forall x, y \qquad (B.1)
f(y) \ge f(x) + \langle y - x, \nabla f(x)\rangle + \tfrac{\sigma}{2}\|y - x\|^2, \quad \forall x, y \qquad (B.2)
We also find it convenient to define the norm:
\|s\|_* = \sqrt{\sum_{i=1}^n L_i^{-1/2}\|s_i\|^2} \qquad (B.3)
B.1 Starting point
First notice that using the definition equation 2.8 of vk+1 we have:
‖v_{k+1}‖² = ‖βv_k + (1−β)y_k‖² − 2σ^{−1/2}L_{i_k}^{−1/2}〈βv_k + (1−β)y_k, ∇_{i_k}f(ŷ_k)〉 + σ^{−1}L_{i_k}^{−1}‖∇_{i_k}f(ŷ_k)‖²
E_k‖v_{k+1}‖² = ‖βv_k + (1−β)y_k‖² − 2σ^{−1/2}S^{−1}〈βv_k + (1−β)y_k, ∇f(ŷ_k)〉 + S^{−1}σ^{−1} Σ_{i=1}^n L_i^{−1/2}‖∇_i f(ŷ_k)‖²   (B.4)
where we label the three terms on the right-hand side A, B, and C, respectively.
We have the following general identity:
‖βx + (1−β)y‖² = β‖x‖² + (1−β)‖y‖² − β(1−β)‖x − y‖², ∀x, y   (B.5)
It can also easily be verified from equation 2.6 that we have:
v_k = y_k + α^{−1}(1−α)(y_k − x_k)   (B.6)
Using equation B.5 on term A, equation B.6 on term B, and recalling the definition equation B.3 on term C, we have from equation B.4:
E_k‖v_{k+1}‖² = β‖v_k‖² + (1−β)‖y_k‖² − β(1−β)‖v_k − y_k‖² + S^{−1}σ^{−1}‖∇f(ŷ_k)‖²_*   (B.7)
  − 2σ^{−1/2}S^{−1}βα^{−1}(1−α)〈y_k − x_k, ∇f(ŷ_k)〉 − 2σ^{−1/2}S^{−1}〈y_k, ∇f(ŷ_k)〉
This inequality is our starting point. We analyze the terms on the second line in the next section.
B.2 The Cross Term
To analyze these terms, we need a small lemma. This lemma is fundamental in allowing us to deal with asynchronicity. Lemma 8. Let χ,A > 0. Let the delay be bounded by τ . Then:
A‖ŷ_k − y_k‖ ≤ (1/2)χ^{−1}A² + (1/2)χτ Σ_{j=1}^τ ‖y_{k+1−j} − y_{k−j}‖²
Proof. See Hannah & Yin (2017a).
Lemma 9. We have:
−〈∇f(ŷ_k), y_k〉 ≤ −f(y_k) − (1/2)σ(1−ψ)‖y_k‖² + (1/2)Lκψ^{−1}τ Σ_{j=1}^τ ‖y_{k+1−j} − y_{k−j}‖²   (B.8)
〈∇f(ŷ_k), x_k − y_k〉 ≤ f(x_k) − f(y_k) + (1/2)Lα(1−α)^{−1} [ κ^{−1}ψβ‖v_k − y_k‖² + κψ^{−1}β^{−1}τ Σ_{j=1}^τ ‖y_{k+1−j} − y_{k−j}‖² ]   (B.9)
The terms in bold in equation B.8 and equation B.9 are a result of the asynchronicity, and are identically 0 in its absence.
Proof. Our strategy is to separately analyze terms that appear in the traditional analysis of Nesterov (2012), and the terms that result from asynchronicity. We first prove equation B.8:
−〈∇f(ŷ_k), y_k〉 = −〈∇f(y_k), y_k〉 − 〈∇f(ŷ_k) − ∇f(y_k), y_k〉
  ≤ −f(y_k) − (1/2)σ‖y_k‖² + L‖ŷ_k − y_k‖‖y_k‖   (B.10)
Equation B.10 follows from strong convexity (equation B.2 with x = y_k and y = x∗), and the fact that ∇f is L-Lipschitz. The term due to asynchronicity becomes:
L‖ŷ_k − y_k‖‖y_k‖ ≤ (1/2)Lκ^{−1}ψ‖y_k‖² + (1/2)Lκψ^{−1}τ Σ_{j=1}^τ ‖y_{k+1−j} − y_{k−j}‖²
using Lemma 8 with χ = κψ^{−1} and A = ‖y_k‖. Combining this with equation B.10 completes the proof of equation B.8. We now prove equation B.9:
〈∇f(ŷ_k), x_k − y_k〉 = 〈∇f(y_k), x_k − y_k〉 + 〈∇f(ŷ_k) − ∇f(y_k), x_k − y_k〉
  ≤ f(x_k) − f(y_k) + L‖ŷ_k − y_k‖‖x_k − y_k‖
  ≤ f(x_k) − f(y_k) + (1/2)L [ κ^{−1}ψβα^{−1}(1−α)‖x_k − y_k‖² + κψ^{−1}β^{−1}α(1−α)^{−1}τ Σ_{j=1}^τ ‖y_{k+1−j} − y_{k−j}‖² ]
Here the last line follows from Lemma 8 with χ = κψ^{−1}β^{−1}α(1−α)^{−1} and A = ‖x_k − y_k‖. We can complete the proof using the following identity, which is easily obtained from equation 2.6:
y_k − x_k = α(1−α)^{−1}(v_k − y_k)
B.3 Function-value term
Much like Nesterov (2012), we need a f(xk) term in the Lyapunov function (see the middle of page 357). However we additionally need to consider asynchronicity when analyzing the growth of this term. Again terms due to asynchronicity are emboldened. Lemma 10. We have:
E_k f(x_{k+1}) ≤ f(y_k) − (1/2)h ( 2 − h ( 1 + (1/2)σ^{1/2}L^{−1/2}ψ ) ) S^{−1}‖∇f(ŷ_k)‖²_* + S^{−1}Lσ^{1/2}κψ^{−1}τ Σ_{j=1}^τ ‖y_{k+1−j} − y_{k−j}‖²
Proof. From the definition equation 2.7 of xk+1, we can see that xk+1 − yk is supported on block ik. Since each gradient block ∇if is Li Lipschitz with respect to changes to block i, we can use
equation B.1 to obtain:
f(x_{k+1}) ≤ f(y_k) + 〈∇f(y_k), x_{k+1} − y_k〉 + (1/2)L_{i_k}‖x_{k+1} − y_k‖²
(from equation 2.7) = f(y_k) − hL_{i_k}^{−1}〈∇_{i_k}f(y_k), ∇_{i_k}f(ŷ_k)〉 + (1/2)h²L_{i_k}^{−1}‖∇_{i_k}f(ŷ_k)‖²
  = f(y_k) − hL_{i_k}^{−1}〈∇_{i_k}f(y_k) − ∇_{i_k}f(ŷ_k), ∇_{i_k}f(ŷ_k)〉 − (1/2)h(2−h)L_{i_k}^{−1}‖∇_{i_k}f(ŷ_k)‖²
E_k f(x_{k+1}) ≤ f(y_k) − hS^{−1} Σ_{i=1}^n L_i^{−1/2}〈∇_if(y_k) − ∇_if(ŷ_k), ∇_if(ŷ_k)〉 − (1/2)h(2−h)S^{−1}‖∇f(ŷ_k)‖²_*   (B.11)
Here the last line followed from the definition equation B.3 of the norm ‖·‖_*. We now analyze the middle term:
− n∑ i=1 L −1/2 i 〈∇if(yk)−∇if(ŷk),∇if(ŷk)〉
= − 〈
n∑ i=1 L −1/4 i (∇if(yk)−∇if(ŷk)), n∑ i=1 L −1/4 i ∇if(ŷk)
〉
(Cauchy Schwarz) ≤ ∥∥∥∥∥ n∑ i=1 L −1/4 i (∇if(yk)−∇if(ŷk)) ∥∥∥∥∥ ∥∥∥∥∥ n∑ i=1 L −1/4 i ∇if(ŷk) ∥∥∥∥∥ = (
n∑ i=1 L −1/2 i ‖∇if(yk)−∇if(ŷk)‖ 2 )1/2( n∑ i=1 L −1/2 i ‖∇if(ŷk)‖ 2 )1/2 (L ≤ Li,∀i and equation B.3) ≤ L−1/4‖∇f(yk)−∇f(ŷk)‖‖∇f(ŷk)‖∗
(∇f is L-Lipschitz) ≤ L−1/4L‖yk − ŷk‖‖∇f(ŷk)‖∗ We then apply Lemma 8 to this with χ = 2h−1σ1/2L1/4κψ−1, A = ‖∇f(ŷk)‖∗ to yield:
− n∑ i=1 L −1/2 i 〈∇if(yk)−∇if(ŷk),∇if(ŷk)〉 ≤ h −1Lσ1/2κψ−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2 (B.12)
+ 14hσ 1/2L−1/2ψ‖∇f(ŷk)‖2∗
Finally to complete the proof, we combine equation B.11, with equation B.12.
B.4 Asynchronicity error
The previous inequalities produced difference terms of the form ‖yk+1−j − yk−j‖2. The following lemma shows how these errors can be incorporated into a Lyapunov function. Lemma 11. Let 0 < r < 1 and consider the asynchronicity error and corresponding coefficients:
A_k = Σ_{j=1}^∞ c_j‖y_{k+1−j} − y_{k−j}‖²,   c_i = Σ_{j=i}^∞ r^{i−j−1}s_j
This sum satisfies:
E_k[A_{k+1} − rA_k] = c_1E_k‖y_{k+1} − y_k‖² − Σ_{j=1}^∞ s_j‖y_{k+1−j} − y_{k−j}‖²
Remark 2. Interpretation. This result means that an asynchronicity error term A_k can negate a series of difference terms −Σ_{j=1}^∞ s_j‖y_{k+1−j} − y_{k−j}‖² at the cost of producing an additional error c_1E_k‖y_{k+1} − y_k‖², while maintaining a convergence rate of r. This essentially converts difference terms, which are hard to deal with, into a ‖y_{k+1} − y_k‖² term, which can be negated by other terms in the Lyapunov function. The proof is straightforward.
Proof.
E_k[A_{k+1} − rA_k] = E_k Σ_{j=0}^∞ c_{j+1}‖y_{k+1−j} − y_{k−j}‖² − r E_k Σ_{j=1}^∞ c_j‖y_{k+1−j} − y_{k−j}‖²
  = c_1E_k‖y_{k+1} − y_k‖² + E_k Σ_{j=1}^∞ (c_{j+1} − rc_j)‖y_{k+1−j} − y_{k−j}‖²
Noting the following completes the proof:
c_{i+1} − rc_i = Σ_{j=i+1}^∞ r^{i+1−j−1}s_j − r Σ_{j=i}^∞ r^{i−j−1}s_j = −s_i
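As a quick numerical sanity check of this mechanism (our illustration; the values of r, τ, and s below are arbitrary placeholders), one can verify the identity c_{i+1} − r·c_i = −s_i for a truncated weight sequence of the form used later in equation B.18:

```python
# placeholder values: rate r, truncation tau, and constant weight s (cf. eq. B.18)
r, tau, s = 0.9, 5, 2.0
N = 3 * tau                                          # finite horizon; s_j = 0 for j > tau
s_seq = {j: (s if j <= tau else 0.0) for j in range(1, N + 1)}

def c(i):
    # c_i = sum_{j >= i} r^{i-j-1} s_j  (only j <= tau contributes)
    return sum(r ** (i - j - 1) * s_seq[j] for j in range(i, N + 1))

for i in range(1, 2 * tau):
    assert abs((c(i + 1) - r * c(i)) - (-s_seq[i])) < 1e-9
print("verified: c_{i+1} - r * c_i = -s_i")
```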
Given that Ak allows us to negate difference terms, we now analyze the cost c1Ek‖yk+1 − yk‖2 of this negation. Lemma 12. We have:
Ek‖yk+1 − yk‖2 ≤ 2α2β2‖vk − yk‖2 + 2S−1L−1‖∇f(ŷk)‖2
Proof.
yk+1 − yk = (αvk+1 + (1− α)xk+1)− yk = α ( βvk + (1− β)yk − σ−1/2L−1/2ik ∇ikf(ŷk) ) + (1− α) ( yk − hL−1ik ∇ikf(ŷk) ) − yk (B.13)
= αβvk + α(1− β)yk − ασ−1/2L−1/2ik ∇ikf(ŷk)− αyk − (1− α)hL −1 ik ∇ikf(ŷk) = αβ(vk − yk)− ( ασ−1/2L −1/2 ik + h(1− α)L−1ik ) ∇ikf(ŷk)
‖yk+1 − yk‖2 ≤ 2α2β2‖vk − yk‖2 + 2 ( ασ−1/2L −1/2 ik + h(1− α)L−1ik )2 ‖∇ikf(ŷk)‖ 2 (B.14)
Here equation B.13 follows from equation 2.8, the definition of v_{k+1}. Equation B.14 follows from the inequality ‖x + y‖² ≤ 2‖x‖² + 2‖y‖². The rest is simple algebraic manipulation.
‖yk+1 − yk‖2 ≤ 2α2β2‖vk − yk‖2 + 2L−1ik ( ασ−1/2 + h(1− α)L−1/2ik )2 ‖∇ikf(ŷk)‖ 2
(L ≤ Li,∀i) ≤ 2α2β2‖vk − yk‖2 + 2L−1ik ( ασ−1/2 + h(1− α)L−1/2 )2 ‖∇ikf(ŷk)‖ 2
= 2α2β2‖vk − yk‖2 + 2L−1ik L −1 ( L1/2σ−1/2α+ h(1− α) )2 ‖∇ikf(ŷk)‖ 2
E‖yk+1 − yk‖2 ≤ 2α2β2‖vk − yk‖2 + 2S−1L−1 ( L1/2σ−1/2α+ h(1− α) )2 ‖∇f(ŷk)‖2∗
Finally, to complete the proof, we prove L1/2σ−1/2α+ h(1− α) ≤ 1. L1/2σ−1/2α+ h(1− α) = h+ α ( L1/2σ−1/2 − h ) (definitions of h and α: equation 2.3, and equation 2.5) = 1− 12σ 1/2L−1/2ψ + σ1/2S−1 ( L1/2σ−1/2
) ≤ 1− σ1/2L−1/2 ( 1 2ψ − σ
−1/2S−1L1 ) (B.15)
Rearranging the definition of ψ, we have:
S−1 = 192ψ 2L1L−3/2κ−1/2τ−2
(τ ≥1 and ψ ≤ 12 ) ≤ 1 182L 1L−3/2κ−1/2
Using this on equation B.15, we have: L1/2ασ−1/2 + h(1− α) ≤ 1− σ1/2L−1/2 (
1 2ψ − 1 182L
1L−3/2κ−1/2σ−1/2L1 )
= 1− σ1/2L−1/2 (
1 2ψ − 1 182 (L/L)
2 )
(ψ ≤ 12 ) = 1− σ 1/2L−1/2 ( 1 24 − 1 182 ) ≤ 1.
This completes the proof.
B.5 Master inequality
We are finally in a position to bring all the previous results together into a master inequality for the Lyapunov function ρk (defined in equation 2.11). After this lemma is proven, we will prove that the right-hand side is negative, which will imply that ρk linearly converges to 0 with rate β.
Lemma 13. Master inequality. We have:
E_k[ρ_{k+1} − βρ_k] ≤ ‖y_k‖² × ( 1 − β − σ^{−1/2}S^{−1}σ(1−ψ) )   (B.16)
  + ‖v_k − y_k‖² × β ( 2α²βc_1 + S^{−1}βL^{1/2}κ^{−1/2}ψ − (1−β) )
  + f(y_k) × ( c − 2σ^{−1/2}S^{−1}( βα^{−1}(1−α) + 1 ) )
  + f(x_k) × β ( 2σ^{−1/2}S^{−1}α^{−1}(1−α) − c )
  + Σ_{j=1}^τ ‖y_{k+1−j} − y_{k−j}‖² × ( S^{−1}Lκψ^{−1}τσ^{1/2}( 2σ^{−1} + c ) − s )
  + ‖∇f(ŷ_k)‖²_* × S^{−1}( σ^{−1} + 2L^{−1}c_1 − (1/2)ch( 2 − h( 1 + (1/2)σ^{1/2}L^{−1/2}ψ ) ) )
Proof.
Ek‖vk+1‖2 − β‖vk‖2
(B.7) = (1− β)‖yk‖2 − β(1− β)‖vk − yk‖2 + S−1σ−1‖∇f(ŷk)‖2∗ − 2σ−1/2S−1〈yk,∇f(ŷk)〉 − 2σ−1/2S−1βα−1(1− α)〈yk − xk,∇f(ŷk)〉 ≤ (1− β)‖yk‖2 − β(1− β)‖vk − yk‖2 + S−1σ−1‖∇f(ŷk)‖2∗ (B.17)
(B.8) + 2σ−1/2S−1 −f(yk)− 12σ(1− ψ)‖yk‖2 + 12Lκψ−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2 (B.9)− 2σ−1/2S−1βα−1(1− α)(f(xk)− f(yk))
+ σ−1/2S−1βL κ−1ψβ‖vk − yk‖2 + κψ−1β−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2 We now collect and organize the similar terms of this inequality.
≤+ ‖yk‖2 × ( 1− β − σ−1/2S−1σ(1− ψ) )
+ ‖vk − yk‖2 ×β ( σ−1/2S−1βLκ−1ψ − (1− β) ) − f(yk) ×2σ−1/2S−1 ( βα−1(1− α) + 1
) + f(xk) ×2σ−1/2S−1βα−1(1− α)
+ τ∑ j=1 ‖yk+1−j − yk−j‖2 ×2σ−1/2S−1Lκψ−1τ
+ ‖∇f(ŷk)‖2∗ ×σ −1S−1
Now finally, we add the function-value and asynchronicity terms to our analysis. We use Lemma 11 is with r = 1− σ1/2S−1, and
si = { s = 6S−1L1/2κ3/2ψ−1τ, 1 ≤ i ≤ τ 0, i > τ (B.18)
Notice that this choice of si will recover the coefficient formula given in equation 2.9. Hence we have:
Ek[cf(xk+1) +Ak+1 − β(cf(xk) +Ak)]
(Lemma 10) ≤ cf(yk)− 1 2ch
( 2− h ( 1 + 12σ 1/2L−1/2ψ )) S−1‖∇f(ŷk)‖2∗ − βcf(xk)
(B.19)
+ S−1Lσ1/2κψ−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2
(Lemmas 11 and 12) + c1 ( 2α2β2‖vk − yk‖2 + 2S−1L−1‖∇f(ŷk)‖2 )
(B.20)
− ∞∑ j=1 sj‖yk+1−j − yk−j‖2 +Ak(r − β)
Notice Ak(r − β) ≤ 0. Finally, combining equation B.17 and equation B.19 completes the proof.
In the next section, we will prove that every coefficient on the right hand side of equation B.16 is 0 or less, which will complete the proof of Theorem 1.
B.6 Proof of main theorem
Lemma 14. The coefficients of ‖yk‖2, f(yk), and ∑τ j=1‖yk+1−j − yk−j‖
2 in Lemma 13 are non-positive.
Proof. The coefficient 1− (1− ψ)σ1/2S−1−β of ‖yk‖2 is identically 0 via the definition equation 2.4 of β. The coefficient c − 2σ−1/2S−1 ( βα−1(1− α) + 1 ) of f(yk) is identically 0 via the definition equation 2.12 of c. First notice from the definition equation 2.12 of c:
c = 2σ−1/2S−1 ( βα−1(1− α) + 1 ) (definitions of α, β) = 2σ−1/2S−1 (( 1− σ1/2S−1(1− ψ) ) (1 + ψ)σ−1/2S + 1
) = 2σ−1/2S−1 ( (1 + ψ)σ−1/2S + ψ2
) = 2σ−1 ( (1 + ψ) + ψ2σ1/2S−1 ) (B.21)
c ≤ 4σ−1 (B.22)
Here the last line followed since ψ ≤ 12 and σ 1/2S−1 ≤ 1. We now analyze the coefficient of∑τ
j=1‖yk+1−j − yk−j‖ 2.
S−1Lκψ−1τσ1/2 ( 2σ−1 + c ) − s
(B.22) ≤ 6L1/2κ3/2ψ−1τ − s (definition equation B.18 of s) ≤ 0
Lemma 15. The coefficient β ( 2σ−1/2S−1α−1(1− α)− c ) of f(xk) in Lemma 13 is non-positive.
Proof.
2σ−1/2S−1α−1(1− α)− c (B.21) = 2σ−1/2S−1(1 + ψ)σ−1/2S − 2σ−1 ( (1 + ψ) + ψ2σ1/2S−1 )
= 2σ−1 ( (1 + ψ)− ( (1 + ψ) + ψ2σ1/2S−1 ))
= −2ψ2σ−1/2S−1 ≤ 0
Lemma 16. The coefficient S−1 ( σ−1 + 2L−1c1 − 12ch ( 2− h ( 1 + 12σ 1/2L−1/2ψ )))
of ‖∇f(ŷk)‖2∗ in Lemma 13 is non-positive.
Proof. We first need to bound c1.
(equation B.18 and equation 2.9) c1 = s τ∑ j=1 ( 1− σ1/2S−1 )−j equation B.18 ≤ 6S−1L1/2κ3/2ψ−1τ
τ∑ j=1 ( 1− σ1/2S−1 )−j ≤ 6S−1L1/2κ3/2ψ−1τ2 ( 1− σ1/2S−1
)−τ It can be easily verified that if x ≤ 12 and y ≥ 0, then (1− x)
−y ≤ exp(2xy). Using this fact with x = σ1/2S−1 and y = τ , we have:
≤ 6S−1L1/2κ3/2ψ−1τ2 exp ( τσ1/2S−1 ) (since ψ ≤ 3/7 and hence τσ1/2S−1 ≤ 17 ) ≤ S −1L1/2κ3/2ψ−1τ2 × 6 exp ( 1 7
) c1 ≤ 7S−1L1/2κ3/2ψ−1τ2 (B.23)
We now analyze the coefficient of ‖∇f(ŷk)‖2∗
σ−1 + 2L−1c1 − 1 2ch
( 2− h ( 1 + 12σ 1/2L−1/2ψ ))
(B.23 and 2.5) ≤ σ−1 + 14S−1L−1L1/2κ3/2ψ−1τ2 − 12ch ( 1 + 14σ 1L−1ψ2 ) ≤ σ−1 + 14S−1L−1L1/2κ3/2ψ−1τ2 − 12ch
(definition 2.2 of ψ) = σ−1 + 1481σ −1ψ − 12ch
(B.21, definition 2.5 of h) = σ−1 ( 1 + 1481ψ − ( (1 + ψ) + ψ2σ1/2S−1 )( 1− 12σ 1/2L−1/2ψ )) (σ1/2L−1/2 ≤ 0 and σ1/2S−1 ≤ 1) ≤ σ−1 ( 1 + 1481ψ − (1 + ψ) ( 1− 12ψ
)) = σ−1ψ ( 14 81 + 1 2ψ − 1 2
) (ψ ≤ 12 ) ≤ 0
Lemma 17. The coefficient β ( 2α2βc1 + S−1βL1/2κ−1/2ψ − (1− β) ) of ‖vk − yk‖2 in 13 is nonpositive.
Proof. 2α2βc1 + σ1/2S−1βψ − (1− ψ)σ1/2S−1
(B.23) ≤ 14α2βS−1L1/2κ3/2ψ−1τ2 + σ1/2S−1βψ − (1− ψ)σ1/2S−1
≤ 14σS−3L1/2κ3/2ψ−1τ2 + σ1/2S−1ψ − (1− ψ)σ1/2S−1 = σ1/2S−1 ( 14S−2Lκτ2ψ−1 + 2ψ − 1 ) Here the last inequality follows since β ≤ 1 and α ≤ σ1/2S−1. We now rearrange the definition of ψ to yield the identity:
S−2κ = 194L 2L−3τ−4ψ4
Using this, we have: 14S−2Lκτ2ψ−1 + 2ψ − 1
= 1494 L 2L−2ψ3τ−2 + 2ψ − 1
≤ 1494 ( 3 7 )3 1−2 + 67 − 1 ≤ 0
Here the last line followed since L ≤ L, ψ ≤ 37 , and τ ≥ 1. Hence the proof is complete.
Proof of Theorem 1. Using the master inequality 13 in combination with the previous Lemmas 14, 15, 16, and 17, we have:
E_k[ρ_{k+1}] ≤ βρ_k = ( 1 − (1−ψ)σ^{1/2}S^{−1} ) ρ_k
When we have:
( 1 − (1−ψ)σ^{1/2}S^{−1} )^k ≤ ε
then the Lyapunov function ρ_k has decreased below ερ_0 in expectation. Hence the complexity K(ε) satisfies:
K(ε) ln( 1 − (1−ψ)σ^{1/2}S^{−1} ) = ln(ε)
K(ε) = ( −1 / ln( 1 − (1−ψ)σ^{1/2}S^{−1} ) ) ln(1/ε)
Now it can be shown that for 0 < x ≤ 1/2, we have:
1/x − 1 ≤ −1/ln(1−x) ≤ 1/x − 1/2
  −1/ln(1−x) = 1/x + O(1)
Since n ≥ 2, we have σ^{1/2}S^{−1} ≤ 1/2. Hence:
K(ε) = (1/(1−ψ)) ( σ^{−1/2}S + O(1) ) ln(1/ε)
An expression for K_{NU_ACDM}(ε), the complexity of NU_ACDM, follows by similar reasoning:
K_{NU_ACDM}(ε) = ( σ^{−1/2}S + O(1) ) ln(1/ε)   (B.24)
Finally we have:
K(ε) = (1/(1−ψ)) ( ( σ^{−1/2}S + O(1) ) / ( σ^{−1/2}S + O(1) ) ) K_{NU_ACDM}(ε)
  = (1/(1−ψ)) (1 + o(1)) K_{NU_ACDM}(ε)
which completes the proof.
C Ordinary Differential Equation Analysis
C.1 Derivation of ODE for synchronous A2BCD
If we take expectations with respect to E_k, then synchronous (no delay) A2BCD becomes:
y_k = αv_k + (1−α)x_k
E_k x_{k+1} = y_k − n^{−1}κ^{−1}∇f(y_k)
E_k v_{k+1} = βv_k + (1−β)y_k − n^{−1}κ^{−1/2}∇f(y_k)
We find it convenient to define η = nκ^{1/2}. Inspired by this, we consider the following iteration:
y_k = αv_k + (1−α)x_k   (C.1)
x_{k+1} = y_k − s^{1/2}κ^{−1/2}η^{−1}∇f(y_k)   (C.2)
v_{k+1} = βv_k + (1−β)y_k − s^{1/2}η^{−1}∇f(y_k)   (C.3)
for coefficients:
α = ( 1 + s^{−1/2}η )^{−1}
β = 1 − s^{1/2}η^{−1}
s is a discretization scale parameter that will be sent to 0 to obtain an ODE analogue of synchronous A2BCD. We first use equation B.6 to eliminate vk from from equation C.3.
0 = −vk+1 + βvk + (1− β)yk − s1/2η−1∇f(yk) 0 = −α−1yk+1 + α−1(1− α)xk+1
+ β ( α−1yk − α−1(1− α)xk ) + (1− β)yk − s1/2η−1∇f(yk)
(times by α) 0 = −yk+1 + (1− α)xk+1 + β(yk − (1− α)xk) + α(1− β)yk − αs1/2η−1∇f(yk) = −yk+1 + yk(β + α(1− β)) + (1− α)xk+1 − xkβ(1− α)− αs1/2η−1∇f(yk)
We now eliminate xk using equation C.1:
0 = −yk+1 + yk(β + α(1− β)) + (1− α) ( yk − s1/2η−1κ−1/2∇f(yk) ) − ( yk−1 − s1/2η−1κ−1/2∇f(yk−1) ) β(1− α)
− αs1/2η−1∇f(yk) = −yk+1 + yk(β + α(1− β) + (1− α))− β(1− α)yk−1 + s1/2η−1∇f(yk−1)(β − 1)(1− α) − αs1/2η−1∇f(yk) = (yk − yk+1) + β(1− α)(yk − yk−1) + s1/2η−1(∇f(yk−1)(β − 1)(1− α)− α∇f(yk))
Now to derive an ODE, we let yk = Y ( ks1/2 ) . Then ∇f(yk−1) = ∇f(yk) + O ( s1/2 ) . Hence the above becomes:
0 = (yk − yk+1) + β(1− α)(yk − yk−1) + s1/2η−1((β − 1)(1− α)− α)∇f(yk) +O ( s3/2 ) 0 = ( −s1/2Ẏ − 12sŸ ) + β(1− α) ( s1/2Ẏ − 12sŸ ) (C.4)
+ s1/2η−1((β − 1)(1− α)− α)∇f(yk) +O ( s3/2 )
We now look at some of the terms in this equation to find the highest-order dependence on s.
β(1− α) = ( 1− s1/2η−1 )(
1− 1 1 + s−1/2η ) = ( 1− s1/2η−1 ) s−1/2η
1 + s−1/2η
= s −1/2η − 1 s−1/2η + 1
= 1− s 1/2η−1
1 + s1/2η−1
= 1− 2s1/2η−1 +O(s)
We also have:
(β − 1)(1− α)− α = β(1− α)− 1 = −2s1/2η−1 +O(s)
Hence using these facts on equation C.4, we have: 0 = ( −s1/2Ẏ − 12sŸ ) + ( 1− 2s1/2η−1 +O(s) )( s1/2Ẏ − 12sŸ )
+ s1/2η−1 ( −2s1/2η−1 +O(s) ) ∇f(yk) +O ( s3/2 ) 0 = −s1/2Ẏ − 12sŸ + ( s1/2Ẏ − 12sŸ − 2s 1η−1Ẏ +O ( s3/2
)) ( −2s1η−2 +O ( s3/2 )) ∇f(yk) +O ( s3/2
) 0 = −sŸ − 2sη−1Ẏ − 2sη−2∇f(yk) +O ( s3/2
) 0 = −Ÿ − 2η−1Ẏ − 2η−2∇f(yk) +O ( s1/2
) Taking the limit as s → 0, we obtain the ODE Ÿ + 2η^{−1}Ẏ + 2η^{−2}∇f(Y) = 0, i.e., equation 3.1.

1. What is the focus of the paper regarding asynchronous parallelization and stochastic coordinate descent?
2. What are the strengths of the proposed approach, particularly in terms of its convergence rate?
3. Do you have any concerns regarding the correctness of the results, especially when compared to prior works like Nesterov's accelerated methods?
4. How does the reviewer assess the clarity and applicability of the paper's content, including minor issues with definitions and references?

Review
This paper studies the combination of asynchronous parallelization and the accelerated stochastic coordinate descent method. The proved convergence rate is claimed to be consistent with the non-parallel counterpart. Linear speedup is achievable when the maximal staleness is bounded by roughly n^{1/2}, which sounds like a very interesting result to me. However, I have a few questions about the correctness of the results:
- Theorem 1 essentially shows that every single step is guaranteed to improve on the last step in the expectation sense. However, this contradicts my experience studying Nesterov's accelerated methods. To my knowledge, Nesterov's accelerated methods generally do not guarantee improvement over each single step, because accelerated methods essentially construct a sequence z_{t+1} = A z_t where A is a nonsymmetric matrix with spectral norm greater than 1.
- The actually implemented algorithm uses the sparse update rather than the analyzed version, since the analyzed version is not efficient or suitable for parallelization. However, the sparse updating rule is equivalent to the original version only in the non-asynchronous case. Therefore, the analysis does not apply to the actual implementation.
Minor comments:
- pp2 line 8, K(epsilon) is not defined
- Eq. (1.4), the index is missing.
- missing reference: An Asynchronous Parallel Stochastic Coordinate Descent Algorithm, ICML 2014.
ICLR | Title
A2BCD: Asynchronous Acceleration with Optimal Complexity
Abstract
In this paper, we propose the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD). We prove A2BCD converges linearly to a solution of the convex minimization problem at the same rate as NU_ACDM, so long as the maximum delay is not too large. This is the first asynchronous Nesterov-accelerated algorithm that attains any provable speedup. Moreover, we then prove that these algorithms both have optimal complexity. Asynchronous algorithms complete much faster iterations, and A2BCD has optimal complexity. Hence we observe in experiments that A2BCD is the top-performing coordinate descent algorithm, converging up to 4 − 5× faster than NU_ACDM on some data sets in terms of wall-clock time. To motivate our theory and proof techniques, we also derive and analyze a continuous-time analogue of our algorithm and prove it converges at the same rate.
1 Introduction
In this paper, we propose and prove the convergence of the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD), the first asynchronous Nesterov-accelerated algorithm that achieves optimal complexity. No previous attempt has been able to prove a speedup for asynchronous Nesterov acceleration. We aim to find the minimizer x∗ of the unconstrained minimization problem:
min_{x∈R^d} f(x) = f( x_{(1)}, . . . , x_{(n)} )   (1.1)
where f is σ-strongly convex for σ > 0 with L-Lipschitz gradient ∇f = (∇_1f, . . . , ∇_nf). x ∈ R^d is composed of coordinate blocks x_{(1)}, . . . , x_{(n)}. The coordinate blocks ∇_if of the gradient are assumed L_i-Lipschitz with respect to the ith block. That is, ∀x, h ∈ R^d:
‖∇_if(x + P_ih) − ∇_if(x)‖ ≤ L_i‖h‖   (1.2)
where P_i is the projection onto the ith block of R^d. Let L̄ ≜ (1/n) Σ_{i=1}^n L_i be the average block Lipschitz constant. These conditions on f are assumed throughout this whole paper. Our algorithm can also be applied to non-strongly convex objectives (σ = 0) or non-smooth objectives using the black-box reduction techniques proposed in Allen-Zhu & Hazan (2016). Hence we consider only
the coordinate smooth, strongly-convex case. Our algorithm can also be applied to the convex regularized ERM problem via the standard dual transformation (see for instance Lin et al. (2014)):
f(x) = (1/n) Σ_{i=1}^n f_i(〈a_i, x〉) + (λ/2)‖x‖²   (1.3)
Hence A2BCD can be used as an asynchronous Nesterov-accelerated finite-sum algorithm. Coordinate descent methods, in which a chosen coordinate block i_k is updated at every iteration, are a popular way to solve equation 1.1. Randomized block coordinate descent (RBCD, Nesterov (2012)) updates a uniformly randomly chosen coordinate block i_k with a gradient-descent-like step: x_{k+1} = x_k − (1/L_{i_k})∇_{i_k}f(x_k). The complexity K(ε) of an algorithm is defined as the number of iterations required to decrease the error E(f(x_k) − f(x∗)) to less than ε(f(x_0) − f(x∗)). Randomized coordinate descent has a complexity of K(ε) = O(n(L̄/σ) ln(1/ε)). Using a series of averaging and extrapolation steps, accelerated RBCD Nesterov (2012) improves RBCD's iteration complexity K(ε) to O(n√(L̄/σ) ln(1/ε)), which leads to much faster convergence
when L̄σ is large. This rate is optimal when all Li are equal Lan & Zhou (2015). Finally, using a special probability distribution for the random block index ik, the non-uniform accelerated coordinate descent method Allen-Zhu et al. (2015) (NU_ACDM) can further decrease the complexity to O( ∑n i=1 √ Li/σ ln(1/ )), which can be up to √ n times faster than accelerated RBCD, since some Li can be significantly smaller than L. NU_ACDM is the current state-of-the-art coordinate descent algorithm for solving equation 1.1. Our A2BCD algorithm generalizes NU_ACDM to the asynchronous-parallel case. We solve equation 1.1 with a collection of p computing nodes that continually read a shared-access solution vector y into local memory then compute a block gradient ∇if , which is used to update shared solution vectors (x, y, v). Proving convergence in the asynchronous case requires extensive new technical machinery. A traditional synchronous-parallel implementation is organized into rounds of computation: Every computing node must complete an update in order for the next iteration to begin. However, this synchronization process can be extremely costly, since the lateness of a single node can halt the entire system. This becomes increasingly problematic with scale, as differences in node computing speeds, load balancing, random network delays, and bandwidth constraints mean that a synchronous-parallel solver may spend more time waiting than computing a solution. Computing nodes in an asynchronous solver do not wait for others to complete and share their updates before starting the next iteration. They simply continue to update the solution vectors with the most recent information available, without any central coordination. This eliminates costly idle time, meaning that asynchronous algorithms can be much faster than traditional ones, since they have much faster iterations. For instance, random network delays cause asynchronous algorithms to complete iterations Ω(ln(p)) time faster than synchronous algorithms at scale. This and other factors that influence the speed of iterations are discussed in Hannah & Yin (2017a). However, since many iterations may occur between the time that a node reads the solution vector, and the time that its computed update is applied, effectively the solution vector is being updated with outdated information. At iteration k, the block gradient ∇ikf is computed at a delayed iterate ŷk defined as1:
ŷ_k = ( (y_{k−j(k,1)})_{(1)}, . . . , (y_{k−j(k,n)})_{(n)} )   (1.4)
1Every coordinate can be outdated by a different amount without significantly changing the proofs.
for delay parameters j(k, 1), . . . , j(k, n) ∈ N. Here j(k, i) denotes how many iterations out of date coordinate block i is at iteration k. Different blocks may be out of date by different amounts, which is known as an inconsistent read. We assume2 that j(k, i) ≤ τ for some constant τ <∞. Asynchronous algorithms were proposed in Chazan & Miranker (1969) to solve linear systems. General convergence results and theory were developed later in Bertsekas (1983); Bertsekas & Tsitsiklis (1997); Tseng et al. (1990); Luo & Tseng (1992; 1993); Tseng (1991) for partially and totally asynchronous systems, with essentially-cyclic block sequence ik. More recently, there has been renewed interest in asynchronous algorithms with random block coordinate updates. Linear and sublinear convergence results were proven for asynchronous RBCD Liu & Wright (2015); Liu et al. (2014); Avron et al. (2014), and similar was proven for asynchronous SGD Recht et al. (2011), and variance reduction algorithms Reddi et al. (2015); Leblond et al. (2017); Mania et al. (2015); Huo & Huang (2016), and primal-dual algorithms Combettes & Eckstein (2018). There is also a rich body of work on asynchronous SGD. In the distributed setting, Zhou et al. (2018) showed global convergence for stochastic variationally coherent problems even when the delays grow at a polynomial rate. In Lian et al. (2018), an asynchronous decentralized SGD was proposed with the same optimal sublinear convergence rate as SGD and linear speedup with respect to the number of workers. In Liu et al. (2018), authors obtained an asymptotic rate of convergence for asynchronous momentum SGD on streaming PCA, which provides insight into the tradeoff between asynchrony and momentum. In Dutta et al. (2018), authors prove convergence results for asynchronous SGD that highlight the tradeoff between faster iterations and iteration complexity. Further related work is discussed in Section 4.
1.1 Summary of Contributions
In this paper, we prove that A2BCD attains NU_ACDM’s state-of-the-art iteration complexity to highest order for solving equation 1.1, so long as delays are not too large (see Section 2). The proof is very different from that of Allen-Zhu et al. (2015), and involves significant technical innovations and complexity related to the analysis of asynchronicity. We also prove that A2BCD (and hence NU_ACDM) has optimal complexity to within a constant factor over a fairly general class of randomized block coordinate descent algorithms (see Section 2.1). This extends results in Lan & Zhou (2015) to asynchronous algorithms with Li not all equal. Since asynchronous algorithms complete faster iterations, and A2BCD has optimal complexity, we expect A2BCD to be faster than all existing coordinate descent algorithms. We confirm with numerical experiments that A2BCD is the current fastest coordinate descent algorithm (see Section 5). We are only aware of one previous and one contemporaneous attempt at proving convergence results for asynchronous Nesterov-accelerated algorithms. However, the first is not accelerated and relies on extreme assumptions, and the second obtains no speedup. Therefore, we claim that our results are the first-ever analysis of asynchronous Nesterov-accelerated algorithms that attains a speedup. Moreover, our speedup is optimal for delays not too large3. The work of Meng et al. claims to obtain square-root speedup for an asynchronous accelerated SVRG. In the case where all component functions have the same Lipschitz constant L, the complexity they obtain reduces to (n+ κ) ln(1/ ) for κ = O ( τn2 ) (Corollary 4.4). Hence authors do not even obtain accelerated rates. Their convergence condition is τ < 14∆1/8 for sparsity parameter ∆. Since the dimension d satisfies d ≥ 1∆ , they require d ≥ 2
^{16}τ^8. So τ = 20 requires dimension d > 10^{15}.
2 This condition can be relaxed, however, by techniques in Hannah & Yin (2017b); Sun et al. (2017); Peng et al. (2016c); Hannah & Yin (2017a).
3 Speedup is defined precisely in Section 2.
In a contemporaneous preprint, authors in Fang et al. (2018) skillfully devised accelerated schemes for asynchronous coordinate descent and SVRG using momentum compensation techniques. Although their complexity results have the improved √ κ dependence on the condition number, they do not prove any speedup. Their complexity is τ times larger than the serial complexity. Since τ is necessarily greater than p, their results imply that adding more computing nodes will increase running time. The authors claim that they can extend their results to linear speedup for asynchronous, accelerated SVRG under sparsity assumptions. And while we think this is quite likely, they have not yet provided proof. We also derive a second-order ordinary differential equation (ODE), which is the continuous-time limit of A2BCD (see Section 3). This extends the ODE found in Su et al. (2014) to an asynchronous accelerated algorithm minimizing a strongly convex function. We prove this ODE linearly converges to a solution with the same rate as A2BCD’s, without needing to resort to the restarting techniques. The ODE analysis motivates and clarifies the our proof strategy of the main result.
2 Main results
We should consider functions f where it is efficient to calculate blocks of the gradient, so that coordinate-wise parallelization is efficient. That is, the function should be “coordinate friendly” Peng et al. (2016b). This is a very wide class that includes regularized linear regression, logistic regression, etc. The L2-regularized empirical risk minimization problem is not coordinate friendly in general, however the equivalent dual problem is, and hence can be solved efficiently by A2BCD (see Lin et al. (2014), and Section 5). To calculate the k + 1’th iteration of the algorithm from iteration k, we use only one block of the gradient ∇ikf . We assume that the delays j(k, i) are independent of the block sequence ik, but otherwise arbitrary (This is a standard assumption found in the vast majority of papers, but can be relaxed Sun et al. (2017); Leblond et al. (2017); Cannelli et al. (2017)). Definition 1. Asynchronous Accelerated Randomized Block Coordinate Descent (A2BCD). Let f be σ-strongly convex, and let its gradient ∇f be L-Lipschitz with block coordinate Lipschitz parameters Li as in equation 1.2. We define the condition number κ = L/σ, and let L = mini Li. Using these parameters, we sample ik in an independent and identically distributed (IID) fashion according to
P[i_k = j] = L_j^{1/2}/S, for j ∈ {1, . . . , n}, where S = Σ_{i=1}^n L_i^{1/2}.   (2.1)
Let τ be the maximum asynchronous delay. We define the dimensionless asynchronicity parameter ψ, which is proportional to τ , and quantifies how strongly asynchronicity will affect convergence:
ψ = 9 ( S−1/2L−1/2L3/4κ1/4 ) × τ (2.2)
We use the above system parameters and ψ to define the coefficients α, β, and h via eqs. (2.3) to (2.5). Hence the A2BCD algorithm is defined via the iterations in eqs. (2.6) to (2.8).
α ≜ ( 1 + (1+ψ)σ^{−1/2}S )^{−1}   (2.3)
β ≜ 1 − (1−ψ)σ^{1/2}S^{−1}   (2.4)
h ≜ 1 − (1/2)σ^{1/2}L^{−1/2}ψ.   (2.5)
y_k = αv_k + (1−α)x_k,   (2.6)
x_{k+1} = y_k − hL_{i_k}^{−1}∇_{i_k}f(ŷ_k),   (2.7)
v_{k+1} = βv_k + (1−β)y_k − σ^{−1/2}L_{i_k}^{−1/2}∇_{i_k}f(ŷ_k).   (2.8)
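For concreteness, here is a minimal serial sketch (ours) of one iteration of equations 2.6–2.8 with zero delay, so that ŷ_k = y_k; in the asynchronous algorithm the block gradient would instead be evaluated at the delayed iterate ŷ_k read from shared memory.

```python
import numpy as np

def a2bcd_serial_step(x, v, grad_block, block, alpha, beta, h, L_i, sigma):
    """One A2BCD step (eqs. 2.6-2.8) with zero delay, so y_hat = y.

    grad_block(y, block) should return the block gradient of f at y,
    restricted to the coordinates in `block`.
    """
    y = alpha * v + (1.0 - alpha) * x            # eq. 2.6
    g = grad_block(y, block)                     # block gradient at y_hat = y
    x_next = y.copy()
    x_next[block] -= h / L_i * g                 # eq. 2.7 (only block i_k changes)
    v_next = beta * v + (1.0 - beta) * y         # eq. 2.8
    v_next[block] -= g / np.sqrt(sigma * L_i)
    return x_next, v_next
```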
See Section A for a discussion of why it is practical and natural to have the gradient ∇ikf(ŷk) to be outdated, while the actual variables xk, yk, vk can be efficiently kept up to date. Essentially it is
because most of the computation lies in computing ∇ikf(ŷk). After this is computed, xk, yk, vk can be updated more-or-less atomically with minimal overhead, meaning that they will always be up to date. However our main results still hold for more general asynchronicity. A natural quantity to consider in asynchronous convergence analysis is the asynchronicity error, a powerful tool for analyzing asynchronous algorithms used in several recent works Peng et al. (2016a); Hannah & Yin (2017b); Sun et al. (2017); Hannah & Yin (2017a). We adapt it and use a weighted sum of the history of the algorithm with decreasing weight as you go further back in time. Definition 2. Asynchronicity error. Using the above parameters, we define:
A_k = Σ_{j=1}^τ c_j‖y_{k+1−j} − y_{k−j}‖²   (2.9), for c_i = (6/S) L^{1/2}κ^{3/2}τψ^{−1} Σ_{j=i}^τ ( 1 − σ^{1/2}S^{−1} )^{i−j−1}.   (2.10)
Here we define yk = y0 for all k < 0. The determination of the coefficients ci is in general a very involved process of trial and error, intuition, and balancing competing requirements. The algorithm doesn’t depend on the coefficients, however; they are only an analytical tool. We define Ek[X] as the expectation of X conditioned on (x0, . . . , xk), (y0, . . . , yk), (v0, . . . , vk), and (i0, . . . , ik−1). To simplify notation4, we assume that the minimizer x∗ = 0, and that f(x∗) = 0 with no loss in generality. We define the Lyapunov function:
ρ_k = ‖v_k‖² + A_k + c·f(x_k)   (2.11), for c = 2σ^{−1/2}S^{−1}( βα^{−1}(1−α) + 1 ).   (2.12)
We now present this paper's first main contribution.
Theorem 1. Let f be σ-strongly convex with a gradient ∇f that is L-Lipschitz with block Lipschitz constants {L_i}_{i=1}^n. Let ψ defined in equation 2.2 satisfy ψ ≤ 3/7 (i.e., τ ≤ (1/21)S^{1/2}L^{1/2}L^{−3/4}κ^{−1/4}). Then for A2BCD we have:
E_k[ρ_{k+1}] ≤ ( 1 − (1−ψ)σ^{1/2}S^{−1} ) ρ_k.
To obtain E[ρ_k] ≤ ερ_0, it takes K_{A2BCD}(ε) iterations, where:
K_{A2BCD}(ε) = ( σ^{−1/2}S + O(1) ) ln(1/ε) / (1 − ψ),   (2.13)
where O(·) is asymptotic with respect to σ^{−1/2}S → ∞, and uniformly bounded.
This result is proven in Section B. A stronger result for Li ≡ L can be proven, but this adds to the complexity of the proof; see Section E for a discussion. In practice, asynchronous algorithms are far more resilient to delays than the theory predicts. τ can be much larger without negatively affecting the convergence rate and complexity. This is perhaps because we are limited to a worst-case analysis, which is not representative of the average-case performance. Allen-Zhu et al. (2015) (Theorem 5.1) shows a linear convergence rate of 1 − 2/( 1 + 2σ^{−1/2}S ) for NU_ACDM, which leads to the corresponding iteration complexity of K_{NU_ACDM}(ε) = ( σ^{−1/2}S + O(1) ) ln(1/ε). Hence, we have:
K_{A2BCD}(ε) = (1/(1−ψ)) (1 + o(1)) K_{NU_ACDM}(ε)
4We can assume x∗ = 0 with no loss in generality since we may translate the coordinate system so that x∗ is at the origin. We can assume f(x∗) = 0 with no loss in generality, since we can replace f(x) with f(x)−f(x∗). Without this assumption, the Lyapunov function simply becomes: ‖vk − x∗‖2 +Ak + c(f(xk)− f(x∗)).
When 0 ≤ ψ ≪ 1, or equivalently, when τ ≪ S^{1/2}L^{1/2}L^{−3/4}κ^{−1/4}, the complexity of A2BCD asymptotically matches that of NU_ACDM. Hence A2BCD combines state-of-the-art complexity with the faster iterations and superior scaling that asynchronous iterations allow. We now present some special cases of the conditions on the maximum delay τ required for good complexity. Corollary 3. Let the conditions of Theorem 1 hold. If all coordinate-wise Lipschitz constants Li are equal (i.e. Li = L1, ∀i), then we have K_{A2BCD}(ε) ∼ K_{NU_ACDM}(ε) when τ ≪ n^{1/2}κ^{−1/4}(L1/L)^{3/4}. If we further assume that all coordinate-wise Lipschitz constants Li equal L, then K_{A2BCD}(ε) ∼ K_{NU_ACDM}(ε) = K_{ACDM}(ε) when τ ≪ n^{1/2}κ^{−1/4}. Remark 1. Reduction to synchronous case. Notice that when τ = 0, we have ψ = 0, ci ≡ 0 and hence Ak ≡ 0. Thus A2BCD becomes equivalent to NU_ACDM, the Lyapunov function5 ρk becomes equivalent to one found in Allen-Zhu et al. (2015)(pg. 9), and Theorem 1 yields the same complexity.
The maximum delay τ will be a function τ(p) of p, the number of computing nodes. Clearly τ ≥ p, and experimentally it has been observed that τ = O(p) Leblond et al. (2017). Let the gradient complexity K(ε, τ) be the number of gradients required for an asynchronous algorithm with maximum delay τ to attain suboptimality ε. τ(1) = 0, since with only 1 computing node there can be no delay; this corresponds to the serial complexity. We say that an asynchronous algorithm attains a complexity speedup if pK(ε, τ(1))/K(ε, τ(p)) is increasing in p. We say it attains linear complexity speedup if pK(ε, τ(1))/K(ε, τ(p)) = Ω(p). In Theorem 1, we obtain a linear complexity speedup (for p not too large), whereas no other prior attempt can attain even a complexity speedup with Nesterov acceleration. In the ideal scenario where the rate at which gradients are calculated increases linearly with p, algorithms that have linear complexity speedup will have a linear decrease in wall-clock time. However in practice, when the number of computing nodes is sufficiently large, the rate at which gradients are calculated will no longer be linear. This is due to many parallel overhead factors including too many nodes sharing the same memory read/write bandwidth, and network bandwidth. However we note that even with these issues, we obtain much faster convergence than the synchronous counterpart experimentally.
2.1 Optimality
NU_ACDM and hence A2BCD are in fact optimal in some sense. That is, among a fairly wide class of coordinate descent algorithms A, they have the best-possible worst-case complexity to highest order. We extend the work in Lan & Zhou (2015) to encompass algorithms are asynchronous and have unequal Li. For a subset S ∈ Rd, we let IC(S) (inconsistent read) denote the set of vectors v whose components are a combination of components of vectors in the set S. That is, v = (v1,1, v2,2, . . . , vd,d) for some vectors v1, v2, . . . , vd ∈ S. Here vi,j denotes the jth component of vector vi. Definition 4. Asynchronous Randomized Incremental Algorithms. Consider the unconstrained minimization problem equation 1.1 for function f satisfying the conditions stated in Section 1. We define the class A as algorithms G on this problem such that: 1. For each parameter set (σ, L1, . . . , Ln, n), G has an associated IID random variable ik with some fixed distribution P[ik] = pi for ∑n i=1 pi = 1.
2. The iterates of A satisfy: xk+1 ∈ span{IC(Xk),∇i0f(IC(X0)),∇i1f(IC(X1)), . . . ,∇ikf(IC(Xk))}
This is a rather general class: xk+1 can be constructed from any inconsistent reading of past iterates IC(Xk), and any past gradient of an inconsistent read ∇ijf(IC(Xj)).
5Their Lyapunov function is in fact a generalization of the one found in Nesterov (2012).
Theorem 2. For any algorithm G ∈ A that solves eq. (1.1), and parameter set (σ, L1, . . . , Ln, n), there is a dimension d, a corresponding function f on Rd, and a starting point x0, such that
E‖x_k − x∗‖²/‖x_0 − x∗‖² ≥ (1/2) ( 1 − 4/( Σ_{j=1}^n √(L_j/σ) + 2n ) )^k
Hence A has a complexity lower bound: K(ε) ≥ (1/4) (1 + o(1)) ( Σ_{j=1}^n √(L_j/σ) + 2n ) ln(1/(2ε))
Our proof in Section D follows very similar lines to Lan & Zhou (2015); Nesterov (2013).
3 ODE Analysis
In this section we present and analyze an ODE which is the continuous-time limit of A2BCD. This ODE is a strongly convex and asynchronous version of the ODE found in Su et al. (2014). For simplicity, assume Li = L, ∀i. We rescale f (i.e., we replace f(x) with (1/σ)f) so that σ = 1, and hence κ = L/σ = L. Taking the continuous-time limit of synchronous A2BCD (i.e. accelerated RBCD), we can derive the following ODE6 (see Section C.1):
Ÿ + 2n−1κ−1/2Ẏ + 2n−2κ−1∇f(Y ) = 0 (3.1)
We define the parameter η ≜ nκ^{1/2}, and the energy E(t) = e^{n^{−1}κ^{−1/2}t}( f(Y) + (1/4)‖Y + ηẎ‖² ). This is very similar to the Lyapunov function discussed in equation 2.11, with (1/4)‖Y(t) + ηẎ(t)‖² fulfilling the role of ‖v_k‖², and A_k = 0 (since there is no delay yet). Much like the traditional analysis in the proof of Theorem 1, we can derive a linear convergence result with a similar rate. See Section C.2.
Lemma 5. If Y satisfies equation 3.1, the energy satisfies E′(t) ≤ 0, E(t) ≤ E(0), and hence:
f(Y(t)) + (1/4)‖Y(t) + nκ^{1/2}Ẏ(t)‖² ≤ ( f(Y(0)) + (1/4)‖Y(0) + ηẎ(0)‖² ) e^{−n^{−1}κ^{−1/2}t}
We may also analyze an asynchronous version of equation 3.1 to motivate the proof of our main theorem. Here Ŷ (t) is a delayed version of Y (t) with the delay bounded by τ .
Ÿ + 2n−1κ−1/2Ẏ + 2n−2κ−1∇f ( Ŷ ) = 0, (3.2)
Unfortunately, this energy satisfies (see equations C.4 and C.7):
e^{−η^{−1}t}E′(t) ≤ −(1/8)η‖Ẏ‖² + 3κ²η^{−1}τD(t), for D(t) ≜ ∫_{t−τ}^{t} ‖Ẏ(s)‖² ds.
Hence this energy E(t) may not be decreasing in general. But we may add a continuous-time asynchronicity error (see Sun et al. (2017)), much like in Definition 2, to create a decreasing energy. Let c_0 ≥ 0 and r > 0 be arbitrary constants that will be set later. Define:
A(t) = ∫_{t−τ}^{t} c(t−s)‖Ẏ(s)‖² ds, for c(t) ≜ c_0 ( e^{−rt} + (e^{−rτ}/(1 − e^{−rτ}))(e^{−rt} − 1) ).
Lemma 6. When rτ ≤ 1/2, the asynchronicity error A(t) satisfies:
e^{−rt} (d/dt)( e^{rt}A(t) ) ≤ c_0‖Ẏ(t)‖² − (1/2)τ^{−1}c_0 D(t).
6 For compactness, we have omitted the (t) from time-varying functions Y(t), Ẏ(t), ∇Y(t), etc.
See Section C.3 for the proof. Adding this error to the Lyapunov function serves a similar purpose in the continuous-time case as in the proof of Theorem 1 (see Lemma 11). It allows us to negate (1/2)τ^{−1}c_0 units of D(t) at the cost of creating c_0 units of ‖Ẏ(t)‖². This restores monotonicity.
Theorem 3. Let c_0 = 6κ²η^{−1}τ², and r = η^{−1}. If τ ≤ (1/√48)nκ^{−1/2}, then we have:
e^{−η^{−1}t} (d/dt)( E(t) + e^{η^{−1}t}A(t) ) ≤ 0.   (3.3)
Hence f(Y(t)) converges linearly to f(x∗) with rate O( exp( −t/(nκ^{1/2}) ) ). Notice how this convergence condition is similar to Corollary 3, but a little looser. The convergence condition in Theorem 1 can actually be improved to approximately match this (see Section E).
Proof.
e^{−η^{−1}t} (d/dt)( E(t) + e^{η^{−1}t}A(t) ) ≤ ( c_0 − (1/8)η )‖Ẏ‖² + ( 3κ²η^{−1}τ − (1/2)τ^{−1}c_0 ) D(t)
  = 6η^{−1}κ²( τ² − (1/48)n²κ^{−1} )‖Ẏ‖² ≤ 0
The preceding should hopefully elucidate the logic and general strategy of the proof of Theorem 1.
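To illustrate Lemma 5 numerically, the sketch below (ours; the quadratic test function, seed, and step size are arbitrary illustrative choices) integrates the delay-free ODE of equation 3.1 and monitors the energy E(t), which the lemma asserts is non-increasing; a simple integrator reproduces this up to discretization error.

```python
import numpy as np

# quadratic test problem f(x) = 0.5 * x^T diag(d) x, rescaled so sigma = 1, L = kappa
np.random.seed(0)
n, kappa = 10, 100.0
d = np.linspace(1.0, kappa, n)
f = lambda x: 0.5 * np.dot(d * x, x)
grad = lambda x: d * x

eta = n * np.sqrt(kappa)          # eta = n * kappa^{1/2}
Y = np.random.randn(n)            # Y(0)
V = np.zeros(n)                   # V(t) = dY/dt, with V(0) = 0

def energy(t, Y, V):
    # E(t) = exp(t / eta) * (f(Y) + (1/4) * ||Y + eta * Y'||^2), as in Lemma 5
    return np.exp(t / eta) * (f(Y) + 0.25 * np.linalg.norm(Y + eta * V) ** 2)

dt, T = 1e-2, 300.0
energies = [energy(0.0, Y, V)]
for k in range(int(T / dt)):
    # eq. 3.1: Y'' = -(2/eta) Y' - (2/eta^2) grad f(Y); semi-implicit Euler step
    V += dt * (-(2.0 / eta) * V - (2.0 / eta ** 2) * grad(Y))
    Y += dt * V
    energies.append(energy((k + 1) * dt, Y, V))
energies = np.array(energies)
print("f(Y(T)) =", f(Y))
print("max energy increase over a step (should be ~0):", np.diff(energies).max())
```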
4 Related work
We now discuss related work that was not addressed in Section 1. Nesterov acceleration is a method for improving an algorithm’s iteration complexity’s dependence the condition number κ. Nesterov-accelerated methods have been proposed and discovered in many settings Nesterov (1983); Tseng (2008); Nesterov (2012); Lin et al. (2014); Lu & Xiao (2014); Shalev-Shwartz & Zhang (2016); Allen-Zhu (2017), including for coordinate descent algorithms (algorithms that use 1 gradient block ∇if or minimize with respect to 1 coordinate block per iteration), and incremental algorithms (algorithms for finite sum problems 1n ∑n i=1 fi(x) that use 1 function gradient ∇fi(x) per iteration). Such algorithms can often be augmented to solve composite minimization problems (minimization for objective of the form f(x) + g(x), especially for nonsomooth g), or include constraints. In Peng et al. (2016a), authors proposed and analyzed an asynchronous fixed-point algorithm called ARock, that takes proximal algorithms, forward-backward, ADMM, etc. as special cases. Work has also been done on asynchronous algorithms for finite sums in the operator setting Davis (2016); Johnstone & Eckstein (2018). In Hannah & Yin (2017b); Sun et al. (2017); Peng et al. (2016c); Cannelli et al. (2017) showed that many of the assumptions used in prior work (such as bounded delay τ <∞) were unrealistic and unnecessary in general. In Hannah & Yin (2017a) the authors showed that asynchronous iterations will complete far more iterations per second, and that a wide class of asynchronous algorithms, including asynchronous RBCD, have the same iteration complexity as their synchronous counterparts. Hence certain asynchronous algorithms can be expected to significantly outperform traditional ones. In Xiao et al. (2017) authors propose a novel asynchronous catalyst-accelerated Lin et al. (2015) primal-dual algorithmic framework to solve regularized ERM problems. They structure the parallel updates so that the data that an update depends on is up to date (though the rest of the data may not be). However catalyst acceleration incurs a log(κ) penalty over Nesterov acceleration in general. In Allen-Zhu (2017), the author argues that the inner iterations of catalyst acceleration are hard to tune, making it less practical than Nesterov acceleration.
5 Numerical experiments
To investigate the performance of A2BCD, we solve the ridge regression problem. Consider the following primal and corresponding dual objective (see for instance Lin et al. (2014)):
min_{w∈R^d} P(w) = (1/(2n))‖Aᵀw − l‖² + (λ/2)‖w‖²,   min_{α∈R^n} D(α) = (1/(2d²λ))‖Aα‖² + (1/(2d))‖α + l‖²   (5.1)
where A ∈ Rd×n is a matrix of n samples and d features, and l is a label vector. We let A = [A1, . . . , Am] where Ai are the column blocks of A. We compare A2BCD (which is asynchronous accelerated), synchronous NU_ACDM (which is synchronous accelerated), and asynchronous RBCD (which is asynchronous non-accelerated). Nodes randomly select a coordinate block according to equation 2.1, calculate the corresponding block gradient, and use it to apply an update to the shared solution vectors. Synchronous NU_ACDM is implemented in a batch fashion, with batch size p (1 block per computing node). Nodes in the synchronous NU_ACDM implementation must wait until all nodes apply their computed gradients before they can start the next iteration, but the asynchronous algorithms simply compute with the most up-to-date information available. We use the datasets w1a (47272 samples, 300 features), wxa which combines the data from w1a to w8a (293201 samples, 300 features), and aloi (108000 samples, 128 features) from LIBSVM Chang & Lin (2011). The algorithm is implemented in a multi-threaded fashion using C++11 and the GNU Scientific Library with a shared memory architecture. We use 40 threads on two 2.5GHz 10-core Intel Xeon E5-2670v2 processors. See Section A.1 for a discussion of parameter tuning and estimation. The parameters for each algorithm are tuned to give the fastest performance, so that a fair comparison is possible. A critical ingredient in the efficient implementation of A2BCD and NU_ACDM for this problem is the efficient update scheme discussed in Lee & Sidford (2013b;a). In linear regression applications such as this, it is essential to be able to efficiently maintain or recover Ay. This is because calculating block gradients requires the vector ATi Ay, and without an efficient way to recover Ay, block gradient evaluations are essentially 50% as expensive as full-gradient calculations. Unfortunately, every accelerated iteration results in dense updates to yk because of the averaging step in equation 2.6. Hence Ay must be recalculated from scratch. However Lee & Sidford (2013a) introduces a linear transformation that allows for an equivalent iteration that results in sparse updates to new iteration variables p and q. The original purpose of this transformation was to ensure that the averaging steps (e.g. equation 2.6) do not dominate the computational cost for sparse problems. However we find a more important secondary use which applies to both sparse and dense problems. Since the updates to p and q are sparse coordinate-block updates, the vectors Ap and Aq can be efficiently maintained, and therefore block gradients can be efficiently calculated. The specifics of this efficient implementation are discussed in Section A.2. In Table 5, we plot the sub-optimality vs. time for decreasing values of λ, which corresponds to increasingly large condition numbers κ. When κ is small, acceleration doesn’t result in a significantly better convergence rate, and hence A2BCD and async-RBCD both outperform sync-NU_ACDM since they complete faster iterations at similar complexity. Acceleration for low κ has unnecessary overhead, which means async-RBCD can be quite competitive. When κ becomes large, async-RBCD is no longer competitive, since it has a poor convergence rate. We observe that A2BCD and sync-NU_ACDM have essentially the same convergence rate, but A2BCD is up to 4 − 5× faster than sync-NU_ACDM because it completes much faster iterations.
We observe this advantage despite the fact that we are in an ideal environment for synchronous computation: A small, homogeneous, high-bandwidth, low-latency cluster. In large-scale heterogeneous systems with greater synchronization overhead, bandwidth constraints, and latency, we expect A2BCD’s advantage to be much larger.
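As noted in Section A.1, all function parameters for the dual problem in equation 5.1 can be computed directly from the data matrix and λ. The sketch below (ours; the random matrix is a stand-in for the LIBSVM datasets) computes σ, L, and the block constants L_i from the dual Hessian (1/(λd²))AᵀA + (1/d)I implied by the block-gradient formula of Section A.2.

```python
import numpy as np

# stand-in data: d features, n samples (columns of A), regularizer lam
rng = np.random.default_rng(0)
d, n, lam = 50, 400, 1e-3
A = rng.standard_normal((d, n))

# dual Hessian is (1/(lam * d^2)) A^T A + (1/d) I, hence:
sigma = 1.0 / d                                             # strong convexity (lower bound; exact when n > d)
L = np.linalg.norm(A, 2) ** 2 / (lam * d ** 2) + 1.0 / d    # global Lipschitz constant of the dual gradient

# block Lipschitz constants L_i for a partition of the n dual coordinates
blocks = np.array_split(np.arange(n), 20)
L_blocks = np.array(
    [np.linalg.norm(A[:, b], 2) ** 2 / (lam * d ** 2) + 1.0 / d for b in blocks]
)
S = np.sqrt(L_blocks).sum()
print("kappa =", L / sigma, "  S / sqrt(sigma) =", S / np.sqrt(sigma))
```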
6 Acknowledgement
The authors would like to thank the reviewers for their helpful comments. The research presented in this paper was supported in part by AFOSR MURI FA9550-18-10502, NSF DMS-1720237, and ONR N0001417121.
A Efficient Implementation
An efficient implementation will have coordinate blocks of size greater than 1. This to ensure the efficiency of linear algebra subroutines. Especially because of this, the bulk of the computation for each iteration is computing ∇ikf(ŷk), and not the averaging steps. Hence the computing nodes only need a local copy of yk in order to do the bulk of an iteration’s computation. Given this gradient ∇ikf(ŷk), updating yk and vk is extremely fast (xk can simply be eliminated). Hence it is natural to simply store yk and vk centrally, and update them when the delayed gradients ∇ikf(ŷk). Given the above, a write mutex over (y, v) has minuscule overhead (which we confirm with experiments), and makes the labeling of iterates unambiguous. This also ensures that vk and yk are always up to date when (y, v) are being updated. Whereas the gradient ∇ikf(ŷk) may at the same time be out of date, since it has been calculated with an outdated version of yk. However a write mutex is not necessary in practice, and does not appear to affect convergence rates or computation time. Also it is possible to prove convergence under more general asynchronicity.
A.1 Parameter selection and tuning
When defining the coefficients, σ may be underestimated, and L, L1, . . . , Ln may be overestimated if exact values are unavailable. Notice that xk can be eliminated from the above iteration, and the block gradient ∇ikf(ŷk) only needs to be calculated once per iteration. A larger (or overestimated) maximum delay τ causes a larger asynchronicity parameter ψ, which leads to more conservative step sizes to compensate. To estimate ψ, one can first perform a dry run with all coefficients set to 0 to estimate τ. All function parameters can be calculated exactly for this problem in terms of the data matrix and λ. We can then use these parameters and this τ to calculate ψ. ψ and τ merely change the coefficients; they do not change the execution patterns of the processors, so their specification does not affect the observed delay. Through simple tuning, we found that ψ = 0.25 resulted in good performance. When tuning for general problems, there are theoretical reasons why it is difficult to attain acceleration without some prior knowledge of σ, the strong convexity modulus Arjevani (2017). Ideally σ is pre-specified, for instance via a regularization term. If the Lipschitz constants Li cannot be calculated directly (which is rarely the case for the classic dual problem of empirical risk minimization objectives), the line-search method discussed in Roux et al. (2012), Section 4, can be used.
A.2 Sparse update formulation
As mentioned in Section 5, authors in Lee & Sidford (2013a) proposed a linear transformation of an accelerated RBCD scheme that results in sparse coordinate updates. Our proposed algorithm can be given a similar efficient implementation. We may eliminate xk from A2BCD and derive the equivalent iteration below:
(y_{k+1}, v_{k+1})ᵀ = [1−αβ, αβ; 1−β, β] (y_k, v_k)ᵀ − ((ασ^{−1/2}L_{i_k}^{−1/2} + h(1−α)L_{i_k}^{−1})∇_{i_k}f(ŷ_k), σ^{−1/2}L_{i_k}^{−1/2}∇_{i_k}f(ŷ_k))ᵀ ≜ C (y_k, v_k)ᵀ − Q_k
where C and Q_k are defined in the obvious way. Hence we define auxiliary variables p_k, q_k via:
(y_k, v_k)ᵀ = C^k (p_k, q_k)ᵀ   (A.1)
These clearly follow the iteration:
(p_{k+1}, q_{k+1})ᵀ = (p_k, q_k)ᵀ − C^{−(k+1)} Q_k   (A.2)
Since the vector Q_k is sparse, we can evolve the variables p_k and q_k in a sparse manner, and recover the original iteration variables at the end of the algorithm via equation A.1. The gradient of the dual function is given by:
∇D(y) = (1/(λd)) ( (1/d) AᵀAy + λ(y + l) )
As mentioned before, it is necessary to maintain or recover Ay_k to calculate block gradients. Since Ay_k can be recovered via the linear relation in equation A.1, and the gradient is an affine function, we maintain the auxiliary vectors Ap_k and Aq_k instead. Hence we propose the efficient implementation in Algorithm 1, which we used to generate the results in Table 5. We also note that it can improve performance to periodically recover v_k and y_k, reset the values of p_k, q_k, and C to v_k, y_k, and I respectively, and restart the scheme (which can be done cheaply in time O(d)). We let B ∈ R^{2×2} represent C^k, and b represent B^{−1}. ⊗ is the Kronecker product. Each computing node has local outdated versions of p, q, Ap, Aq, which we denote p̂, q̂, Âp, Âq respectively. We also find it convenient to define:
[D_{k1}; D_{k2}] = [ασ^{−1/2}L_{i_k}^{−1/2} + h(1−α)L_{i_k}^{−1}; σ^{−1/2}L_{i_k}^{−1/2}]   (A.3)
Algorithm 1 Shared-memory implementation of A2BCD
1: Inputs: Function parameters A, λ, L, {L_i}_{i=1}^n, n, d. Delay τ (obtained in a dry run). Starting vectors y, v.
2: Shared data: Solution vectors p, q; auxiliary vectors Ap, Aq; sparsifying matrix B.
3: Node-local data: Solution vectors p̂, q̂; auxiliary vectors Âp, Âq; sparsifying matrix B̂.
4: Calculate parameters ψ, α, β, h via Definition 1. Set k = 0.
5: Initializations: p ← y, q ← v, Ap ← Ay, Aq ← Av, B ← I.
6: while not converged, each computing node asynchronously do
7:    Randomly select block i via equation 2.1.
8:    Read shared data into local memory: p̂ ← p, q̂ ← q, Âp ← Ap, Âq ← Aq, B̂ ← B.
9:    Compute the block gradient: ∇_i f(ŷ) = (1/(nλ)) ( (1/n) A_iᵀ ( B̂_{1,1}Âp + B̂_{1,2}Âq ) + λ ( B̂_{1,1}p̂ + B̂_{1,2}q̂ ) ).
10:   Compute the quantity g_i = A_iᵀ ∇_i f(ŷ).
      Shared-memory updates:
11:   Update B ← [1−αβ, αβ; 1−β, β] × B, and calculate the inverse b ← B^{−1}.
12:   (p, q)ᵀ −= b [D_{k1}; D_{k2}] ⊗ ∇_i f(ŷ),   (Ap, Aq)ᵀ −= b [D_{k1}; D_{k2}] ⊗ g_i.
13:   Increase the iteration count: k ← k + 1.
14: end while
15: Recover the original iteration variables: (y, v)ᵀ ← B (p, q)ᵀ. Output y.
B Proof of the main result
We first recall a couple of inequalities for convex functions. Lemma 7. Let f be σ-strongly convex with L-Lipschitz gradient. Then we have:
f(y) ≤ f(x) + 〈y − x, ∇f(x)〉 + (L/2)‖y − x‖², ∀x, y   (B.1)
f(y) ≥ f(x) + 〈y − x, ∇f(x)〉 + (σ/2)‖y − x‖², ∀x, y   (B.2)
We also find it convenient to define the norm:
‖s‖_* = ( Σ_{i=1}^n L_i^{−1/2} ‖s_i‖² )^{1/2}   (B.3)
B.1 Starting point
First notice that using the definition equation 2.8 of vk+1 we have:
‖vk+1‖2 = ‖βvk + (1− β)yk‖2 − 2σ−1/2L−1/2ik 〈βvk + (1− β)yk,∇ikf(ŷk)〉+ σ −1L−1ik ‖∇ikf(ŷk)‖ 2
Ek‖vk+1‖2 = ‖βvk + (1− β)yk‖2︸ ︷︷ ︸ A −2σ−1/2S−1 〈βvk + (1− β)yk,∇f(ŷk)〉︸ ︷︷ ︸ B
(B.4)
+ S−1σ−1 n∑ i=1 L −1/2 i ‖∇if(ŷk)‖ 2
︸ ︷︷ ︸ C
We have the following general identity:
‖βx+ (1− β)y‖2 = β‖x‖2 + (1− β)‖y‖2 − β(1− β)‖x− y‖2, ∀x, y (B.5) It can also easily be verified from equation 2.6 that we have:
vk = yk + α−1(1− α)(yk − xk) (B.6) Using equation B.5 on term A, equation B.6 on term B, and recalling the definition equation B.3 on term C, we have from equation B.4:
Ek‖vk+1‖2 = β‖vk‖2 + (1− β)‖yk‖2 − β(1− β)‖vk − yk‖2 + S−1σ−1/2‖∇f(ŷk)‖2∗ (B.7) − 2σ−1/2S−1βα−1(1− α)〈yk − xk,∇f(ŷk)〉 − 2σ−1/2S−1〈yk,∇f(ŷk)〉
This inequality is our starting point. We analyze the terms on the second line in the next section.
B.2 The Cross Term
To analyze these terms, we need a small lemma. This lemma is fundamental in allowing us to deal with asynchronicity. Lemma 8. Let χ,A > 0. Let the delay be bounded by τ . Then:
A‖ŷk − yk‖ ≤ 1 2χ −1A2 + 12χτ τ∑ j=1 ‖yk+1−j − yk−j‖2
Proof. See Hannah & Yin (2017a).
Lemma 9. We have:
−〈∇f(ŷk), yk〉 ≤ −f(yk)− 1 2σ(1− ψ)‖yk‖ 2 + 1 2 Lκψ−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2 (B.8)
〈∇f(ŷk), xk − yk〉 ≤ f(xk)− f(yk) (B.9)
+ 1 2 Lα(1− α)−1 κ−1ψβ‖vk − yk‖2 + κψ−1β−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2
The terms in bold in equation B.8 and equation B.9 are a result of the asynchronicity, and are identically 0 in its absence.
Proof. Our strategy is to separately analyze terms that appear in the traditional analysis of Nesterov (2012), and the terms that result from asynchronicity. We first prove equation B.8:
−〈∇f(ŷk), yk〉 = −〈∇f(yk), yk〉 − 〈∇f(ŷk)−∇f(yk), yk〉
≤ −f(yk)− 1 2σ‖yk‖ 2 + L‖ŷk − yk‖‖yk‖ (B.10)
equation B.10 follows from strong convexity (equation B.2 with x = yk and y = x∗), and the fact that ∇f is L-Lipschitz. The term due to asynchronicity becomes:
L‖ŷk − yk‖‖yk‖ ≤ 1 2Lκ −1ψ‖yk‖2 + 1 2Lκψ −1τ τ∑ j=1 ‖yk+1−j − yk−j‖2
using Lemma 8 with χ = κψ−1, A = ‖yk‖. Combining this with equation B.10 completes the proof of equation B.8. We now prove equation B.9:
〈∇f(ŷk), xk − yk〉 = 〈∇f(yk), xk − yk〉+ 〈∇f(ŷk)−∇f(yk), xk − yk〉 ≤ f(xk)− f(yk) + L‖ŷk − yk‖‖xk − yk‖ ≤ f(xk)− f(yk)
+ 12L κ−1ψβα−1(1− α)‖xk − yk‖2 + κψ−1β−1α(1− α)−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2 Here the last line follows from Lemma 8 with χ = κψ−1β−1α(1− α)−1, A = nxk − yk. We can complete the proof using the following identity that can be easily obtained from equation 2.6:
yk − xk = α(1− α)−1(vk − yk)
B.3 Function-value term
Much like Nesterov (2012), we need a f(xk) term in the Lyapunov function (see the middle of page 357). However we additionally need to consider asynchronicity when analyzing the growth of this term. Again terms due to asynchronicity are emboldened. Lemma 10. We have:
Ekf(xk+1) ≤ f(yk)− 1 2h ( 2− h ( 1 + 1 2 σ1/2L−1/2ψ )) S−1‖∇f(ŷk)‖2∗
+ S−1Lσ1/2κψ−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2
Proof. From the definition equation 2.7 of xk+1, we can see that xk+1 − yk is supported on block ik. Since each gradient block ∇if is Li Lipschitz with respect to changes to block i, we can use
equation B.1 to obtain:
f(xk+1) ≤ f(yk) + 〈∇f(yk), xk+1 − yk〉+ 1 2Lik‖xk+1 − yk‖ 2
(from equation 2.7) = f(yk)− hL−1ik 〈∇ikf(yk),∇ikf(ŷk)〉+ 1 2h 2L−1ik ‖∇ikf(ŷk)‖ 2
= f(yk)− hL−1ik 〈∇ikf(yk)−∇ikf(ŷk),∇ikf(ŷk)〉 − 1 2h(2− h)L −1 ik ‖∇ikf(ŷk)‖ 2
Ekf(xk+1) ≤ f(yk)− hS−1 n∑ i=1 L −1/2 i 〈∇if(yk)−∇if(ŷk),∇if(ŷk)〉 − 1 2h(2− h)S −1‖∇f(ŷk)‖2∗
(B.11) Here the last line followed from the definition equation B.3 of the norm ‖·‖∗1/2. We now analyze the middle term:
$$\begin{aligned}
-\sum_{i=1}^{n} L_i^{-1/2}\langle\nabla_i f(y_k)-\nabla_i f(\hat y_k), \nabla_i f(\hat y_k)\rangle
&= -\Big\langle \sum_{i=1}^{n} L_i^{-1/4}\big(\nabla_i f(y_k)-\nabla_i f(\hat y_k)\big),\; \sum_{i=1}^{n} L_i^{-1/4}\nabla_i f(\hat y_k)\Big\rangle \\
\text{(Cauchy--Schwarz)}\quad &\le \Big\|\sum_{i=1}^{n} L_i^{-1/4}\big(\nabla_i f(y_k)-\nabla_i f(\hat y_k)\big)\Big\|\,\Big\|\sum_{i=1}^{n} L_i^{-1/4}\nabla_i f(\hat y_k)\Big\| \\
&= \Big(\sum_{i=1}^{n} L_i^{-1/2}\|\nabla_i f(y_k)-\nabla_i f(\hat y_k)\|^2\Big)^{1/2}\Big(\sum_{i=1}^{n} L_i^{-1/2}\|\nabla_i f(\hat y_k)\|^2\Big)^{1/2} \\
(L \le L_i\ \forall i \text{ and equation B.3})\quad &\le L^{-1/4}\|\nabla f(y_k)-\nabla f(\hat y_k)\|\,\|\nabla f(\hat y_k)\|_* \\
(\nabla f \text{ is } L\text{-Lipschitz})\quad &\le L^{-1/4}L\|y_k - \hat y_k\|\,\|\nabla f(\hat y_k)\|_*
\end{aligned}$$
We then apply Lemma 8 to this with $\chi = 2h^{-1}\sigma^{1/2}L^{1/4}\kappa\psi^{-1}$, $A = \|\nabla f(\hat y_k)\|_*$ to yield:
$$-\sum_{i=1}^{n} L_i^{-1/2}\langle\nabla_i f(y_k)-\nabla_i f(\hat y_k), \nabla_i f(\hat y_k)\rangle \le h^{-1}L\sigma^{1/2}\kappa\psi^{-1}\tau\sum_{j=1}^{\tau}\|y_{k+1-j}-y_{k-j}\|^2 + \tfrac{1}{4}h\sigma^{1/2}L^{-1/2}\psi\|\nabla f(\hat y_k)\|_*^2 \tag{B.12}$$
Finally to complete the proof, we combine equation B.11, with equation B.12.
B.4 Asynchronicity error
The previous inequalities produced difference terms of the form ‖yk+1−j − yk−j‖2. The following lemma shows how these errors can be incorporated into a Lyapunov function. Lemma 11. Let 0 < r < 1 and consider the asynchronicity error and corresponding coefficients:
$$A_k = \sum_{j=1}^{\infty} c_j\|y_{k+1-j} - y_{k-j}\|^2, \qquad c_i = \sum_{j=i}^{\infty} r^{i-j-1}s_j$$
This sum satisfies:
$$\mathbb{E}_k[A_{k+1} - rA_k] = c_1\mathbb{E}_k\|y_{k+1} - y_k\|^2 - \sum_{j=1}^{\infty} s_j\|y_{k+1-j} - y_{k-j}\|^2$$
Remark 2 (Interpretation). This result means that an asynchronicity error term $A_k$ can negate a series of difference terms $-\sum_{j=1}^{\infty} s_j\|y_{k+1-j} - y_{k-j}\|^2$ at the cost of producing an additional error $c_1\mathbb{E}_k\|y_{k+1} - y_k\|^2$, while maintaining a convergence rate of $r$. This essentially converts difference terms, which are hard to deal with, into a $\|y_{k+1} - y_k\|^2$ term which can be negated by other terms in the Lyapunov function. The proof is straightforward.
Proof.
$$\begin{aligned}
\mathbb{E}_k[A_{k+1} - rA_k] &= \mathbb{E}_k\sum_{j=0}^{\infty} c_{j+1}\|y_{k+1-j} - y_{k-j}\|^2 - r\,\mathbb{E}_k\sum_{j=1}^{\infty} c_j\|y_{k+1-j} - y_{k-j}\|^2 \\
&= c_1\mathbb{E}_k\|y_{k+1} - y_k\|^2 + \mathbb{E}_k\sum_{j=1}^{\infty} (c_{j+1} - rc_j)\|y_{k+1-j} - y_{k-j}\|^2
\end{aligned}$$
Noting the following completes the proof:
$$c_{i+1} - rc_i = \sum_{j=i+1}^{\infty} r^{i+1-j-1}s_j - r\sum_{j=i}^{\infty} r^{i-j-1}s_j = -s_i$$
Given that Ak allows us to negate difference terms, we now analyze the cost c1Ek‖yk+1 − yk‖2 of this negation. Lemma 12. We have:
$$\mathbb{E}_k\|y_{k+1} - y_k\|^2 \le 2\alpha^2\beta^2\|v_k - y_k\|^2 + 2S^{-1}L^{-1}\|\nabla f(\hat y_k)\|^2$$
Proof.
$$\begin{aligned}
y_{k+1} - y_k &= (\alpha v_{k+1} + (1-\alpha)x_{k+1}) - y_k \\
&= \alpha\big(\beta v_k + (1-\beta)y_k - \sigma^{-1/2}L_{i_k}^{-1/2}\nabla_{i_k}f(\hat y_k)\big) + (1-\alpha)\big(y_k - hL_{i_k}^{-1}\nabla_{i_k}f(\hat y_k)\big) - y_k \\
&= \alpha\beta v_k + \alpha(1-\beta)y_k - \alpha\sigma^{-1/2}L_{i_k}^{-1/2}\nabla_{i_k}f(\hat y_k) - \alpha y_k - (1-\alpha)hL_{i_k}^{-1}\nabla_{i_k}f(\hat y_k) \\
&= \alpha\beta(v_k - y_k) - \big(\alpha\sigma^{-1/2}L_{i_k}^{-1/2} + h(1-\alpha)L_{i_k}^{-1}\big)\nabla_{i_k}f(\hat y_k)
\end{aligned} \tag{B.13}$$
$$\|y_{k+1} - y_k\|^2 \le 2\alpha^2\beta^2\|v_k - y_k\|^2 + 2\big(\alpha\sigma^{-1/2}L_{i_k}^{-1/2} + h(1-\alpha)L_{i_k}^{-1}\big)^2\|\nabla_{i_k}f(\hat y_k)\|^2 \tag{B.14}$$
Here equation B.13 follows from equation 2.8, the definition of $v_{k+1}$, and equation B.14 follows from the inequality $\|x+y\|^2 \le 2\|x\|^2 + 2\|y\|^2$. The rest is simple algebraic manipulation:
$$\begin{aligned}
\|y_{k+1} - y_k\|^2 &\le 2\alpha^2\beta^2\|v_k - y_k\|^2 + 2L_{i_k}^{-1}\big(\alpha\sigma^{-1/2} + h(1-\alpha)L_{i_k}^{-1/2}\big)^2\|\nabla_{i_k}f(\hat y_k)\|^2 \\
(L \le L_i,\ \forall i)\quad &\le 2\alpha^2\beta^2\|v_k - y_k\|^2 + 2L_{i_k}^{-1}\big(\alpha\sigma^{-1/2} + h(1-\alpha)L^{-1/2}\big)^2\|\nabla_{i_k}f(\hat y_k)\|^2 \\
&= 2\alpha^2\beta^2\|v_k - y_k\|^2 + 2L_{i_k}^{-1}L^{-1}\big(L^{1/2}\sigma^{-1/2}\alpha + h(1-\alpha)\big)^2\|\nabla_{i_k}f(\hat y_k)\|^2 \\
\mathbb{E}\|y_{k+1} - y_k\|^2 &\le 2\alpha^2\beta^2\|v_k - y_k\|^2 + 2S^{-1}L^{-1}\big(L^{1/2}\sigma^{-1/2}\alpha + h(1-\alpha)\big)^2\|\nabla f(\hat y_k)\|_*^2
\end{aligned}$$
Finally, to complete the proof, we prove L1/2σ−1/2α+ h(1− α) ≤ 1. L1/2σ−1/2α+ h(1− α) = h+ α ( L1/2σ−1/2 − h ) (definitions of h and α: equation 2.3, and equation 2.5) = 1− 12σ 1/2L−1/2ψ + σ1/2S−1 ( L1/2σ−1/2
) ≤ 1− σ1/2L−1/2 ( 1 2ψ − σ
−1/2S−1L1 ) (B.15)
Rearranging the definition of ψ, we have:
S−1 = 192ψ 2L1L−3/2κ−1/2τ−2
(τ ≥1 and ψ ≤ 12 ) ≤ 1 182L 1L−3/2κ−1/2
Using this on equation B.15, we have: L1/2ασ−1/2 + h(1− α) ≤ 1− σ1/2L−1/2 (
1 2ψ − 1 182L
1L−3/2κ−1/2σ−1/2L1 )
= 1− σ1/2L−1/2 (
1 2ψ − 1 182 (L/L)
2 )
(ψ ≤ 12 ) = 1− σ 1/2L−1/2 ( 1 24 − 1 182 ) ≤ 1.
This completes the proof.
B.5 Master inequality
We are finally in a position to bring all of the previous results together into a master inequality for the Lyapunov function ρk (defined in equation 2.11). After this lemma is proven, we will prove that the right hand side is negative, which will imply that ρk linearly converges to 0 with rate β.
Lemma 13. Master inequality. We have:
$$\begin{aligned}
\mathbb{E}_k[\rho_{k+1} - \beta\rho_k] \le\;
& \|y_k\|^2 \times \big(1 - \beta - \sigma^{-1/2}S^{-1}\sigma(1-\psi)\big) \\
&+ \|v_k - y_k\|^2 \times \beta\big(2\alpha^2\beta c_1 + S^{-1}\beta L^{1/2}\kappa^{-1/2}\psi - (1-\beta)\big) \\
&+ f(y_k) \times \big(c - 2\sigma^{-1/2}S^{-1}(\beta\alpha^{-1}(1-\alpha) + 1)\big) \\
&+ f(x_k) \times \beta\big(2\sigma^{-1/2}S^{-1}\alpha^{-1}(1-\alpha) - c\big) \\
&+ \sum_{j=1}^{\tau}\|y_{k+1-j} - y_{k-j}\|^2 \times \Big(S^{-1}L\kappa\psi^{-1}\tau\sigma^{1/2}\big(2\sigma^{-1} + c\big) - s\Big) \\
&+ \|\nabla f(\hat y_k)\|_*^2 \times S^{-1}\Big(\sigma^{-1} + 2L^{-1}c_1 - \tfrac{1}{2}ch\big(2 - h(1 + \tfrac{1}{2}\sigma^{1/2}L^{-1/2}\psi)\big)\Big)
\end{aligned} \tag{B.16}$$
Proof.
Ek‖vk+1‖2 − β‖vk‖2
(B.7) = (1− β)‖yk‖2 − β(1− β)‖vk − yk‖2 + S−1σ−1‖∇f(ŷk)‖2∗ − 2σ−1/2S−1〈yk,∇f(ŷk)〉 − 2σ−1/2S−1βα−1(1− α)〈yk − xk,∇f(ŷk)〉 ≤ (1− β)‖yk‖2 − β(1− β)‖vk − yk‖2 + S−1σ−1‖∇f(ŷk)‖2∗ (B.17)
(B.8) + 2σ−1/2S−1 −f(yk)− 12σ(1− ψ)‖yk‖2 + 12Lκψ−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2 (B.9)− 2σ−1/2S−1βα−1(1− α)(f(xk)− f(yk))
+ σ−1/2S−1βL κ−1ψβ‖vk − yk‖2 + κψ−1β−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2 We now collect and organize the similar terms of this inequality.
≤+ ‖yk‖2 × ( 1− β − σ−1/2S−1σ(1− ψ) )
+ ‖vk − yk‖2 ×β ( σ−1/2S−1βLκ−1ψ − (1− β) ) − f(yk) ×2σ−1/2S−1 ( βα−1(1− α) + 1
) + f(xk) ×2σ−1/2S−1βα−1(1− α)
+ τ∑ j=1 ‖yk+1−j − yk−j‖2 ×2σ−1/2S−1Lκψ−1τ
+ ‖∇f(ŷk)‖2∗ ×σ −1S−1
Now finally, we add the function-value and asynchronicity terms to our analysis. We use Lemma 11 with $r = 1 - \sigma^{1/2}S^{-1}$ and
$$s_i = \begin{cases} s = 6S^{-1}L^{1/2}\kappa^{3/2}\psi^{-1}\tau, & 1 \le i \le \tau \\ 0, & i > \tau \end{cases} \tag{B.18}$$
Notice that this choice of si will recover the coefficient formula given in equation 2.9. Hence we have:
Ek[cf(xk+1) +Ak+1 − β(cf(xk) +Ak)]
(Lemma 10) ≤ cf(yk)− 1 2ch
( 2− h ( 1 + 12σ 1/2L−1/2ψ )) S−1‖∇f(ŷk)‖2∗ − βcf(xk)
(B.19)
+ S−1Lσ1/2κψ−1τ τ∑ j=1 ‖yk+1−j − yk−j‖2
(Lemmas 11 and 12) + c1 ( 2α2β2‖vk − yk‖2 + 2S−1L−1‖∇f(ŷk)‖2 )
(B.20)
− ∞∑ j=1 sj‖yk+1−j − yk−j‖2 +Ak(r − β)
Notice Ak(r − β) ≤ 0. Finally, combining equation B.17 and equation B.19 completes the proof.
In the next section, we will prove that every coefficient on the right hand side of equation B.16 is 0 or less, which will complete the proof of Theorem 1.
B.6 Proof of main theorem
Lemma 14. The coefficients of $\|y_k\|^2$, $f(y_k)$, and $\sum_{j=1}^{\tau}\|y_{k+1-j} - y_{k-j}\|^2$ in Lemma 13 are non-positive.
Proof. The coefficient $1 - (1-\psi)\sigma^{1/2}S^{-1} - \beta$ of $\|y_k\|^2$ is identically 0 via the definition equation 2.4 of $\beta$. The coefficient $c - 2\sigma^{-1/2}S^{-1}\big(\beta\alpha^{-1}(1-\alpha) + 1\big)$ of $f(y_k)$ is identically 0 via the definition equation 2.12 of $c$. First notice from the definition equation 2.12 of $c$:
$$\begin{aligned}
c &= 2\sigma^{-1/2}S^{-1}\big(\beta\alpha^{-1}(1-\alpha) + 1\big) \\
\text{(definitions of } \alpha, \beta\text{)}\quad &= 2\sigma^{-1/2}S^{-1}\Big(\big(1 - \sigma^{1/2}S^{-1}(1-\psi)\big)(1+\psi)\sigma^{-1/2}S + 1\Big) \\
&= 2\sigma^{-1/2}S^{-1}\big((1+\psi)\sigma^{-1/2}S + \psi^2\big) \\
&= 2\sigma^{-1}\big((1+\psi) + \psi^2\sigma^{1/2}S^{-1}\big)
\end{aligned} \tag{B.21}$$
$$c \le 4\sigma^{-1} \tag{B.22}$$
Here the last line followed since $\psi \le \tfrac{1}{2}$ and $\sigma^{1/2}S^{-1} \le 1$. We now analyze the coefficient of $\sum_{j=1}^{\tau}\|y_{k+1-j} - y_{k-j}\|^2$:
$$S^{-1}L\kappa\psi^{-1}\tau\sigma^{1/2}\big(2\sigma^{-1} + c\big) - s \;\overset{\text{(B.22)}}{\le}\; 6S^{-1}L^{1/2}\kappa^{3/2}\psi^{-1}\tau - s \;\overset{\text{(definition B.18 of } s\text{)}}{\le}\; 0$$
Lemma 15. The coefficient β ( 2σ−1/2S−1α−1(1− α)− c ) of f(xk) in Lemma 13 is non-positive.
Proof.
$$\begin{aligned}
2\sigma^{-1/2}S^{-1}\alpha^{-1}(1-\alpha) - c
&\overset{\text{(B.21)}}{=} 2\sigma^{-1/2}S^{-1}(1+\psi)\sigma^{-1/2}S - 2\sigma^{-1}\big((1+\psi) + \psi^2\sigma^{1/2}S^{-1}\big) \\
&= 2\sigma^{-1}\Big((1+\psi) - \big((1+\psi) + \psi^2\sigma^{1/2}S^{-1}\big)\Big) \\
&= -2\psi^2\sigma^{-1/2}S^{-1} \le 0
\end{aligned}$$
Lemma 16. The coefficient S−1 ( σ−1 + 2L−1c1 − 12ch ( 2− h ( 1 + 12σ 1/2L−1/2ψ )))
of ‖∇f(ŷk)‖2∗ in Lemma 13 is non-positive.
Proof. We first need to bound c1.
$$\begin{aligned}
\text{(equation B.18 and equation 2.9)}\quad c_1 &= s\sum_{j=1}^{\tau}\big(1 - \sigma^{1/2}S^{-1}\big)^{-j}
\le 6S^{-1}L^{1/2}\kappa^{3/2}\psi^{-1}\tau\sum_{j=1}^{\tau}\big(1 - \sigma^{1/2}S^{-1}\big)^{-j} \\
&\le 6S^{-1}L^{1/2}\kappa^{3/2}\psi^{-1}\tau^2\big(1 - \sigma^{1/2}S^{-1}\big)^{-\tau}
\end{aligned}$$
It can be easily verified that if $x \le \tfrac{1}{2}$ and $y \ge 0$, then $(1-x)^{-y} \le \exp(2xy)$. Using this fact with $x = \sigma^{1/2}S^{-1}$ and $y = \tau$, we have:
$$\begin{aligned}
c_1 &\le 6S^{-1}L^{1/2}\kappa^{3/2}\psi^{-1}\tau^2\exp\big(\tau\sigma^{1/2}S^{-1}\big) \\
\text{(since } \psi \le 3/7 \text{ and hence } \tau\sigma^{1/2}S^{-1} \le \tfrac{1}{7}\text{)}\quad &\le S^{-1}L^{1/2}\kappa^{3/2}\psi^{-1}\tau^2 \times 6\exp\big(\tfrac{1}{7}\big) \\
c_1 &\le 7S^{-1}L^{1/2}\kappa^{3/2}\psi^{-1}\tau^2
\end{aligned} \tag{B.23}$$
We now analyze the coefficient of $\|\nabla f(\hat y_k)\|_*^2$:
$$\begin{aligned}
&\sigma^{-1} + 2L^{-1}c_1 - \tfrac{1}{2}ch\Big(2 - h\big(1 + \tfrac{1}{2}\sigma^{1/2}L^{-1/2}\psi\big)\Big) \\
\text{(B.23 and 2.5)}\quad &\le \sigma^{-1} + 14S^{-1}L^{-1}L^{1/2}\kappa^{3/2}\psi^{-1}\tau^2 - \tfrac{1}{2}ch\big(1 + \tfrac{1}{4}\sigma L^{-1}\psi^2\big) \\
&\le \sigma^{-1} + 14S^{-1}L^{-1}L^{1/2}\kappa^{3/2}\psi^{-1}\tau^2 - \tfrac{1}{2}ch \\
\text{(definition 2.2 of } \psi\text{)}\quad &= \sigma^{-1} + \tfrac{14}{81}\sigma^{-1}\psi - \tfrac{1}{2}ch \\
\text{(B.21, definition 2.5 of } h\text{)}\quad &= \sigma^{-1}\Big(1 + \tfrac{14}{81}\psi - \big((1+\psi) + \psi^2\sigma^{1/2}S^{-1}\big)\big(1 - \tfrac{1}{2}\sigma^{1/2}L^{-1/2}\psi\big)\Big) \\
(\sigma^{1/2}L^{-1/2} \le 1 \text{ and } \sigma^{1/2}S^{-1} \le 1)\quad &\le \sigma^{-1}\Big(1 + \tfrac{14}{81}\psi - (1+\psi)\big(1 - \tfrac{1}{2}\psi\big)\Big) \\
&= \sigma^{-1}\psi\Big(\tfrac{14}{81} + \tfrac{1}{2}\psi - \tfrac{1}{2}\Big) \\
(\psi \le \tfrac{1}{2})\quad &\le 0
\end{aligned}$$
Lemma 17. The coefficient β ( 2α2βc1 + S−1βL1/2κ−1/2ψ − (1− β) ) of ‖vk − yk‖2 in 13 is nonpositive.
Proof. 2α2βc1 + σ1/2S−1βψ − (1− ψ)σ1/2S−1
(B.23) ≤ 14α2βS−1L1/2κ3/2ψ−1τ2 + σ1/2S−1βψ − (1− ψ)σ1/2S−1
≤ 14σS−3L1/2κ3/2ψ−1τ2 + σ1/2S−1ψ − (1− ψ)σ1/2S−1 = σ1/2S−1 ( 14S−2Lκτ2ψ−1 + 2ψ − 1 ) Here the last inequality follows since β ≤ 1 and α ≤ σ1/2S−1. We now rearrange the definition of ψ to yield the identity:
S−2κ = 194L 2L−3τ−4ψ4
Using this, we have: 14S−2Lκτ2ψ−1 + 2ψ − 1
= 1494 L 2L−2ψ3τ−2 + 2ψ − 1
≤ 1494 ( 3 7 )3 1−2 + 67 − 1 ≤ 0
Here the last line followed since L ≤ L, ψ ≤ 37 , and τ ≥ 1. Hence the proof is complete.
Proof of Theorem 1. Using the master inequality (Lemma 13) in combination with the previous Lemmas 14, 15, 16, and 17, we have:
$$\mathbb{E}_k[\rho_{k+1}] \le \beta\rho_k = \big(1 - (1-\psi)\sigma^{1/2}S^{-1}\big)\rho_k$$
When $\big(1 - (1-\psi)\sigma^{1/2}S^{-1}\big)^k \le \epsilon$, the Lyapunov function $\rho_k$ has decreased below $\epsilon\rho_0$ in expectation. Hence the complexity $K(\epsilon)$ satisfies:
$$K(\epsilon)\ln\big(1 - (1-\psi)\sigma^{1/2}S^{-1}\big) = \ln(\epsilon), \qquad K(\epsilon) = \frac{-1}{\ln\big(1 - (1-\psi)\sigma^{1/2}S^{-1}\big)}\ln(1/\epsilon)$$
Now it can be shown that for $0 < x \le \tfrac{1}{2}$, we have:
$$\frac{1}{x} - 1 \le \frac{-1}{\ln(1-x)} \le \frac{1}{x} - \frac{1}{2}, \qquad \frac{-1}{\ln(1-x)} = \frac{1}{x} + O(1)$$
Since $n \ge 2$, we have $\sigma^{1/2}S^{-1} \le \tfrac{1}{2}$. Hence:
$$K(\epsilon) = \frac{1}{1-\psi}\big(\sigma^{-1/2}S + O(1)\big)\ln(1/\epsilon)$$
An expression for $K_{\mathrm{NU\_ACDM}}(\epsilon)$, the complexity of NU_ACDM, follows by similar reasoning:
$$K_{\mathrm{NU\_ACDM}}(\epsilon) = \big(\sigma^{-1/2}S + O(1)\big)\ln(1/\epsilon) \tag{B.24}$$
Finally we have:
$$K(\epsilon) = \frac{1}{1-\psi}\left(\frac{\sigma^{-1/2}S + O(1)}{\sigma^{-1/2}S + O(1)}\right)K_{\mathrm{NU\_ACDM}}(\epsilon) = \frac{1}{1-\psi}(1 + o(1))\,K_{\mathrm{NU\_ACDM}}(\epsilon)$$
which completes the proof.
C Ordinary Differential Equation Analysis
C.1 Derivation of ODE for synchronous A2BCD
If we take expectations with respect to $\mathbb{E}_k$, then synchronous (no delay) A2BCD becomes:
$$y_k = \alpha v_k + (1-\alpha)x_k, \qquad \mathbb{E}_k x_{k+1} = y_k - n^{-1}\kappa^{-1}\nabla f(y_k), \qquad \mathbb{E}_k v_{k+1} = \beta v_k + (1-\beta)y_k - n^{-1}\kappa^{-1/2}\nabla f(y_k)$$
We find it convenient to define $\eta = n\kappa^{1/2}$. Inspired by this, we consider the following iteration:
$$\begin{aligned}
y_k &= \alpha v_k + (1-\alpha)x_k & \text{(C.1)} \\
x_{k+1} &= y_k - s^{1/2}\kappa^{-1/2}\eta^{-1}\nabla f(y_k) & \text{(C.2)} \\
v_{k+1} &= \beta v_k + (1-\beta)y_k - s^{1/2}\eta^{-1}\nabla f(y_k) & \text{(C.3)}
\end{aligned}$$
for coefficients:
$$\alpha = \big(1 + s^{-1/2}\eta\big)^{-1}, \qquad \beta = 1 - s^{1/2}\eta^{-1}$$
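The derivation below manipulates this iteration symbolically; as a quick numerical illustration of (C.1)–(C.3), the following sketch runs the iteration on a small strongly convex quadratic. The quadratic, the dimension, the iteration count, and the fixed small value of the scale parameter s are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

n = 20
A = np.diag(np.linspace(1.0, 10.0, n))      # f(x) = 0.5 x^T A x, minimizer 0, kappa = 10
kappa = 10.0
grad = lambda x: A @ x

eta = n * np.sqrt(kappa)
s = 0.01                                     # fixed small discretization scale
alpha = 1.0 / (1.0 + eta / np.sqrt(s))       # alpha = (1 + s^{-1/2} eta)^{-1}
beta = 1.0 - np.sqrt(s) / eta                # beta  = 1 - s^{1/2} eta^{-1}

x = np.random.default_rng(0).standard_normal(n)
v = x.copy()
for _ in range(20000):
    y = alpha * v + (1 - alpha) * x                              # (C.1)
    x = y - np.sqrt(s) / (np.sqrt(kappa) * eta) * grad(y)        # (C.2)
    v = beta * v + (1 - beta) * y - np.sqrt(s) / eta * grad(y)   # (C.3)
print(np.linalg.norm(x))                     # decays toward 0, the minimizer of f
```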
s is a discretization scale parameter that will be sent to 0 to obtain an ODE analogue of synchronous A2BCD. We first use equation B.6 to eliminate vk from equation C.3.
0 = −vk+1 + βvk + (1− β)yk − s1/2η−1∇f(yk) 0 = −α−1yk+1 + α−1(1− α)xk+1
+ β ( α−1yk − α−1(1− α)xk ) + (1− β)yk − s1/2η−1∇f(yk)
(times by α) 0 = −yk+1 + (1− α)xk+1 + β(yk − (1− α)xk) + α(1− β)yk − αs1/2η−1∇f(yk) = −yk+1 + yk(β + α(1− β)) + (1− α)xk+1 − xkβ(1− α)− αs1/2η−1∇f(yk)
We now eliminate xk using equation C.1:
0 = −yk+1 + yk(β + α(1− β)) + (1− α) ( yk − s1/2η−1κ−1/2∇f(yk) ) − ( yk−1 − s1/2η−1κ−1/2∇f(yk−1) ) β(1− α)
− αs1/2η−1∇f(yk) = −yk+1 + yk(β + α(1− β) + (1− α))− β(1− α)yk−1 + s1/2η−1∇f(yk−1)(β − 1)(1− α) − αs1/2η−1∇f(yk) = (yk − yk+1) + β(1− α)(yk − yk−1) + s1/2η−1(∇f(yk−1)(β − 1)(1− α)− α∇f(yk))
Now to derive an ODE, we let yk = Y ( ks1/2 ) . Then ∇f(yk−1) = ∇f(yk) + O ( s1/2 ) . Hence the above becomes:
0 = (yk − yk+1) + β(1− α)(yk − yk−1) + s1/2η−1((β − 1)(1− α)− α)∇f(yk) +O ( s3/2 ) 0 = ( −s1/2Ẏ − 12sŸ ) + β(1− α) ( s1/2Ẏ − 12sŸ ) (C.4)
+ s1/2η−1((β − 1)(1− α)− α)∇f(yk) +O ( s3/2 )
We now look at some of the terms in this equation to find the highest-order dependence on s.
β(1− α) = ( 1− s1/2η−1 )(
1− 1 1 + s−1/2η ) = ( 1− s1/2η−1 ) s−1/2η
1 + s−1/2η
= s −1/2η − 1 s−1/2η + 1
= 1− s 1/2η−1
1 + s1/2η−1
= 1− 2s1/2η−1 +O(s)
We also have:
(β − 1)(1− α)− α = β(1− α)− 1 = −2s1/2η−1 +O(s)
Hence using these facts on equation C.4, we have:
$$\begin{aligned}
0 &= \big(-s^{1/2}\dot Y - \tfrac{1}{2}s\ddot Y\big) + \big(1 - 2s^{1/2}\eta^{-1} + O(s)\big)\big(s^{1/2}\dot Y - \tfrac{1}{2}s\ddot Y\big) + s^{1/2}\eta^{-1}\big(-2s^{1/2}\eta^{-1} + O(s)\big)\nabla f(y_k) + O\big(s^{3/2}\big) \\
0 &= -s^{1/2}\dot Y - \tfrac{1}{2}s\ddot Y + \big(s^{1/2}\dot Y - \tfrac{1}{2}s\ddot Y - 2s\eta^{-1}\dot Y + O(s^{3/2})\big) + \big(-2s\eta^{-2} + O(s^{3/2})\big)\nabla f(y_k) + O\big(s^{3/2}\big) \\
0 &= -s\ddot Y - 2s\eta^{-1}\dot Y - 2s\eta^{-2}\nabla f(y_k) + O\big(s^{3/2}\big) \\
0 &= -\ddot Y - 2\eta^{-1}\dot Y - 2\eta^{-2}\nabla f(y_k) + O\big(s^{1/2}\big)
\end{aligned}$$
Taking the limit as $s \to 0$, we obtain the ODE $\ddot Y + 2\eta^{-1}\dot Y + 2\eta^{-2}\nabla f(Y) = 0$. | 1. What are the main contributions and novel aspects of the paper in distributed optimization?
2. What are the strengths of the paper, particularly in its theoretical analysis and elegance?
3. Do you have any concerns or suggestions regarding the paper's comparisons with other works and extensions to partially separable functions? | Review | Review
In distributed optimisation, it is well known that asynchronous methods outperform synchronous methods in many cases. However, the question as to whether (and when) asynchronous methods can be shown to have any speed-up, as the number of nodes increases, has remained open. The paper under review answers the question in the affirmative and does so very elegantly.
I have only a few minor quibbles and a question. There are some recent papers that could be cited:
http://proceedings.mlr.press/v80/zhou18b.html
http://proceedings.mlr.press/v80/lian18a.html
https://nips.cc/Conferences/2018/Schedule?showEvent=11368
and the formatting of the bibliography needs to be improved.
In the synchronous case, some of the analyses extend to partially separable functions, e.g.:
https://arxiv.org/abs/1406.0238
and citations thereof. Would it be possible to extend the present work in that direction? |
ICLR | Title
Visual Representation Learning over Latent Domains
Abstract
A fundamental shortcoming of deep neural networks is their specialization to a single task and domain. While multi-domain learning enables the learning of compact models that span multiple visual domains, these rely on the presence of domain labels, in turn requiring laborious curation of datasets. This paper proposes a less explored, but highly realistic new setting called latent domain learning: learning over data from different domains, without access to domain annotations. Experiments show that this setting is challenging for standard models and existing multi-domain approaches, calling for new customized solutions: a sparse adaptation strategy is formulated which enhances performance by accounting for latent domains in data. Our method can be paired seamlessly with existing models, and benefits conceptually related tasks, e.g. empirical fairness problems and long-tailed recognition.
N/A
A fundamental shortcoming of deep neural networks is their specialization to a single task and domain. While multi-domain learning enables the learning of compact models that span multiple visual domains, these rely on the presence of domain labels, in turn requiring laborious curation of datasets. This paper proposes a less explored, but highly realistic new setting called latent domain learning: learning over data from different domains, without access to domain annotations. Experiments show that this setting is challenging for standard models and existing multi-domain approaches, calling for new customized solutions: a sparse adaptation strategy is formulated which enhances performance by accounting for latent domains in data. Our method can be paired seamlessly with existing models, and benefits conceptually related tasks, e.g. empirical fairness problems and long-tailed recognition.
1 INTRODUCTION
Datasets have been a major driving force behind the rapid progress in computer vision research in the last two decades. They provide a testbed for developing new algorithms and comparing them to existing ones. However, datasets can also narrow down the focus of research into overspecialized solutions and impede developing a broader understanding of the world.
In recent years this narrow scope of datasets has been widely questioned (Torralba & Efros, 2011; Tommasi et al., 2017; Recht et al., 2019) and addressing some of these limitations has become a very active area of research. Two actively studied themes to investigate broader learning criteria are multi-domain learning (Nam & Han, 2016; Bulat et al., 2019; Schoenauer-Sebag et al., 2019) and domain adaptation (Ganin et al., 2016; Tzeng et al., 2017; Hoffman et al., 2018; Xu et al., 2018; Peng et al., 2019a; Sun et al., 2019b). While multi-domain techniques focus on learning a single model that can generalize over multiple domains, domain adaptation techniques aim to efficiently transfer the representations that are learned in one dataset to another.
Related themes have also been studied in domain generalization (Li et al., 2018; 2019b;a; Gulrajani & Lopez-Paz, 2020) and continual learning (Kirkpatrick et al., 2017; Lopez-Paz & Ranzato, 2017; Riemer et al., 2019), where the focus lies on learning representations that can generalize to unseen domains, and to preserve knowledge acquired from previously seen tasks, respectively.
While there exists no canonical definition for what exactly a visual domain is, previous works in multi-domain learning assume that different subsets of data exist, with some defining characteristic that allows them to be separated from each other. Each subset, indexed by d = 1, . . . , D, is assigned to a pre-defined visual domain, and vice-versa multi-domain methods then use such domain associations to parameterize their representations and learn some pθ(y|x, d). In some cases domains are intuitive and their annotation straightforward. Consider a problem where images have little visual relationship, for example joint learning of Omniglot handwritten symbols (Lake et al., 2015) and CIFAR-10 objects (Krizhevsky & Hinton, 2009). In this case, it is safe to assume that encoding an explicit domain-specific identifier into pθ is a good idea, and results in the multi-domain literature provide clear evidence that it is beneficial to do so (Rebuffi et al., 2018; Liu et al., 2019a; Guo et al., 2019a; Mancini et al., 2020).
The assumption that domain labels are always available has been widely adopted in multi-domain learning; however this assumption is not without difficulty. For one, unless the process of domain annotation is automated due to combining existing datasets as in e.g. Rebuffi et al. (2017), their manual collection, curation, and domain labeling is very laborious.
And even if adequate resources exist, it is often difficult to decide the optimal criteria for the annotation of d: some datasets contain sketches, paintings and real world images (Li et al., 2017), others images captured during day or night (Sultani et al., 2018). Automatically collected datasets (Thomee et al., 2016; Sun et al., 2017) contain mixtures of low/high resolution images, taken with different cameras by amateurs/professionals. There is no obvious answer which of these should form their own distinct domain subset.
Moreover, the work of Bouchacourt et al. (2018) considers semantic groupings of data: they show that when dividing data by subcategories, such as size, shape, etc., and incorporating this information into the model, then this benefits performance. Should one therefore also encode the number of objects into domains, or their color, shape, and so on?
Given the relatively loose requirement that domains are supposed to be different while related in some sense (Pan & Yang, 2009), these examples hint at the difficulty of deciding whether domains are needed, and – if the answer to that is yes – what the optimal domain criteria are. And note that even if such assignments are made very carefully for some problem, nothing guarantees that they will transfer effectively to some other task.
This paper carefully investigates this ambiguity and studies two central questions:
1. Are domain labels always optimal for learning multi-domain representations?
2. How can models best be learned that generalize well over visually diverse domains, without domain labels?
To study this problem, we introduce a new setting (c.f. Fig. 1) in which models are learned over multiple domains without domain annotations — latent domain learning for short.
While latent domain learning is a highly practical research problem in the context of transfer learning, it poses multiple challenges that have not been previously investigated in connection with deep visual representation learning. In particular, we find that the removal of domain associations leads to performance losses for standard architectures due to imbalances in the underlying distribution and different difficulty levels of the associated domain-level tasks.
We carry out a rigorous quantitative analysis that includes concepts from multi-domain learning (Rebuffi et al., 2018; Chang et al., 2018), and find that their performance benefits do not directly extend to latent domain learning. To account for this lost performance, we formulate a novel method called sparse latent adaptation (Section 3.2) which enables internal feature representations to dynamically adapt to instances from multiple domains in data, without requiring annotations for this. Moreover, we show that latent domain methods appear to benefit single domain data and real world tasks, such as fairness problems (Appendix F), and long-tailed recognition (Appendix G).
2 LATENT DOMAIN LEARNING
This section provides an overview over latent domain learning and contrasts it against other types of related learning problems, in particular multi-domain learning.
2.1 PROBLEM SETTING
When learning on multiple domains, the common assumption is that data is sampled i.i.d. from a mixture of distributions Pd with domain indices d = 1, . . . , D. Together, they constitute the datagenerating distribution as P = ∑ d πdPd, where each domain is associated with a relative share πd = Nd/N , with N the total number of samples, and Nd those belonging to the d’th domain. In multi-domain learning, domain labels are available for all samples (Nam & Han, 2016; Rebuffi et al., 2017; 2018; Bulat et al., 2019), such that the overall data available for learning consists of DMD = {(xi, di, yi)} with i = 1, . . . , N . In latent domain learning the information associating each sample xi with a domain di is not available. As such, domain-specific labels yi cannot be inferred from sample-domain pairs (xi, di) and one is instead forced to learn a single model fθ over the latent domain dataset DLD = {(xi, yi)}. While latent domain learning can include mutually exclusive classes and disjoint label spaces Y1 ∪ · · · ∪ YD (as in long-tailed recognition, see Appendix G), we mainly focus on the setting of shared label spaces, i.e. Yd = Yd′ . For example a dataset may contain images of dogs or elephants that can appear as either photos, paintings, or sketches.
Latent domains have previously attracted interest in the context of domain adaptation, where the lack of annotations was recovered through hierarchical Hoffman et al. (2012) and kernel-based clustering (Gong et al., 2013), via exemplar SVMs (Xu et al., 2014), or by measuring mutual information (Xiong et al., 2014). More recent work corrects batch statistics of domain adaptation layers using Gaussian mixtures (Mancini et al., 2018), or studies the shift from some source domain to a target distribution that contains multiple latent domains (Peng et al., 2019b; Matsuura & Harada, 2020). Latent domain learning however differs fundamentally from these works: Table 1 contains a comparison to existing transfer learning settings.
A common baseline in multi-domain learning is to finetune D models, one for each individual domain (Rebuffi et al., 2018; Liu et al., 2019a). This requires learning a large number of parameters and shares no parameters across domains, but can serve as a strong baseline to compare against. We show that in many cases, even when domains were carefully annotated, a dynamic latent domain approach can surpass the performance of such domain-supervised baselines (see Section 4).
2.2 OBSERVED VS. UNIFORM ACCURACY
Consider a problem in which the data is sampled i.i.d. from $P = \pi_a P_{d_a} + \pi_b P_{d_b}$, i.e. two hidden domains. When domain labels are not available in the data, a standard strategy is to treat all samples equally, and measure the observed accuracy:
$$\mathrm{OAcc}[f] = \mathbb{E}_{(x_i, y_i)\sim P}\big[\mathbf{1}_{y_f(x_i) = y_i}\big], \tag{1}$$
where yf denotes the class assigned to sample xi by the model f , and yi its corresponding label for training. The OAcc has a problematic property: if P consists of two imbalanced domains such that πa ≥ πb, then the performance on da dominates it. For example if da has a 90% overall share, and the model perfectly classifies this domain while obtaining 0% accuracy on db, then OAcc would still assume 0.9, hiding the underlying damage to domain db.
This motivates alternative formulations for latent domain learning, to anticipate (and account for) imbalanced domains in data. If it is possible to identify some semantic domain labeling (as typically included in multi-domain/domain adaptation benchmarks), one can compare performances across individual subgroups. This allows picking up on domain-specific performance losses which traditional metrics (such as OAcc) fail to capture.
Where this is possible, we therefore propose to also measure latent domain performance in terms of uniform accuracy which decouples accuracies from relative ground-truth domain sizes:
$$\mathrm{UAcc}[f] = \frac{1}{D}\sum_{d=1}^{D}\mathbb{E}_{(x_i, y_i)\sim P_d}\big[\mathbf{1}_{y_f(x_i) = y_i}\big]. \tag{2}$$
Returning to the above example, a uniform measurement reflects the model’s lack of performance on db as UAcc = 0.5. Once again note while ground-truth domain annotations are required in order to compute uniform accuracy, these are never used to train latent domain models.
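As an illustration of how the two metrics are computed in practice, the following sketch evaluates both from per-sample predictions. The function name and array layout are our own choices; ground-truth domain labels are used here for evaluation only, never for training.

```python
import numpy as np

def observed_and_uniform_accuracy(y_pred, y_true, domains):
    """Eq. (1): pooled accuracy over all samples; eq. (2): mean of per-domain accuracies."""
    y_pred, y_true, domains = map(np.asarray, (y_pred, y_true, domains))
    correct = (y_pred == y_true)
    oacc = correct.mean()
    uacc = np.mean([correct[domains == d].mean() for d in np.unique(domains)])
    return oacc, uacc

# Toy version of the example above: a 90%/10% domain split in which the small
# domain is fully misclassified gives OAcc = 0.9 but UAcc = 0.5.
```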
3 METHODS
To enable robust learning in the new proposed setting, we formulate a novel module called sparse latent adaptation which can adaptively account for latent domains. Section 3.1 reviews adaptation strategies popular in the multi-domain context, which our method extends (and generalizes).
3.1 LATENT ADAPTATION
When domain labels d are available (not the case in latent domain learning) one strategy established by Rebuffi et al. (2017) is to modulate networks by constraining the layerwise transformation of residual networks (He et al., 2016) Φ(x) = x + f(x) to allow at most a linear change Vd per each domain from some pretrained mapping Φ0 (with f0 in every layer), whereby Φ(x)−Φ0(x) = Vdx. Note the slight abuse of notation here in letting x denote a feature map with channels C. Rearranging this yields:
$$\Phi(x, d) = x + f_0(x) + \sum_{d=1}^{D} g_d V_d(x), \tag{3}$$
with a domain-supervised switch that assigns corrections to domains, i.e. gd = 1 for d associated with x and 0 otherwise. Each Vd is parametrized through 1x1 convolutions, and f0 denotes a shared 3x3 convolution obtained e.g. on ImageNet (Deng et al., 2009). This builds on the assumption that models with strong general-purpose representations require minimal changes to adapt to new tasks (Bilen & Vedaldi, 2017), making learning each Vd sufficient, while f0 remains as is. Such adaptation strategies have been successfully used in few shot learning (Li et al., 2021) and NLP (Stickland & Murray, 2019) to restrict the number of learnable parameters there.
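A minimal sketch of the domain-supervised adaptation in eq. (3) is given below; it assumes, as is common in multi-domain training, that each batch comes from a single annotated domain d. The layer sizes and the omission of normalization layers are simplifications made for illustration.

```python
import torch.nn as nn

class DomainAdapter(nn.Module):
    """Residual block with a frozen shared 3x3 convolution f0 and one learnable
    1x1 correction V_d per annotated domain, selected by the domain index d."""
    def __init__(self, channels: int, num_domains: int):
        super().__init__()
        self.f0 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)  # pretrained, kept fixed
        self.f0.requires_grad_(False)
        self.V = nn.ModuleList(
            [nn.Conv2d(channels, channels, 1, bias=False) for _ in range(num_domains)])

    def forward(self, x, d: int):
        return x + self.f0(x) + self.V[d](x)   # Phi(x, d) of eq. (3)
```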
In latent domain learning access to d is removed, resulting in two new challenges: we have no a priori information about the right number of corrections {Vd}, and we cannot use d to decide which one of these to apply.
To mitigate the lack of domain labels d, first we assume that input data is constituted by K latent distributions Pk. Second we propose to replace the switch gd with a learnable gating mechanism g1(x), . . . , gK(x) that assigns each sample x to latent domains as follows:
$$\Phi(x) = x + f_0(x) + \sum_{k=1}^{K} g_k(x)\,V_k(x), \tag{4}$$
The gates gk control which convolution is applied to which sample x, and correspond to a categorical variable over K categories, i.e. 0 ≤ gk ≤ 1 and ∑ k gk = 1. Note in particular how parametric dependency of Φ on d is removed. How to best choose K is discussed in more detail in Section 4.
While we motivate our latent domain module from learning over multiple domains, the main goal is not to recover the domain labels annotated in some datasets. When optimizing some loss (standard cross-entropy in the classification case), there is no guarantee that the learned Vk will correspond to an annotated visual domain and many additional factors (shape, pose, color, etc.) can enter them as
well. Latent domain models are simply optimized to produce the lowest training error, and in fact seldom recover ground-truth domains (c.f. Fig. 5). Note the broader concept presented here may in principle also be incorporated with other multi-task concepts (Perez et al., 2018; Guo et al., 2019a), adaptation strategies however stand out due to their methodological simplicity.
Different options exist for parametrizing the gating function g : X → G ⊆ RK . An ideal gating mechanism for latent domain learning would fulfill two seemingly incompatible properties: be able to filter latent domains in some layers (requiring a discrete gate), but also share parameters between related latent domains in other layers (smooth gates). The next section proposes how this can be resolved without requiring task relationships (Vandenhende et al., 2020) or outer optimization loops (Wu et al., 2018) through the use of sparseness.
3.2 SPARSE LATENT ADAPTERS (SLA)
We parameterize the gating function g with a small linear transformation W : C → RK that constitutes the pre-activation q=Wϕ(x) within the gates, where ϕ : X → C denotes an average pooling projection onto the channels.
A crucial choice is whether the activation for $q \in \mathbb{R}^K$ should map to a discrete space $G = \{0,1\}^K$ or a continuous $G = [0,1]^K$ in which the $V_k$ are shared. We propose a different strategy that lets gates be smooth when appropriate, but a threshold $\tau$ allows for sparse (or discrete) outputs $f_\tau(q) = [q - \tau(q)]_+$ with $[\cdot]_+ = \max(0, \cdot)$. Crucially $f_\tau$ can be solved in a differentiable manner (Martins & Astudillo, 2016) by sorting $q_1 \ge \dots \ge q_K$, solving $k^* = \max\{k \mid 1 + kq_k > \sum_{j\le k} q_j\}$ and computing $\tau = \big[\big(\sum_{j\le k^*} q_j\big) - 1\big]/k^*$.
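This thresholding step can be written compactly; the sketch below is one possible implementation of the sparse activation, and it reproduces the worked example in the next paragraph.

```python
import torch

def sparsemax(q: torch.Tensor) -> torch.Tensor:
    """Sparse activation of Martins & Astudillo (2016) along the last dimension."""
    z, _ = torch.sort(q, dim=-1, descending=True)             # z_1 >= ... >= z_K
    cumsum = z.cumsum(dim=-1)
    k = torch.arange(1, q.size(-1) + 1, device=q.device, dtype=q.dtype)
    support = 1 + k * z > cumsum                               # k* = max{k | 1 + k z_k > sum_{j<=k} z_j}
    k_star = support.sum(dim=-1, keepdim=True)
    tau = (cumsum.gather(-1, k_star - 1) - 1) / k_star.to(q.dtype)
    return torch.clamp(q - tau, min=0.0)                       # f_tau(q) = [q - tau(q)]_+

print(sparsemax(torch.tensor([0.1, 1.0, 0.5])))                # tensor([0.0000, 0.7500, 0.2500])
```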
Consider q = [0.1, 1.0, 0.5] for which sparse activation results in fτ (q) = [0.0, 0.75, 0.25] while softmax yields [0.202, 0.497, 0.301]. Sparse activation filters out q1, while sharing between q2 and q3. We may now define:
$$\mathrm{SLA}(x) \triangleq x + f_0(x) + \sum_{k=1}^{K}\big[f_\tau \circ W \circ \phi(x)\big]_k V_k(x), \tag{5}$$
where [·]k picks the k’th element of the gating sequence. To the best of our knowledge sparse activation strategies were never previously employed for expert models in computer vision and have so far been restricted to the NLP setting (Deng et al., 2017; Peters et al., 2019). Note SLA generalizes residual adaption (Rebuffi et al., 2017; 2018), which is recovered by setting K= 1.
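Putting the pieces together, a minimal PyTorch sketch of one SLA layer (eq. 5) could look as follows. It reuses the sparsemax sketch above, omits batch normalization and other details of the actual ResNet26 blocks, and its sizes are illustrative assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class SLALayer(nn.Module):
    """Frozen 3x3 convolution f0 plus K sparsely gated 1x1 corrections V_k."""
    def __init__(self, channels: int, K: int = 2):
        super().__init__()
        self.f0 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)  # pretrained, kept fixed
        self.f0.requires_grad_(False)
        self.V = nn.ModuleList(
            [nn.Conv2d(channels, channels, 1, bias=False) for _ in range(K)])
        self.W = nn.Linear(channels, K)        # gating pre-activation on pooled channels

    def forward(self, x):
        q = self.W(F.adaptive_avg_pool2d(x, 1).flatten(1))    # (B, K) pre-activations
        g = sparsemax(q)                                       # sparse gates, rows sum to 1
        out = x + self.f0(x)
        for k, V_k in enumerate(self.V):
            out = out + g[:, k].view(-1, 1, 1, 1) * V_k(x)
        return out
```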
While gating is subject to complex interactions such as negative transfer (Rosenbaum et al., 2019), our ablations in Table 5 clearly show that taking a sparse perspective – which allows the model to assume either continuous or discrete forms – outperforms the alternative of a priori fixing either smoothness through self-attention (Lin et al., 2017b), or discrete Gumbel-based sampling (Jang et al., 2016). Note this choice between discrete (Veit & Belongie, 2018; Guo et al., 2019b) and continuous mechanisms (Shazeer et al., 2017; Sun et al., 2019a; Wang et al., 2019) delineates previous work that employs differentiable gates.
A softmax-activated model can in principle also learn to suppress individual preactivation components by letting some qk go to −∞. This however requires either learning extra calibration parameters at every layer, defining a hard cutoff value (Shazeer et al., 2017) (thereby removing differentiability), or very large row-norms within the linear mapping W— a highly unlikely outcome given the several mechanisms found in state-of-the-art models (in particular weight decay, norm-penalties, or BN (Ioffe & Szegedy, 2015)) which act as direct counterforces to this.
4 EXPERIMENTS
We evaluate our proposed methods on three latent domain benchmarks: Office-Home, PACS, and DomainNet (c.f. Fig. 6, which shows example images from these benchmarks). The main goal here is not to compare to existing multi-domain or domain adaptation methods that these datasets were initially designed for, but to study our two central research questions: whether domain labels are useful for effectively learning over multiple domains, and whether one can learn such representations without domain labels.
We also examine a recent fairness benchmark (see Appendix F), and show that SLA improves robustness under single domain long-tailed distributions (Appendix G). All experiments were implemented in PyTorch (Paszke et al., 2017).1
Optimization In all experiments, we couple our method with a ResNet26 model pretrained on a downsized version of ImageNet that was used in previous work by Rebuffi et al. (2018). In SLA only gates and corrections are learned, the residual backbone f0 remains fixed at its initial parameters, which implicitly regularizes the model (Rebuffi et al., 2017). Training is carried out for 120 epochs using stochastic gradient descent (momentum parameter of 0.9), batch size of 128, weight decay of 10−4, and an initial learning rate of 0.1 (reduced by 1/10 at epochs 80, 100).
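For concreteness, the optimization settings above translate to roughly the following training loop. This is a sketch: `model` and `train_loader` are assumed to exist, and only the SLA gates and corrections carry gradients since the backbone is frozen.

```python
import torch

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.1, momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80, 100], gamma=0.1)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(120):
    for images, labels in train_loader:   # batch size 128, standard augmentations
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()
    scheduler.step()
```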
All experiments follow the preprocessing of Rebuffi et al. (2017; 2018), alongside standard augmentations such as normalization, random cropping, etc. Accuracies are averaged over five seeds.
Increasing the number of corrections K within SLA results in small, consistent performance gains. As K = 2 already represents a solid boost from the baseline of having no adapters, we focus on this result in the main part, and report results for higher K alongside variances in Appendix C.
Office-Home The underlying data contains a variety of object classes (alarm clock, backpack, etc.) among four domains: art, clipart, product, and real world (Venkateswara et al., 2017). In Table 2 we show results for d-supervised multi-domain (MD) approaches: RA (Rebuffi et al., 2018), domain-adversarial learning (Ganin et al., 2016) and a baseline of 4×ResNet26, one for each domain. For latent domain (LD) baselines, we then learn a single ResNet26, this time as a latent domain model over all domains. Next, we couple SLA with the very same ResNet26.
Learning a single ResNet26 over latent domains with no access to d-labels significantly harms performance. This problem is not addressed by simply increasing the depth of the network: while accuracy improves slightly, a ResNet56 exhibits the same performance losses — in particular on the latent domains product (P) and real world (R).
While residual adaptation (RA) (Rebuffi et al., 2018) was shown to work extremely well in many multi-domain scenarios, performance here is sub-par, regardless of whether it accesses d (MD: one Vd per-domain) or not (LD). This likely results from linear modules being reserved for each d when using annotations, enabling no native cross-domain sharing of parameters. When d is hidden on the other hand, the model is forced to share a single linear adaptation module V between all four hidden domains, without the flexible gating we propose in SLA.
Learning annotations through latent domain clustering and coupling this with domain-adversarial gradient reversal as in MMLD (Matsuura & Harada, 2020) increases performance relative to its d-annotated counterpart (Ganin et al., 2016). The increase is modest however, likely because enforcing domain-invariance on the gradient level negatively impacts the model's ability to discriminate between classes (Wang et al., 2020). Another related baseline is MLFN (Chang et al., 2018) which builds on ResNeXt (Xie et al., 2017) to define a latent-factor architecture that accounts for multi-modality in data. Crucially, where our method is fine-grained and shares convolutions at every layer, MLFN instead enables and disables entire network blocks, allowing us to outperform it.
1Code is available at github.com/VICO-UoE/LatentDomainLearning.
SLA outperforms the currently available latent domain models by a consistent margin, and increases UAcc by 12.79% relative to ResNet26. Best performance is obtained when K = D, with performance decreasing slightly for K > D due to overfitting of larger domains (see Appendix C).
PACS The second experiment examines performance on the PACS dataset (Li et al., 2017). Crucially PACS domains (art, cartoon, photo, sketch) differ more markedly from one another (c.f. examples in Fig. 6), hence constituting an interesting latent domain problem.
Even for more distinct domains as in PACS, results in Table 3 show that SLA improves over existing baselines. The largest gains occur on smaller domains (e.g. art), where standard models suppress underrepresented parts of the distribution (see additional discussion on imbalanced distributions in Appendix G). Our method again surpasses the accuracy of 4×ResNet26, while requiring a fraction of the total parameters (∼ 9.7 mil for K = 5 vs. ∼ 24.8 mil). The performance of SLA again continues to increase with larger K (see Appendix C).
The performance increase from using a latent domain-adversarial approach (Matsuura & Harada, 2020) versus using domain-annotations (Ganin et al., 2016) confirms that learning domains alongside the rest of the network can be a better strategy than trusting in annotations. Our approach again improves over this, without requiring a clustering stage as in MMLD.
Results for k-means (using D = 4 centers and clustered on the feature level) and subsequent finetuning show that a two-stage strategy is suboptimal. This is not surprising since, similar to d-supervision via gd in Φ of eq. (3), clustering learns fixed switches that get used across all layers. In contrast to this, in SLA we flexibly share or separate features individually at every layer (c.f. qualitative results in Fig. 3), synergizing only where appropriate.
DomainNet We also evaluate models on a large-scale benchmark called DomainNet (Peng et al., 2019a). This dataset contains 518 447 images from six domains (clipart, painting, photos, sketch, infographics, and quickdraw), with a total of |Y| = 345 object classes. The optimization settings remain unchanged from those in previous sections.
Results are shown in Table 4. MLFN performs best on quickdraw, a domain that differs visibly from others (c.f. Fig. 6 for examples from each domain), and having entire network blocks dedicated to it seems to benefit performance. On all remaining domains, SLA outperforms existing models,
regardless of whether they were designed specifically for multi-domain problems, such as RA, or whether they are much deeper/parameter-intensive (ResNet56).
Qualitative analysis We (i) compare global statistics of Office-Home and PACS domains as well as (ii) their per-layer treatment within SLA; (iii) analyze sparse gating, (iv) representations learned by SLA, and show that (v) our module shares between geometric properties (shape, pose, etc.).
i) Fig. 2: average cosine similarities of per-domain gating vectors g∈GL across l= 1, . . . , L layers of ResNet26 show that Office-Home domains differ less than those in PACS.
ii) Fig. 3: layerwise measurements of Corr[gl(x), gl(x′)] for x, x′ drawn from differing domains d ≠ d′ for Office-Home. If inter-domain correlation is high, then similar corrections Vk are responsible for processing samples from two domains. Across top layers of the network there is little correlation, presumably as low-level information associated with each domain is processed independently. In the mid to bottom stages correlation increases: these layers are typically associated with higher-order features (Yosinski et al., 2014; Mahendran & Vedaldi, 2016; Asano et al., 2020), and since label spaces are shared between latent domains, similar object-level features are required to classify objects into their respective categories.
iii) Fig. 4: sparse gates have the flexibility to either output singular activations (i.e. become fully discrete) or all non-zero values (a continuous gate). We measure the per-layer sparsity Ex∼Pd [K− ‖gl(x)‖0]/(K − 1) where ‖ · ‖0 counts values different from zero, finding sparsity of SLA to vary across model depth. Interestingly after each downsampling operation SLA tends to be relatively sparse, followed by a dense gate, then again a sparse one, and so forth. The model thus utilizes the extra flexibility resulting from sparse gates.
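The sparsity statistic used here can be computed directly from the per-layer gate activations; a possible sketch follows, where the tolerance eps is an implementation choice.

```python
import torch

def gate_sparsity(gates: torch.Tensor, eps: float = 1e-8) -> float:
    """E[K - ||g||_0] / (K - 1) for a batch of gate vectors of shape (B, K)."""
    K = gates.size(-1)
    nnz = (gates.abs() > eps).sum(dim=-1).float()   # ||g||_0 per sample
    return ((K - nnz) / (K - 1)).mean().item()
```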
Due to PACS domains being relatively distinctive, the dataset is an interesting candidate for additional analysis in (iv) and (v) of how sparse adaptation handles the different ground-truth domains.
iv) Fig. 5 (left): gate vectors g ∈ GL for samples from all four domains in PACS visualized by their principal components. SLA exhibits an intuitive clustering of human-annotated PACS domains: visually similar art and photo (•,•) cluster together. The manifold describing sketches (•) is arguably more primitive than those of the other domains, and indeed only maps to a small region. Cartoon (•) lies somewhere between sketches and real world images. This matches intuition: a cartoon is, more or less, just a colored sketch.
Fig. 5 also highlights one sample that shows an elephant that SLA places among the cartoon (•) domain, but which has been assigned a ground-truth domain label of photo (•) in the PACS dataset. The ground-truth label seems to have been annotated in error, but different from approaches that use d-supervision, our SLA processes latent domains on-the-fly and is therefore not irritated by this.
v) Fig. 5 (right): pairs of samples with similar gates. This shows that latent domains are indicative of more than ground-truth domain labels and extend to geometric similarities: pose, color, etc. of the samples are visibly related. Compare in particular the poses of elephants/dogs (second/third row).
5 CONCLUSION
In this paper we explored two questions: (i) whether domain associations are required for learning effective models over multiple visual domains and (ii) how multi-domain models may best be learned without depending on manually curated domain labels.
As has been shown, the performance of existing models does degrade without domain labels, raising doubts about their suitability for realistic problems that involve diverse data sources. As a remedy, we proposed a novel adaptation strategy which reclaims (and often exceeds) lost accuracy on latent domains, benefiting several problems where some notion of a domain (but no annotation) exists.
ACKNOWLEDGEMENT
HB is supported by the EPSRC programme grant Visual AI EP/T028572/1. TH was supported by EPSRC grant EP/R026173/1.
A DATASETS
Fig. 6 shows examples from the latent domain benchmarks evaluated in Section 4. The selected images have equivalent classes yd = yd′ ∈ Y (for example chair for Office-Home), but different domains (e.g. d = {art, clipart, product, real world}). These examples show that data from different domains often contain very different visual characteristics (compare e.g. photo vs. sketch for PACS), even when the object is the same. At the same time, other domains are more alike (e.g. art and photo), indicating that different amounts of sharing between per-domain parameters are required, which in SLA is facilitated by its gating mechanism.
B RELATED WORK
Multi-domain learning relates most closely to our work. The state-of-the-art methods introduce small convolutional corrections in residual networks to account for individual domains (Rebuffi et al., 2017; 2018), which was recently extended to obtain efficient multi-task models for related language tasks Stickland & Murray (2019). Other work makes use of task-specific attention mechanisms (Liu et al., 2019a), attempts to scale task-specific losses (Kendall et al., 2018), or addresses tasks at the level of gradients (Chen et al., 2017). Crucially, these approaches all rely firmly on domain labels.
Our work is loosely related to learning universal representations (Bilen & Vedaldi, 2017), which was used as a guiding principle in designing more transferable models (Tamaazousti et al., 2019). However, these works also assume the presence of domain labels. Multimodal learning does not make this assumption, and was shown to benefit from accounting for latent semantic factors to match images (Chang et al., 2018), or from normalizing data in separate groups (Deecke et al., 2019). As we show in our experiments (see Section 4), latent domain learning however benefits from more customized solutions than these.
The proposed module gives rise to a differentiable dynamic network architecture, studied e.g. for reinforcement learning (Zoph & Le, 2017; Pham et al., 2018), Bayesian optimization (Kandasamy et al., 2018), or when adapting to new tasks (Mallya et al., 2018; Rosenfeld & Tsotsos, 2018). For such architectures, two components are commonly used: discrete Gumbel-based sampling (Jang et al., 2016), e.g. leveraged in dynamic computer vision architectures (Veit & Belongie, 2018; Sun et al., 2019a), or continuous self-attentive approaches (Lin et al., 2017b), which have been used successfully to scale expert models (Jacobs et al., 1991; Jordan & Jacobs, 1994) to large problem spaces (Shazeer et al., 2017; Wang et al., 2019).
From the perspective of algorithmic fairness, a desirable model property is to ensure consistent predictive equality across different identifiable subgroups in data (Zemel et al., 2013; Hardt et al., 2016; Fish et al., 2016). This relates to one of the goals in latent domain learning: to limit implicit model bias towards large domains, and improve robustness on small domains. Recent work explores connections between models and empirical fairness for visual recognition (Bagdasaryan et al., 2019; Hooker et al., 2020; Wang et al., 2020), different from our experiments however (see Appendix F) they focus their analysis on a setting in which annotations for protected attributes are available.
C VARIATION OF RESULTS
Fig. 7 displays variances of accuracies recorded over ten random initializations on Office-Home (left) and PACS (right). We generally found SLA to be robust to different optimization settings, and as a result observed variances are relatively low across experiments.
Larger K brings an improvement of around 0.5-1% in performance at the expense of a linear increase in learnable parameters (c.f. next section). While accuracy is improved by setting K > 2, gains appear to saturate in line with previous observations around network width (Xie et al., 2017).
D MEMORY REQUIREMENTS
In SLA every layer contains O(K|C|+K|C|2) parameters to parametrize gates and corrections Vk, respectively. This is however an extremely modest requirement, in particular because f0 stays fixed: while a ResNet26 contains ∼ 6.2 mil learnable parameters, even when setting K= 5 within SLA it has just 3.5 mil free parameters, and is a fraction of the number of parameters needed to parametrize four ResNet26 (around 24.8 mil parameters).
Note also that the complexity of solving sparse gates in SLA scales as O(K logK), a negligible increase given the small K required in our method.
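The quoted parameter counts follow from a simple per-layer budget; a small sketch of the bookkeeping is given below (bias terms are omitted, matching the simplified SLA sketch above).

```python
def sla_params_per_layer(channels: int, K: int) -> int:
    """Learnable parameters of one SLA layer: K*C for the gate, K*C^2 for the 1x1 corrections."""
    return K * channels + K * channels ** 2
```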
E ABLATION
Replacing sparse gating within SLA registers a drop in performance, regardless of whether smooth or discrete mechanisms are used. Accuracies for soft and straight-through Gumbel-softmax sampling (Jang et al., 2016) were on par; we report straight-through sampling here.
We also ran experiments where we did not fix the residual backbone f0 but updated its parameters alongside the learning of SLA. In line with what Rebuffi et al. (2017) report, this lead to overfitting and performance dropped to UAcc = 73.53.
F FAIRNESS
Recent work elevated the role of small subgroups in data and examined model fairness on CelebA (Bagdasaryan et al., 2019; Wang et al., 2020; Hooker et al., 2020). Because such subgroups may be interpreted as constituting an individual latent domain component Pd, they are an interesting candidate to evaluate our purpose-built SLA on.
The benchmark contains different labeled attributes (e.g. “brown hair”, “glasses”), and is modified from the original dataset by hiding gender labels. Models are evaluated on all 39 remaining
Table 6: Average precision and bias amplification of SLA on the CelebA fair attribute recognition benchmark (Wang et al., 2020).
            ResNet18   + SLA            ResNet34   + SLA            ResNet50   + SLA
mAP (↑)     71.76      73.22 (+1.46)    71.33      73.98 (+2.65)    74.52      75.03 (+0.51)
BA (↓)      0.025      0.014            0.022      0.009            0.012      0.008
(Figure 8 plot: x-axis skew from 0.5 to 1.0, y-axis change in AP [%].)
Figure 8: Change in AP between ResNet18 and ResNet18-SLA for different gender skews in CelebA attributes.
attributes, which subsequently experience varying amounts of gender skew. Framed as a latent domain problem we have d={female,male}, but models have no access to this information. The images used are the entire Aligned&Cropped subset (Liu et al., 2015) over which we finetune residual models, replacing only the fully-connected layer of the network. We use the optimization settings introduced in Section 4 for 70 epochs with reductions at epochs 30, 40, and 50, selecting the best model on the validation split. This experimental setup is identical to previous work on empirical fairness (Wang et al., 2020; Ramaswamy et al., 2020), which however – different from our work – focused on learning models that have access to the gender-attribute d.
We evaluate per-attribute accuracy using mean average precision (mAP) and report bias amplification (BA) (Zhao et al., 2017). This compares the propensity of a model to make positive predictions (i.e. f exceeds some threshold t+ ∈ [0, 1]) in the gender g∗y that appears most frequent within attribute y, compared to the true counted ratio of positive examples y+:
$$\mathrm{BA}[f] = \mathbb{E}_{x\sim P}\big[\mathbf{1}_{f(x) > t_+} \mid g = g^*_y\big] - \mathbb{E}_{(x,y)\sim P}\big[\mathbf{1}_{y = y_+} \mid g = g^*_y\big], \tag{6}$$
where t+ is optimized for on the validation split. For example if 60% of male examples are wearing glasses but under the model this is raised to a total of 65%, then bias is amplified by BA = 0.05.
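A sketch of how this quantity can be computed for a single attribute is given below; the variable names and array layout are our own assumptions, following the reading of eq. (6) given in the text.

```python
import numpy as np

def bias_amplification(scores, labels, genders, majority_gender, t_pos):
    """Positive-prediction rate within the attribute's majority gender minus the
    true positive rate there."""
    scores, labels, genders = map(np.asarray, (scores, labels, genders))
    m = genders == majority_gender
    return (scores[m] > t_pos).mean() - (labels[m] == 1).mean()

# e.g. 60% of male examples wear glasses but the model predicts 65% -> BA = 0.05
```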
We report performance for ResNet18, ResNet34, and ResNet50 in Table 6 and compare this to the same model with SLA inserted. SLA consistently raises both mAP and reduces bias, indicating that it relies less on spurious correlations in data to formulate its predictions.
In Fig. 8 we compare per-attribute skew toward either female or male (whichever is more frequent) to the gain in performance from ResNet18 to the same model but with SLA inserted. We observe a clear trend here, whereby SLA is able to raise performance the most in those attributes that experience the largest amounts of skew.
G LONG-TAILED RECOGNITION
Standard models often experience difficulty when some classes are heavily underrepresented. This problem has recently been studied in long-tailed recognition (Liu et al., 2019b; Cao et al., 2019) with
benchmarks that modify CIFAR-10 and CIFAR-100 to an imbalanced version by dropping some classes (e.g. 6-10 for CIFAR-10) (Buda et al., 2018). The severity of the imbalance is described via the ratio ρ = nmax/nmin between the largest and smallest classes.
Long-tailed distributions may be viewed as containing an underrepresented latent component with π = 1/(1 + ρ), and previous results (c.f. Section 4) that fortified small latent domains within P motivate us to evaluate the imbalance setting more closely here.
Since our strategy is architecture-based, it can be combined with the most recent state-of-the-art (loss-based) techniques for long-tailed recognition: a label-distribution-aware margin loss with deferred reweighting (Cao et al., 2019), or reducing contributions from well-classified examples as in focal losses (Lin et al., 2017a). As Table 7 shows, adaptation via sparse gates acts as a regularizer on the underlying ResNet26, and consistently improves performance on long-tail benchmarks. | 1. What is the focus and contribution of the paper on multi-domain learning?
2. What are the strengths of the proposed approach, particularly in terms of learning sparse domain gates without domain labels?
3. What are the weaknesses of the paper regarding its claims and experiments?
4. Do you have any concerns about how the method learns domain-related gating without domain labels?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The authors present the latent domain learning aproach to multi-domain learning without domain annotations by introducing sparse latent adaptation (SLA) for learning sparse domain gates without domain labels.
Review
The paper is well written, easy to follow and the motivations and contributions are clear. Multiple datasets are used for evaluation against multiple SOTA approaches.
SLA is referenced before defining it in 3.1
I am confused on how g_k is learned in Eq 4. How does it learn domain related gating without domain labels? Figure 5 shows that in PACS some domains seem to be learned (eg sketch) while others may not have been learned. Could this suggest that sketch images required additional parameterization to increase performance due to task difficulty? It is mentioned that the goal is not to recover domain labels, but simply adding additional parameterization may be similarly beneficial in single domain data. Have the authors tested whether the performance improvement extends to single domain settings? The results are interesting and I feel the paper could benefit from a deeper analysis on what is being learned by the "domain" gates. If they are not domain specific is this really multi-domain learning?. Also, how are the number of gates (K) chosen?
I would prefer a bit more examination on the results and what is being learned as well as comparisons to only learning the gating parameters and training the whole model together with the gates. |
ICLR | Title
Visual Representation Learning over Latent Domains
Abstract
A fundamental shortcoming of deep neural networks is their specialization to a single task and domain. While multi-domain learning enables the learning of compact models that span multiple visual domains, these rely on the presence of domain labels, in turn requiring laborious curation of datasets. This paper proposes a less explored, but highly realistic new setting called latent domain learning: learning over data from different domains, without access to domain annotations. Experiments show that this setting is challenging for standard models and existing multi-domain approaches, calling for new customized solutions: a sparse adaptation strategy is formulated which enhances performance by accounting for latent domains in data. Our method can be paired seamlessly with existing models, and benefits conceptually related tasks, e.g. empirical fairness problems and long-tailed recognition.
N/A
A fundamental shortcoming of deep neural networks is their specialization to a single task and domain. While multi-domain learning enables the learning of compact models that span multiple visual domains, these rely on the presence of domain labels, in turn requiring laborious curation of datasets. This paper proposes a less explored, but highly realistic new setting called latent domain learning: learning over data from different domains, without access to domain annotations. Experiments show that this setting is challenging for standard models and existing multi-domain approaches, calling for new customized solutions: a sparse adaptation strategy is formulated which enhances performance by accounting for latent domains in data. Our method can be paired seamlessly with existing models, and benefits conceptually related tasks, e.g. empirical fairness problems and long-tailed recognition.
1 INTRODUCTION
Datasets have been a major driving force behind the rapid progress in computer vision research in the last two decades. They provide a testbed for developing new algorithms and comparing them to existing ones. However, datasets can also narrow down the focus of research into overspecialized solutions and impede developing a broader understanding of the world.
In recent years this narrow scope of datasets has been widely questioned (Torralba & Efros, 2011; Tommasi et al., 2017; Recht et al., 2019) and addressing some of these limitations has become a very active area of research. Two actively studied themes to investigate broader learning criteria are multi-domain learning (Nam & Han, 2016; Bulat et al., 2019; Schoenauer-Sebag et al., 2019) and domain adaptation (Ganin et al., 2016; Tzeng et al., 2017; Hoffman et al., 2018; Xu et al., 2018; Peng et al., 2019a; Sun et al., 2019b). While multi-domain techniques focus on learning a single model that can generalize over multiple domains, domain adaptation techniques aim to efficiently transfer the representations that are learned in one dataset to another.
Related themes have also been studied in domain generalization (Li et al., 2018; 2019b;a; Gulrajani & Lopez-Paz, 2020) and continual learning (Kirkpatrick et al., 2017; Lopez-Paz & Ranzato, 2017; Riemer et al., 2019), where the focus lies on learning representations that can generalize to unseen domains, and to preserve knowledge acquired from previously seen tasks, respectively.
While there exists no canonical definition for what exactly a visual domain is, previous works in multi-domain learning assume that different subsets of data exist, with some defining characteristic that allows them to be separated from each other. Each subset, indexed by d = 1, . . . , D, is assigned to a pre-defined visual domain, and vice-versa multi-domain methods then use such domain associations to parameterize their representations and learn some pθ(y|x, d). In some cases domains are intuitive and their annotation straightforward. Consider a problem where images have little visual relationship, for example joint learning of Omniglot handwritten symbols (Lake et al., 2015) and CIFAR-10 objects (Krizhevsky & Hinton, 2009). In this case, it is safe to assume that encoding an explicit domain-specific identifier into pθ is a good idea, and results in the multi-domain literature provide clear evidence that it is beneficial to do so (Rebuffi et al., 2018; Liu et al., 2019a; Guo et al., 2019a; Mancini et al., 2020).
The assumption that domain labels are always available has been widely adopted in multi-domain learning; however this assumption is not without difficulty. For one, unless the process of domain annotation is automated due to combining existing datasets as in e.g. Rebuffi et al. (2017), their manual collection, curation, and domain labeling is very laborious.
And even if adequate resources exist, it is often difficult to decide the optimal criteria for the annotation of d: some datasets contain sketches, paintings and real world images (Li et al., 2017), others images captured during day or night (Sultani et al., 2018). Automatically collected datasets (Thomee et al., 2016; Sun et al., 2017) contain mixtures of low/high resolution images, taken with different cameras by amateurs/professionals. There is no obvious answer which of these should form their own distinct domain subset.
Moreover, the work of Bouchacourt et al. (2018) considers semantic groupings of data: they show that dividing data into subcategories, such as size, shape, etc., and incorporating this information into the model benefits performance. Should one therefore also encode the number of objects into domains, or their color, shape, and so on?
Given the relatively loose requirement that domains are supposed to be different while related in some sense (Pan & Yang, 2009), these examples hint at the difficulty of deciding whether domains are needed, and – if the answer to that is yes – what the optimal domain criteria are. And note that even if such assignments are made very carefully for some problem, nothing guarantees that they will transfer effectively to some other task.
This paper carefully investigates this ambiguity and studies two central questions:
1. Are domain labels always optimal for learning multi-domain representations?
2. How can models best be learned that generalize well over visually diverse domains, without domain labels?
To study this problem, we introduce a new setting (c.f. Fig. 1) in which models are learned over multiple domains without domain annotations — latent domain learning for short.
While latent domain learning is a highly practical research problem in the context of transfer learning, it poses multiple challenges that have not been previously investigated in connection with deep visual representation learning. In particular, we find that the removal of domain associations leads to performance losses for standard architectures due to imbalances in the underlying distribution and different difficulty levels of the associated domain-level tasks.
We carry out a rigorous quantitative analysis that includes concepts from multi-domain learning (Rebuffi et al., 2018; Chang et al., 2018), and find that their performance benefits do not directly extend to latent domain learning. To account for this lost performance, we formulate a novel method called sparse latent adaptation (Section 3.2) which enables internal feature representations to dynamically adapt to instances from multiple domains in data, without requiring annotations for this. Moreover, we show that latent domain methods appear to benefit single domain data and real world tasks, such as fairness problems (Appendix F), and long-tailed recognition (Appendix G).
2 LATENT DOMAIN LEARNING
This section provides an overview over latent domain learning and contrasts it against other types of related learning problems, in particular multi-domain learning.
2.1 PROBLEM SETTING
When learning on multiple domains, the common assumption is that data is sampled i.i.d. from a mixture of distributions P_d with domain indices d = 1, . . . , D. Together, they constitute the data-generating distribution as P = ∑_d π_d P_d, where each domain is associated with a relative share π_d = N_d/N, with N the total number of samples and N_d those belonging to the d'th domain. In multi-domain learning, domain labels are available for all samples (Nam & Han, 2016; Rebuffi et al., 2017; 2018; Bulat et al., 2019), such that the overall data available for learning consists of D_MD = {(x_i, d_i, y_i)} with i = 1, . . . , N. In latent domain learning the information associating each sample x_i with a domain d_i is not available. As such, domain-conditioned predictors cannot be learned from sample-domain pairs (x_i, d_i), and one is instead forced to learn a single model f_θ over the latent domain dataset D_LD = {(x_i, y_i)}. While latent domain learning can include mutually exclusive classes and disjoint label spaces Y_1 ∪ · · · ∪ Y_D (as in long-tailed recognition, see Appendix G), we mainly focus on the setting of shared label spaces, i.e. Y_d = Y_d'. For example a dataset may contain images of dogs or elephants that can appear as either photos, paintings, or sketches.
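To make the distinction concrete, the sketch below contrasts the two dataset forms; the type aliases and helper function are illustrative and not part of any published implementation.

```python
from typing import List, Tuple

# Multi-domain learning: every sample carries a domain index d_i.
MultiDomainSample = Tuple[object, int, int]   # (x_i, d_i, y_i)
# Latent domain learning: the domain index is unavailable.
LatentDomainSample = Tuple[object, int]       # (x_i, y_i)

def to_latent_domain(dataset_md: List[MultiDomainSample]) -> List[LatentDomainSample]:
    """Simulate the latent domain setting by discarding domain annotations."""
    return [(x, y) for (x, _d, y) in dataset_md]
```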
Latent domains have previously attracted interest in the context of domain adaptation, where the lack of annotations was recovered through hierarchical Hoffman et al. (2012) and kernel-based clustering (Gong et al., 2013), via exemplar SVMs (Xu et al., 2014), or by measuring mutual information (Xiong et al., 2014). More recent work corrects batch statistics of domain adaptation layers using Gaussian mixtures (Mancini et al., 2018), or studies the shift from some source domain to a target distribution that contains multiple latent domains (Peng et al., 2019b; Matsuura & Harada, 2020). Latent domain learning however differs fundamentally from these works: Table 1 contains a comparison to existing transfer learning settings.
A common baseline in multi-domain learning is to finetune D models, one for each individual domain (Rebuffi et al., 2018; Liu et al., 2019a). This requires learning a large number of parameters and shares no parameters across domains, but can serve as a strong baseline to compare against. We show that in many cases, even when domains were carefully annotated, a dynamic latent domain approach can surpass the performance of such domain-supervised baselines (see Section 4).
2.2 OBSERVED VS. UNIFORM ACCURACY
Consider a problem in which the data is sampled i.i.d. from P = π_a P_{d_a} + π_b P_{d_b}, i.e. two hidden domains. When domain labels are not available in the data, a standard strategy is to treat all samples equally, and measure the observed accuracy:
OAcc[f] = E_{(x_i, y_i) ∼ P} [ 1_{y_f(x_i) = y_i} ],  (1)
where y_f denotes the class assigned to sample x_i by the model f, and y_i its corresponding label for training. The OAcc has a problematic property: if P consists of two imbalanced domains such that π_a ≥ π_b, then the performance on d_a dominates it. For example if d_a has a 90% overall share, and the model perfectly classifies this domain while obtaining 0% accuracy on d_b, then OAcc would still assume 0.9, hiding the underlying damage to domain d_b.
This motivates alternative formulations for latent domain learning, to anticipate (and account for) imbalanced domains in data. If it is possible to identify some semantic domain labeling (as typically included in multi-domain/domain adaptation benchmarks), one can compare performances across individual subgroups. This allows picking up on domain-specific performance losses which traditional metrics (such as OAcc) fail to capture.
Where this is possible, we therefore propose to also measure latent domain performance in terms of uniform accuracy which decouples accuracies from relative ground-truth domain sizes:
UAcc[f] = (1/D) ∑_{d=1}^{D} E_{(x_i, y_i) ∼ P_d} [ 1_{y_f(x_i) = y_i} ].  (2)
Returning to the above example, a uniform measurement reflects the model's lack of performance on d_b as UAcc = 0.5. Once again, note that while ground-truth domain annotations are required in order to compute uniform accuracy, these are never used to train latent domain models.
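As a minimal sketch of both metrics, the snippet below assumes predictions, class labels, and evaluation-only domain indices are available as NumPy arrays; variable names are illustrative.

```python
import numpy as np

def observed_accuracy(preds: np.ndarray, labels: np.ndarray) -> float:
    """Eq. (1): every sample counts equally, so large domains dominate the score."""
    return float(np.mean(preds == labels))

def uniform_accuracy(preds: np.ndarray, labels: np.ndarray, domains: np.ndarray) -> float:
    """Eq. (2): average per-domain accuracies, decoupling the metric from domain sizes.
    Domain indices are only used for evaluation, never for training."""
    per_domain = [np.mean(preds[domains == d] == labels[domains == d])
                  for d in np.unique(domains)]
    return float(np.mean(per_domain))

# Two imbalanced hidden domains: a 90% share classified perfectly, a 10% share fully misclassified.
preds   = np.array([1] * 90 + [0] * 10)
labels  = np.array([1] * 90 + [1] * 10)
domains = np.array([0] * 90 + [1] * 10)
assert observed_accuracy(preds, labels) == 0.9
assert uniform_accuracy(preds, labels, domains) == 0.5
```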
3 METHODS
To enable robust learning in the newly proposed setting, we formulate a novel module called sparse latent adaptation which can adaptively account for latent domains. Section 3.1 reviews adaptation strategies popular in the multi-domain context, which our method extends (and generalizes).
3.1 LATENT ADAPTATION
When domain labels d are available (not the case in latent domain learning), one strategy established by Rebuffi et al. (2017) is to modulate networks by constraining the layerwise transformation of residual networks (He et al., 2016) Φ(x) = x + f(x) to allow at most a linear change V_d per domain from some pretrained mapping Φ_0 (with f_0 in every layer), whereby Φ(x) − Φ_0(x) = V_d x. Note the slight abuse of notation here in letting x denote a feature map with channels C. Rearranging this yields:
Φ(x, d) = x + f_0(x) + ∑_{d=1}^{D} g_d V_d(x),  (3)
with a domain-supervised switch that assigns corrections to domains, i.e. g_d = 1 for the d associated with x and 0 otherwise. Each V_d is parametrized through 1x1 convolutions, and f_0 denotes a shared 3x3 convolution obtained e.g. on ImageNet (Deng et al., 2009). This builds on the assumption that models with strong general-purpose representations require minimal changes to adapt to new tasks (Bilen & Vedaldi, 2017), making learning each V_d sufficient, while f_0 remains as is. Such adaptation strategies have been successfully used in few shot learning (Li et al., 2021) and NLP (Stickland & Murray, 2019) to restrict the number of learnable parameters there.
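The sketch below illustrates eq. (3) as a PyTorch module; the layer sizes and the per-sample indexing of the hard switch are our own illustrative assumptions rather than the exact published implementation.

```python
import torch
import torch.nn as nn

class DomainSupervisedAdapter(nn.Module):
    """Residual adaptation with a hard, label-driven switch g_d as in eq. (3)."""
    def __init__(self, channels: int, num_domains: int):
        super().__init__()
        # Shared, pretrained 3x3 convolution f0 (kept frozen during adaptation).
        self.f0 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        for p in self.f0.parameters():
            p.requires_grad = False
        # One 1x1 correction V_d per annotated domain.
        self.corrections = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False)
            for _ in range(num_domains)
        )

    def forward(self, x: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
        # d holds one domain index per sample in the batch.
        correction = torch.stack([self.corrections[int(di)](xi.unsqueeze(0)).squeeze(0)
                                  for xi, di in zip(x, d)])
        return x + self.f0(x) + correction
```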
In latent domain learning access to d is removed, resulting in two new challenges: we have no a priori information about the right number of corrections {Vd}, and we cannot use d to decide which one of these to apply.
To mitigate the lack of domain labels d, first we assume that input data is constituted by K latent distributions P_k. Second we propose to replace the switch g_d with a learnable gating mechanism g_1(x), . . . , g_K(x) that assigns each sample x to latent domains as follows:

Φ(x) = x + f_0(x) + ∑_{k=1}^{K} g_k(x) V_k(x),  (4)
The gates g_k control which convolution is applied to which sample x, and correspond to a categorical variable over K categories, i.e. 0 ≤ g_k ≤ 1 and ∑_k g_k = 1. Note in particular how the parametric dependency of Φ on d is removed. How to best choose K is discussed in more detail in Section 4.
While we motivate our latent domain module from learning over multiple domains, the main goal is not to recover the domain labels annotated in some datasets. When optimizing some loss (standard cross-entropy in the classification case), there is no guarantee that the learned Vk will correspond to an annotated visual domain and many additional factors (shape, pose, color, etc.) can enter them as
well. Latent domain models are simply optimized to produce the lowest training error, and in fact seldom recover ground-truth domains (c.f. Fig. 5). Note the broader concept presented here may in principle also be incorporated with other multi-task concepts (Perez et al., 2018; Guo et al., 2019a), adaptation strategies however stand out due to their methodological simplicity.
Different options exist for parametrizing the gating function g : X → G ⊆ RK . An ideal gating mechanism for latent domain learning would fulfill two seemingly incompatible properties: be able to filter latent domains in some layers (requiring a discrete gate), but also share parameters between related latent domains in other layers (smooth gates). The next section proposes how this can be resolved without requiring task relationships (Vandenhende et al., 2020) or outer optimization loops (Wu et al., 2018) through the use of sparseness.
3.2 SPARSE LATENT ADAPTERS (SLA)
We parameterize the gating function g with a small linear transformation W : C → R^K that constitutes the pre-activation q = Wϕ(x) within the gates, where ϕ : X → C denotes an average pooling projection onto the channels.
A crucial choice is whether the activation for q ∈ R^K should map to a discrete space G = {0, 1}^K or a continuous G = [0, 1]^K in which the V_k are shared. We propose a different strategy that lets gates be smooth when appropriate, but where a threshold τ allows for sparse (or discrete) outputs f_τ(q) = [q − τ(q)]_+ with [·]_+ = max(0, ·). Crucially, f_τ can be solved in a differentiable manner (Martins & Astudillo, 2016) by sorting q_1 ≥ · · · ≥ q_K, solving k* = max{k | 1 + k q_k > ∑_{j≤k} q_j}, and computing τ = [(∑_{j≤k*} q_j) − 1]/k*.
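A minimal NumPy sketch of this sparse activation (the sparsemax of Martins & Astudillo, 2016) is given below; it reproduces the worked example that follows.

```python
import numpy as np

def sparsemax(q: np.ndarray) -> np.ndarray:
    """Project pre-activations q onto the simplex, allowing exact zeros."""
    q_sorted = np.sort(q)[::-1]                 # q_(1) >= ... >= q_(K)
    cumsum = np.cumsum(q_sorted)
    ks = np.arange(1, len(q) + 1)
    support = 1 + ks * q_sorted > cumsum        # condition 1 + k*q_k > sum_{j<=k} q_j
    k_star = ks[support].max()
    tau = (cumsum[k_star - 1] - 1) / k_star
    return np.maximum(q - tau, 0.0)

print(sparsemax(np.array([0.1, 1.0, 0.5])))     # -> [0.   0.75 0.25]
```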
Consider q = [0.1, 1.0, 0.5], for which sparse activation results in f_τ(q) = [0.0, 0.75, 0.25] while softmax yields [0.202, 0.497, 0.301]. Sparse activation filters out q_1, while sharing between q_2 and q_3. We may now define:
SLA(x) := x + f_0(x) + ∑_{k=1}^{K} [ f_τ ∘ W ∘ ϕ(x) ]_k V_k(x),  (5)
where [·]_k picks the k'th element of the gating sequence. To the best of our knowledge, sparse activation strategies have not previously been employed for expert models in computer vision and have so far been restricted to the NLP setting (Deng et al., 2017; Peters et al., 2019). Note SLA generalizes residual adaptation (Rebuffi et al., 2017; 2018), which is recovered by setting K = 1.
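A minimal PyTorch sketch of an SLA layer (eq. 5) is shown below; the module structure, names, and defaults are illustrative assumptions and not the authors' exact implementation. Setting num_latent = 1 recovers plain residual adaptation, matching the note above.

```python
import torch
import torch.nn as nn

def sparsemax_t(q: torch.Tensor) -> torch.Tensor:
    """Batched sparsemax over the last dimension (differentiable almost everywhere)."""
    q_sorted, _ = torch.sort(q, dim=-1, descending=True)
    cumsum = q_sorted.cumsum(dim=-1)
    ks = torch.arange(1, q.size(-1) + 1, device=q.device, dtype=q.dtype)
    support = (1 + ks * q_sorted > cumsum).to(q.dtype)
    k_star = support.sum(dim=-1, keepdim=True)          # size of the support set
    tau = (cumsum.gather(-1, k_star.long() - 1) - 1) / k_star
    return torch.clamp(q - tau, min=0.0)

class SLA(nn.Module):
    """Sparse Latent Adapter: x + f0(x) + sum_k g_k(x) V_k(x)."""
    def __init__(self, channels: int, num_latent: int = 2):
        super().__init__()
        self.f0 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)  # frozen backbone conv
        for p in self.f0.parameters():
            p.requires_grad = False
        self.corrections = nn.ModuleList(
            nn.Conv2d(channels, channels, 1, bias=False) for _ in range(num_latent)
        )
        self.gate = nn.Linear(channels, num_latent)      # W : C -> R^K

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = x.mean(dim=(2, 3))                      # phi: average-pool onto channels
        g = sparsemax_t(self.gate(pooled))               # per-sample gates, many exactly zero
        out = x + self.f0(x)
        for k, vk in enumerate(self.corrections):
            out = out + g[:, k].view(-1, 1, 1, 1) * vk(x)
        return out
```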
While gating is subject to complex interactions such as negative transfer (Rosenbaum et al., 2019), our ablations in Table 5 clearly show that taking a sparse perspective – which allows the model to assume either continuous or discrete forms – outperforms the alternative of a priori fixing either smoothness through self-attention (Lin et al., 2017b), or discrete Gumbel-based sampling (Jang et al., 2016). Note this choice between discrete (Veit & Belongie, 2018; Guo et al., 2019b) and continuous mechanisms (Shazeer et al., 2017; Sun et al., 2019a; Wang et al., 2019) delineates previous work that employs differentiable gates.
A softmax-activated model can in principle also learn to suppress individual preactivation components by letting some qk go to −∞. This however requires either learning extra calibration parameters at every layer, defining a hard cutoff value (Shazeer et al., 2017) (thereby removing differentiability), or very large row-norms within the linear mapping W— a highly unlikely outcome given the several mechanisms found in state-of-the-art models (in particular weight decay, norm-penalties, or BN (Ioffe & Szegedy, 2015)) which act as direct counterforces to this.
4 EXPERIMENTS
We evaluate our proposed methods on three latent domain benchmarks: Office-Home, PACS, and DomainNet (c.f. Fig. 6, which shows example images from these benchmarks). The main goal here is not to compare to existing multi-domain or domain adaptation methods that these datasets were initially designed for, but to study our two central research questions: whether domain labels are useful for effectively learning over multiple domains, and whether one can learn such representations without domain labels.
We also examine a recent fairness benchmark (see Appendix F), and show that SLA improves robustness under single domain long-tailed distributions (Appendix G). All experiments were implemented in PyTorch (Paszke et al., 2017).1
Optimization In all experiments, we couple our method with a ResNet26 model pretrained on a downsized version of ImageNet that was used in previous work by Rebuffi et al. (2018). In SLA only gates and corrections are learned, the residual backbone f0 remains fixed at its initial parameters, which implicitly regularizes the model (Rebuffi et al., 2017). Training is carried out for 120 epochs using stochastic gradient descent (momentum parameter of 0.9), batch size of 128, weight decay of 10−4, and an initial learning rate of 0.1 (reduced by 1/10 at epochs 80, 100).
All experiments follow the preprocessing of Rebuffi et al. (2017; 2018), alongside standard augmentations such as normalization, random cropping, etc. Accuracies are averaged over five seeds.
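A sketch of this optimization recipe is given below; the function name and the assumption that only adapter parameters require gradients are illustrative.

```python
import torch

def build_optimizer_and_scheduler(model: torch.nn.Module):
    """SGD recipe from the experiments: momentum 0.9, weight decay 1e-4, lr 0.1,
    reduced by a factor of 10 at epochs 80 and 100 (120 epochs total)."""
    trainable = [p for p in model.parameters() if p.requires_grad]  # gates + corrections only
    optimizer = torch.optim.SGD(trainable, lr=0.1, momentum=0.9, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80, 100], gamma=0.1)
    return optimizer, scheduler
```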
Increasing the number of corrections K within SLA results in small, consistent performance gains. As K = 2 already represents a solid boost from the baseline of having no adapters, we focus on this result in the main part, and report results for higher K alongside variances in Appendix C.
Office-Home The underlying data contains a variety of object classes (alarm clock, backpack, etc.) among four domains: art, clipart, product, and real world (Venkateswara et al., 2017). In Table 2 we show results for d-supervised multi-domain (MD) approaches: RA (Rebuffi et al., 2018), domain-adversarial learning (Ganin et al., 2016) and a baseline of 4×ResNet26, one for each domain. For latent domain (LD) baselines, we then learn a single ResNet26, this time as a latent domain model over all domains. Next, we couple SLA with the very same ResNet26.
Learning a single ResNet26 over latent domains with no access to d-labels significantly harms performance. This problem is not addressed by simply increasing the depth of the network: while accuracy improves slightly, a ResNet56 exhibits the same performance losses — in particular on the latent domains product (P) and real world (R).
While residual adaptation (RA) (Rebuffi et al., 2018) was shown to work extremely well in many multi-domain scenarios, performance here is sub-par, regardless of whether it accesses d (MD: one Vd per-domain) or not (LD). This likely results from linear modules being reserved for each d when using annotations, enabling no native cross-domain sharing of parameters. When d is hidden on the other hand, the model is forced to share a single linear adaptation module V between all four hidden domains, without the flexible gating we propose in SLA.
Learning annotations through latent domain clustering and coupling this with domain-adversarial gradient reversal as in MMLD (Matsuura & Harada, 2020) increases performance relative to its d-annotated counterpart (Ganin et al., 2016). The increase is modest however, likely because enforcing domain-invariance at the gradient level negatively impacts the model's ability to discriminate between classes (Wang et al., 2020). Another related baseline is MLFN (Chang et al., 2018), which builds on ResNeXt (Xie et al., 2017) to define a latent-factor architecture that accounts for multi-modality in data. Crucially, where our method is fine-grained and shares convolutions at every layer, MLFN instead enables and disables entire network blocks, allowing us to outperform it.

1 Code is available at github.com/VICO-UoE/LatentDomainLearning.
SLA outperforms the currently available latent domain models by a consistent margin, and increases UAcc by 12.79% relative to ResNet26. Best performance is obtained when K = D, with performance reducing slightly for K > D due to overfitting of larger domains (see Appendix C).
PACS The second experiment examines performance on the PACS dataset (Li et al., 2017). Crucially PACS domains (art, cartoon, photo, sketch) differ more markedly from one another (c.f. examples in Fig. 6), hence constituting an interesting latent domain problem.
Even for more distinct domains as in PACS, results in Table 3 show that SLA improves over existing baselines. The largest gains occur on smaller domains (e.g. art), where standard models suppress underrepresented parts of the distribution (see additional discussion on imbalanced distributions in Appendix G). Our method again surpasses the accuracy of 4×ResNet26, while requiring a fraction of the total parameters (∼ 9.7 mil for K = 5 vs. ∼ 24.8 mil). The performance of SLA again continues to increase with larger K (see Appendix C).
The performance increase from using a latent domain-adversarial approach (Matsuura & Harada, 2020) versus using domain-annotations (Ganin et al., 2016) confirms that learning domains alongside the rest of the network can be a better strategy than trusting in annotations. Our approach again improves over this, without requiring a clustering stage as in MMLD.
Results for k-means (using D = 4 centers and clustered on the feature level) and subsequent finetuning show that a two-stage strategy is suboptimal. This is not surprising since, similar to d-supervision via g_d in Φ of eq. (3), clustering learns fixed switches that get used across all layers. In contrast, in SLA we flexibly share or separate features individually at every layer (c.f. qualitative results in Fig. 3), synergizing only where appropriate.
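For reference, stage one of the two-stage k-means baseline discussed above could be sketched as follows; the choice of pooled backbone features and the scikit-learn clustering call are assumptions about the protocol, not a reproduction of it.

```python
import numpy as np
from sklearn.cluster import KMeans

def assign_pseudo_domains(features: np.ndarray, num_clusters: int = 4) -> np.ndarray:
    """Stage 1: cluster frozen-backbone features into pseudo-domains."""
    return KMeans(n_clusters=num_clusters, n_init=10, random_state=0).fit_predict(features)

# Stage 2 (not shown): finetune one adapter (or one model) per pseudo-domain,
# using the fixed cluster assignments as hard switches in every layer.
```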
DomainNet We also evaluate models on a large-scale benchmark called DomainNet (Peng et al., 2019a). This dataset contains 518 447 images from six domains (clipart, painting, photos, sketch, infographics, and quickdraw), with a total of |Y| = 345 object classes. The optimization settings remain unchanged from those in previous sections.
Results are shown in Table 4. MLFN performs best on quickdraw, a domain that differs visibly from others (c.f. Fig. 6 for examples from each domain), and having entire network blocks dedicated to it seems to benefit performance. On all remaining domains, SLA outperforms existing models,
regardless of whether they were designed specifically for multi-domain problems, such as RA, or whether they are much deeper/parameter-intensive (ResNet56).
Qualitative analysis We (i) compare global statistics of Office-Home and PACS domains as well as (ii) their per-layer treatment within SLA; (iii) analyze sparse gating, (iv) representations learned by SLA, and show that (v) our module shares between geometric properties (shape, pose, etc.).
i) Fig. 2: average cosine similarities of per-domain gating vectors g∈GL across l= 1, . . . , L layers of ResNet26 show that Office-Home domains differ less than those in PACS.
ii) Fig. 3: layerwise measurements of Corr[g_l(x), g_l(x′)] for x, x′ drawn from differing domains d ≠ d′ for Office-Home. If inter-domain correlation is high, then similar corrections V_k are responsible for processing samples from two domains. Across top layers of the network there is little correlation, presumably as low-level information associated with each domain is processed independently. In the mid to bottom stages correlation increases: these layers are typically associated with higher-order features (Yosinski et al., 2014; Mahendran & Vedaldi, 2016; Asano et al., 2020), and since label spaces are shared between latent domains, similar object-level features are required to classify objects into their respective categories.
iii) Fig. 4: sparse gates have the flexibility to either output singular activations (i.e. become fully discrete) or all non-zero values (a continuous gate). We measure the per-layer sparsity E_{x∼P_d}[K − ‖g_l(x)‖_0]/(K − 1), where ‖·‖_0 counts values different from zero, finding the sparsity of SLA to vary across model depth. Interestingly, after each downsampling operation SLA tends to be relatively sparse, followed by a dense gate, then again a sparse one, and so forth. The model thus utilizes the extra flexibility resulting from sparse gates (a short sketch of this sparsity measurement follows item (v) below).
Due to PACS domains being relatively distinctive, the dataset is an interesting candidate for additional analysis in (iv) and (v) of how sparse adaptation handles the different ground-truth domains.
iv) Fig. 5 (left): gate vectors g ∈ GL for samples from all four domains in PACS visualized by their principal components. SLA exhibits an intuitive clustering of human-annotated PACS domains: visually similar art and photo (•,•) cluster together. The manifold describing sketches (•) is arguably more primitive than those of the other domains, and indeed only maps to a small region. Cartoon (•) lies somewhere between sketches and real world images. This matches intuition: a cartoon is, more or less, just a colored sketch.
Fig. 5 also highlights one sample that shows an elephant that SLA places among the cartoon (•) domain, but which has been assigned a ground-truth domain label of photo (•) in the PACS dataset. The ground-truth label seems to have been annotated in error, but unlike approaches that rely on d-supervision, our SLA processes latent domains on the fly and is therefore unaffected by this.
v) Fig. 5 (right): pairs of samples with similar gates. This shows that latent domains are indicative of more than ground-truth domain labels and extend to geometric similarities: pose, color, etc. of the samples are visibly related. Compare in particular the poses of elephants/dogs (second/third row).
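As referenced in item (iii), the per-layer sparsity statistic could be computed as in this short sketch, assuming the gate activations of one layer have been collected into a tensor of shape (batch, K).

```python
import torch

def gate_sparsity(gates: torch.Tensor) -> float:
    """Per-layer sparsity E[K - ||g(x)||_0] / (K - 1) over a batch of gate vectors.
    Assumes K > 1; returns 0 for fully dense gates and 1 when one correction fires per sample."""
    K = gates.size(-1)
    num_nonzero = (gates > 0).sum(dim=-1).float()    # ||g(x)||_0 per sample
    return float(((K - num_nonzero) / (K - 1)).mean())
```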
5 CONCLUSION
In this paper we explored two questions: (i) whether domain associations are required for learning effective models over multiple visual domains and (ii) how multi-domain models may best be learned without depending on manually curated domain labels.
As has been shown, the performance of existing models does degrade without domain labels, raising doubts about their suitability for realistic problems that involve diverse data sources. As a remedy, we proposed a novel adaptation strategy which reclaims (and often exceeds) lost accuracy on latent domains, benefiting several problems where some notion of a domain (but no annotation) exists.
ACKNOWLEDGEMENT
HB is supported by the EPSRC programme grant Visual AI EP/T028572/1. TH was supported by EPSRC grant EP/R026173/1.
A DATASETS
Fig. 6 shows examples from the latent domain benchmarks evaluated in Section 4. The selected images have equivalent classes yd = yd′ ∈ Y (for example chair for Office-Home), but different domains (e.g. d = {art, clipart, product, real world}). These examples show that data from different domains often contain very different visual characteristics (compare e.g. photo vs. sketch for PACS), even when the object is the same. At the same time, other domains are more alike (e.g. art and photo), indicating that different amounts of sharing between per-domain parameters are required, which in SLA is facilitated by its gating mechanism.
B RELATED WORK
Multi-domain learning relates most closely to our work. The state-of-the-art methods introduce small convolutional corrections in residual networks to account for individual domains (Rebuffi et al., 2017; 2018), which was recently extended to obtain efficient multi-task models for related language tasks Stickland & Murray (2019). Other work makes use of task-specific attention mechanisms (Liu et al., 2019a), attempts to scale task-specific losses (Kendall et al., 2018), or addresses tasks at the level of gradients (Chen et al., 2017). Crucially, these approaches all rely firmly on domain labels.
Our work is loosely related to learning universal representations (Bilen & Vedaldi, 2017), which was used as a guiding principle in designing more transferable models (Tamaazousti et al., 2019). However, these works also assume the presence of domain labels. Multimodal learning does not make this assumption, and was shown to benefit from accounting for latent semantic factors to match images (Chang et al., 2018), or from normalizing data in separate groups (Deecke et al., 2019). As we show in our experiments (see Section 4), latent domain learning however benefits from more customized solutions than these.
The proposed module gives rise to a differentiable dynamic network architecture, studied e.g. for reinforcement learning (Zoph & Le, 2017; Pham et al., 2018), Bayesian optimization (Kandasamy et al., 2018), or when adapting to new tasks (Mallya et al., 2018; Rosenfeld & Tsotsos, 2018). For such architectures, two components are commonly used: discrete Gumbel-based sampling (Jang et al., 2016), e.g. leveraged in dynamic computer vision architectures (Veit & Belongie, 2018; Sun et al., 2019a), or continuous self-attentive approaches (Lin et al., 2017b), which have been used successfully to scale expert models (Jacobs et al., 1991; Jordan & Jacobs, 1994) to large problem spaces (Shazeer et al., 2017; Wang et al., 2019).
From the perspective of algorithmic fairness, a desirable model property is to ensure consistent predictive equality across different identifiable subgroups in data (Zemel et al., 2013; Hardt et al., 2016; Fish et al., 2016). This relates to one of the goals in latent domain learning: to limit implicit model bias towards large domains, and improve robustness on small domains. Recent work explores connections between models and empirical fairness for visual recognition (Bagdasaryan et al., 2019; Hooker et al., 2020; Wang et al., 2020), different from our experiments however (see Appendix F) they focus their analysis on a setting in which annotations for protected attributes are available.
C VARIATION OF RESULTS
Fig. 7 displays variances of accuracies recorded over ten random initializations on Office-Home (left) and PACS (right). We generally found SLA to be robust to different optimization settings, and as a result observed variances are relatively low across experiments.
Larger K brings an improvement of around 0.5-1% in performance at the expense of a linear increase in learnable parameters (c.f. next section). While accuracy is improved by setting K > 2, gains appear to saturate, in line with previous observations around network width (Xie et al., 2017).
D MEMORY REQUIREMENTS
In SLA every layer contains O(K|C| + K|C|^2) parameters to parametrize the gates and the corrections V_k, respectively. This is however an extremely modest requirement, in particular because f_0 stays fixed: while a ResNet26 contains ∼ 6.2 mil learnable parameters, even when setting K = 5 within SLA it has just 3.5 mil free parameters, a fraction of the number of parameters needed to parametrize four ResNet26 (around 24.8 mil parameters).
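A small helper for this per-layer count (one C → K gate plus K bias-free 1x1 corrections) might look as follows; the bias-free assumption mirrors the sketches above.

```python
def sla_params_per_layer(channels: int, num_latent: int) -> int:
    """O(K*|C| + K*|C|^2): a C -> K gate plus K bias-free 1x1 convolutions with C channels."""
    gate_params = num_latent * channels
    correction_params = num_latent * channels * channels
    return gate_params + correction_params
```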
Note also that the complexity of solving sparse gates in SLA scales as O(K logK), a negligible increase given the small K required in our method.
E ABLATION
Replacing sparse gating within SLA registers a drop in performance, regardless of whether smooth or discrete mechanisms are used. Accuracies for soft and straight-through Gumbel-softmax sampling (Jang et al., 2016) were on par; we report straight-through sampling here.
We also ran experiments where we did not fix the residual backbone f_0 but updated its parameters alongside the learning of SLA. In line with what Rebuffi et al. (2017) report, this led to overfitting and performance dropped to UAcc = 73.53.
F FAIRNESS
Recent work elevated the role of small subgroups in data and examined model fairness on CelebA (Bagdasaryan et al., 2019; Wang et al., 2020; Hooker et al., 2020). Because such subgroups may be interpreted as constituting an individual latent domain component Pd, they are an interesting candidate to evaluate our purpose-built SLA on.
The benchmark contains different labeled attributes (e.g. “brown hair”, “glasses”), and is modified from the original dataset by hiding gender labels. Models are evaluated on all 39 remaining attributes, which subsequently experience varying amounts of gender skew. Framed as a latent domain problem we have d = {female, male}, but models have no access to this information. The images used are the entire Aligned&Cropped subset (Liu et al., 2015) over which we finetune residual models, replacing only the fully-connected layer of the network. We use the optimization settings introduced in Section 4 for 70 epochs with reductions at epochs 30, 40, and 50, selecting the best model on the validation split. This experimental setup is identical to previous work on empirical fairness (Wang et al., 2020; Ramaswamy et al., 2020), which however – different from our work – focused on learning models that have access to the gender-attribute d.

Table 6: Average precision and bias amplification of SLA on the CelebA fair attribute recognition benchmark (Wang et al., 2020).
            ResNet18   + SLA            ResNet34   + SLA            ResNet50   + SLA
  mAP (↑)   71.76      73.22 (+1.46)    71.33      73.98 (+2.65)    74.52      75.03 (+0.51)
  BA (↓)    0.025      0.014            0.022      0.009            0.012      0.008

Figure 8: Change in AP between ResNet18 and ResNet18-SLA for different gender skews in CelebA attributes.
We evaluate per-attribute accuracy using mean average precision (mAP) and report bias amplification (BA) (Zhao et al., 2017). This compares the propensity of a model to make positive predictions (i.e. f exceeds some threshold t_+ ∈ [0, 1]) in the gender g*_y that appears most frequent within attribute y, compared to the true counted ratio of positive examples y_+:
BA[f] = E_{x ∼ P_x} [ 1_{f(x) > t_+ | g*_y} / 1_{f(x) > t_+} ] − E_{y ∼ P_y} [ 1_{y = y_+ | g*_y} / 1_{y = y_+} ],  (6)
where t_+ is optimized on the validation split. For example, if 60% of male examples are wearing glasses but under the model this is raised to a total of 65%, then bias is amplified by BA = 0.05.
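As one plausible reading of this measure, following the worked example above (positive-prediction rate within the dominant gender minus the corresponding ground-truth rate), a sketch is given below; this is our own illustrative interpretation, not the authors' evaluation code, and may differ from the exact form of eq. (6).

```python
import numpy as np

def bias_amplification(preds: np.ndarray, labels: np.ndarray, genders: np.ndarray,
                       threshold: float = 0.5) -> float:
    """One reading of BA for a single attribute: positive-prediction rate within the
    dominant gender minus the true positive rate within that gender."""
    # g*_y: the gender (encoded as a non-negative integer) most frequent among positives.
    dominant = np.bincount(genders[labels == 1]).argmax()
    in_dominant = genders == dominant
    pred_rate = float((preds[in_dominant] > threshold).mean())
    true_rate = float((labels[in_dominant] == 1).mean())
    return pred_rate - true_rate
```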
We report performance for ResNet18, ResNet34, and ResNet50 in Table 6 and compare this to the same model with SLA inserted. SLA consistently raises both mAP and reduces bias, indicating that it relies less on spurious correlations in data to formulate its predictions.
In Fig. 8 we compare per-attribute skew toward either female or male (whichever is more frequent) to the gain in performance from ResNet18 to the same model but with SLA inserted. We observe a clear trend here, whereby SLA is able to raise performance the most in those attributes that experience the largest amounts of skew.
G LONG-TAILED RECOGNITION
Standard models often experience difficulty when some classes are heavily underrepresented. This problem has recently been studied in long-tailed recognition (Liu et al., 2019b; Cao et al., 2019) with
benchmarks that modify CIFAR-10 and CIFAR-100 to an imbalanced version by dropping some classes (e.g. 6-10 for CIFAR-10) (Buda et al., 2018). The severity of the imbalance is described via the ratio ρ = n_max/n_min between the largest and smallest classes.
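A minimal sketch of deriving per-class sample counts for such an imbalanced subset is given below; the exponential decay profile between n_max and n_min is an assumption on our part, as benchmarks differ in how intermediate classes are subsampled.

```python
import numpy as np

def long_tailed_counts(num_classes: int, n_max: int, rho: float) -> np.ndarray:
    """Per-class sample counts decaying from n_max down to n_min = n_max / rho."""
    decay = rho ** (-np.arange(num_classes) / (num_classes - 1))
    return np.maximum((n_max * decay).astype(int), 1)

print(long_tailed_counts(num_classes=10, n_max=5000, rho=100))  # 5000 ... 50
```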
Long-tailed distributions may be viewed as containing an underrepresented latent component with π = 1/(1 + ρ), and previous results (c.f. Section 4) that fortified small latent domains within P motivate us to evaluate the imbalance setting more closely here.
Since our strategy is architecture-based, it can be combined with the most recent state-of-the-art (loss-based) techniques for long-tailed recognition: a label-distribution-aware margin loss with deferred reweighting (Cao et al., 2019), or reducing contributions from well-classified examples as in focal losses (Lin et al., 2017a). As Table 7 shows, adaptation via sparse gates acts as a regularizer on the underlying ResNet26, and consistently improves performance on long-tail benchmarks.

1. What is the focus and contribution of the paper on latent domain learning?
2. What are the strengths of the proposed approach, particularly in its technical soundness and potential applications?
3. What are the weaknesses of the paper, especially in terms of hyperparameter selection and experimental results' variability?
4. How can the authors improve the robustness of their model?
5. Are there any other approaches or methods that could enhance or complement the proposed technique?

Summary Of The Paper
In this paper, the authors proposed latent domain learning for adaptation. Experiments on multiple benchmark datasets show improved results. Several visualizations also illustrate the effectiveness of the proposed approach.
Review
Strengths:
The proposed latent domain learning makes sense and is technically sound to me. It also has great potential in real-world applications.
Extensive experiments and sufficient analysis validated the approach empirically.
Writing is good and easy to follow.
Weaknesses:
It seems that the optimal value of the hyperparameter K is different for different datasets. Is there a principled methodology to pick a good value, instead of using K from 2 to 5, or just K = 2, as experimented with in the current draft?
For the results, which are averaged over 5 random initializations, what is the variance for each experiment? The mean alone is not sufficient for comparison, as a very large variance would indicate that the model is not robust.
ICLR | Title
Visual Representation Learning over Latent Domains
Abstract
A fundamental shortcoming of deep neural networks is their specialization to a single task and domain. While multi-domain learning enables the learning of compact models that span multiple visual domains, these rely on the presence of domain labels, in turn requiring laborious curation of datasets. This paper proposes a less explored, but highly realistic new setting called latent domain learning: learning over data from different domains, without access to domain annotations. Experiments show that this setting is challenging for standard models and existing multi-domain approaches, calling for new customized solutions: a sparse adaptation strategy is formulated which enhances performance by accounting for latent domains in data. Our method can be paired seamlessly with existing models, and benefits conceptually related tasks, e.g. empirical fairness problems and long-tailed recognition.
N/A
A fundamental shortcoming of deep neural networks is their specialization to a single task and domain. While multi-domain learning enables the learning of compact models that span multiple visual domains, these rely on the presence of domain labels, in turn requiring laborious curation of datasets. This paper proposes a less explored, but highly realistic new setting called latent domain learning: learning over data from different domains, without access to domain annotations. Experiments show that this setting is challenging for standard models and existing multi-domain approaches, calling for new customized solutions: a sparse adaptation strategy is formulated which enhances performance by accounting for latent domains in data. Our method can be paired seamlessly with existing models, and benefits conceptually related tasks, e.g. empirical fairness problems and long-tailed recognition.
1 INTRODUCTION
Datasets have been a major driving force behind the rapid progress in computer vision research in the last two decades. They provide a testbed for developing new algorithms and comparing them to existing ones. However, datasets can also narrow down the focus of research into overspecialized solutions and impede developing a broader understanding of the world.
In recent years this narrow scope of datasets has been widely questioned (Torralba & Efros, 2011; Tommasi et al., 2017; Recht et al., 2019) and addressing some of these limitations has become a very active area of research. Two actively studied themes to investigate broader learning criteria are multi-domain learning (Nam & Han, 2016; Bulat et al., 2019; Schoenauer-Sebag et al., 2019) and domain adaptation (Ganin et al., 2016; Tzeng et al., 2017; Hoffman et al., 2018; Xu et al., 2018; Peng et al., 2019a; Sun et al., 2019b). While multi-domain techniques focus on learning a single model that can generalize over multiple domains, domain adaptation techniques aim to efficiently transfer the representations that are learned in one dataset to another.
Related themes have also been studied in domain generalization (Li et al., 2018; 2019b;a; Gulrajani & Lopez-Paz, 2020) and continual learning (Kirkpatrick et al., 2017; Lopez-Paz & Ranzato, 2017; Riemer et al., 2019), where the focus lies on learning representations that can generalize to unseen domains, and to preserve knowledge acquired from previously seen tasks, respectively.
While there exists no canonical definition for what exactly a visual domain is, previous works in multi-domain learning assume that different subsets of data exist, with some defining characteristic that allows them to be separated from each other. Each subset, indexed by d = 1, . . . , D, is assigned to a pre-defined visual domain, and vice-versa multi-domain methods then use such domain associations to parameterize their representations and learn some pθ(y|x, d). In some cases domains are intuitive and their annotation straightforward. Consider a problem where images have little visual relationship, for example joint learning of Omniglot handwritten symbols (Lake et al., 2015) and CIFAR-10 objects (Krizhevsky & Hinton, 2009). In this case, it is safe to assume that encoding an explicit domain-specific identifier into pθ is a good idea, and results in the multi-domain literature provide clear evidence that it is beneficial to do so (Rebuffi et al., 2018; Liu et al., 2019a; Guo et al., 2019a; Mancini et al., 2020).
The assumption that domain labels are always available has been widely adopted in multi-domain learning; however this assumption is not without difficulty. For one, unless the process of domain annotation is automated due to combining existing datasets as in e.g. Rebuffi et al. (2017), their manual collection, curation, and domain labeling is very laborious.
And even if adequate resources exist, it is often difficult to decide the optimal criteria for the annotation of d: some datasets contain sketches, paintings and real world images (Li et al., 2017), others images captured during day or night (Sultani et al., 2018). Automatically collected datasets (Thomee et al., 2016; Sun et al., 2017) contain mixtures of low/high resolution images, taken with different cameras by amateurs/professionals. There is no obvious answer which of these should form their own distinct domain subset.
Moreover, the work of Bouchacourt et al. (2018) considers semantic groupings of data: they show that when dividing data by subcategories, such as size, shape, etc., and incorporating this information into the model, then this benefits performance. Should one therefore also encode the number of objects into domains, or their color, shape, and so on?
Given the relatively loose requirement that domains are supposed to be different while related in some sense (Pan & Yang, 2009), these examples hint at the difficulty of deciding whether domains are needed, and – if the answer to that is yes – what the optimal domain criteria are. And note that even if such assignments are made very carefully for some problem, nothing guarantees that they will transfer effectively to some other task.
This paper carefully investigates this ambiguity and studies two central questions:
1. Are domain labels always optimal for learning multi-domain representations? 2. How can models best be learned that generalize well over visually diverse domains, without
domain labels?
To study this problem, we introduce a new setting (c.f. Fig. 1) in which models are learned over multiple domains without domain annotations — latent domain learning for short.
While latent domain learning is a highly practical research problem in the context of transfer learning, it poses multiple challenges that have not been previously investigated in connection with deep visual representation learning. In particular, we find that the removal of domain associations leads to performance losses for standard architectures due to imbalances in the underlying distribution and different difficulty levels of the associated domain-level tasks.
We carry out a rigorous quantitative analysis that includes concepts from multi-domain learning (Rebuffi et al., 2018; Chang et al., 2018), and find that their performance benefits do not directly extend to latent domain learning. To account for this lost performance, we formulate a novel method called sparse latent adaptation (Section 3.2) which enables internal feature representations to dynamically adapt to instances from multiple domains in data, without requiring annotations for this. Moreover, we show that latent domain methods appear to benefit single domain data and real world tasks, such as fairness problems (Appendix F), and long-tailed recognition (Appendix G).
2 LATENT DOMAIN LEARNING
This section provides an overview over latent domain learning and contrasts it against other types of related learning problems, in particular multi-domain learning.
2.1 PROBLEM SETTING
When learning on multiple domains, the common assumption is that data is sampled i.i.d. from a mixture of distributions Pd with domain indices d = 1, . . . , D. Together, they constitute the datagenerating distribution as P = ∑ d πdPd, where each domain is associated with a relative share πd = Nd/N , with N the total number of samples, and Nd those belonging to the d’th domain. In multi-domain learning, domain labels are available for all samples (Nam & Han, 2016; Rebuffi et al., 2017; 2018; Bulat et al., 2019), such that the overall data available for learning consists of DMD = {(xi, di, yi)} with i = 1, . . . , N . In latent domain learning the information associating each sample xi with a domain di is not available. As such, domain-specific labels yi cannot be inferred from sample-domain pairs (xi, di) and one is instead forced to learn a single model fθ over the latent domain dataset DLD = {(xi, yi)}. While latent domain learning can include mutually exclusive classes and disjoint label spaces Y1 ∪ · · · ∪ YD (as in long-tailed recognition, see Appendix G), we mainly focus on the setting of shared label spaces, i.e. Yd = Yd′ . For example a dataset may contain images of dogs or elephants that can appear as either photos, paintings, or sketches.
Latent domains have previously attracted interest in the context of domain adaptation, where the lack of annotations was recovered through hierarchical Hoffman et al. (2012) and kernel-based clustering (Gong et al., 2013), via exemplar SVMs (Xu et al., 2014), or by measuring mutual information (Xiong et al., 2014). More recent work corrects batch statistics of domain adaptation layers using Gaussian mixtures (Mancini et al., 2018), or studies the shift from some source domain to a target distribution that contains multiple latent domains (Peng et al., 2019b; Matsuura & Harada, 2020). Latent domain learning however differs fundamentally from these works: Table 1 contains a comparison to existing transfer learning settings.
A common baseline in multi-domain learning is to finetune D models, one for each individual domain (Rebuffi et al., 2018; Liu et al., 2019a). This requires learning a large number of parameters and shares no parameters across domains, but can serve as a strong baseline to compare against. We show that in many cases, even when domains were carefully annotated, a dynamic latent domain approach can surpass the performance of such domain-supervised baselines (see Section 4).
2.2 OBSERVED VS. UNIFORM ACCURACY
Consider a problem in which the data is sampled i.i.d. from P = πaPda + πbPdb , i.e. two hidden domains. When domain labels are not available in the data, a standard strategy is to treat all samples equally, and measure the observed accuracy:
OAcc[f ] = E(xi,yi)∼P[1yf(xi)=yi ], (1)
where yf denotes the class assigned to sample xi by the model f , and yi its corresponding label for training. The OAcc has a problematic property: if P consists of two imbalanced domains such that πa ≥ πb, then the performance on da dominates it. For example if da has a 90% overall share, and the model perfectly classifies this domain while obtaining 0% accuracy on db, then OAcc would still assume 0.9, hiding the underlying damage to domain db.
This motivates alternative formulations for latent domain learning, to anticipate (and account for) imbalanced domains in data. If it is possible to identify some semantic domain labeling (as typically included in multi-domain/domain adaptation benchmarks), one can compare performances across individual subgroups. This allows picking up on domain-specific performance losses which traditional metrics (such as OAcc) fail to capture.
Where this is possible, we therefore propose to also measure latent domain performance in terms of uniform accuracy which decouples accuracies from relative ground-truth domain sizes:
UAcc[f ] = 1
D D∑ d=1 E(xi,yi)∼Pd [1yf (xi)=yi ]. (2)
Returning to the above example, a uniform measurement reflects the model’s lack of performance on db as UAcc = 0.5. Once again note while ground-truth domain annotations are required in order to compute uniform accuracy, these are never used to train latent domain models.
3 METHODS
To enable robust learning in the new proposed setting, we formulate a novel module called sparse latent adaptation which can adaptively account for latent domains. Section 3.1 reviews adaptation strategies popular in the multi-domain context, which our method extends (and generalizes).
3.1 LATENT ADAPTATION
When domain labels d are available (not the case in latent domain learning) one strategy established by Rebuffi et al. (2017) is to modulate networks by constraining the layerwise transformation of residual networks (He et al., 2016) Φ(x) = x + f(x) to allow at most a linear change Vd per each domain from some pretrained mapping Φ0 (with f0 in every layer), whereby Φ(x)−Φ0(x) = Vdx. Note the slight abuse of notation here in letting x denote a feature map with channels C. Rearranging this yields:
Φ(x, d) = x+ f0(x) + D∑ d=1 gdVd(x), (3)
with a domain-supervised switch that assigns corrections to domains, i.e. gd = 1 for d associated with x and 0 otherwise. Each Vd is parametrized through 1x1 convolutions, and f0 denotes a shared 3x3 convolution obtained e.g. on ImageNet (Deng et al., 2009). This builds on the assumption that models with strong general-purpose representations require minimal changes to adapt to new tasks (Bilen & Vedaldi, 2017), making learning each Vd sufficient, while f0 remains as is. Such adaptation strategies have been successfully used in few shot learning (Li et al., 2021) and NLP (Stickland & Murray, 2019) to restrict the number of learnable parameters there.
In latent domain learning access to d is removed, resulting in two new challenges: we have no a priori information about the right number of corrections {Vd}, and we cannot use d to decide which one of these to apply.
To mitigate the lack of domain labels d, first we assume that input data is constituted by K latent distributions Pk. Second we propose to replace the switch gd with a learnable gating mechanism g1(x), . . . , gK(x) that assigns each sample x to latent domains as follows:
Φ(x) = x+ f0(x) + K∑ k=1 gk(x)Vk(x), (4)
The gates gk control which convolution is applied to which sample x, and correspond to a categorical variable over K categories, i.e. 0 ≤ gk ≤ 1 and ∑ k gk = 1. Note in particular how parametric dependency of Φ on d is removed. How to best choose K is discussed in more detail in Section 4.
While we motivate our latent domain module from learning over multiple domains, the main goal is not to recover the domain labels annotated in some datasets. When optimizing some loss (standard cross-entropy in the classification case), there is no guarantee that the learned Vk will correspond to an annotated visual domain and many additional factors (shape, pose, color, etc.) can enter them as
well. Latent domain models are simply optimized to produce the lowest training error, and in fact seldom recover ground-truth domains (c.f. Fig. 5). Note the broader concept presented here may in principle also be incorporated with other multi-task concepts (Perez et al., 2018; Guo et al., 2019a), adaptation strategies however stand out due to their methodological simplicity.
Different options exist for parametrizing the gating function g : X → G ⊆ RK . An ideal gating mechanism for latent domain learning would fulfill two seemingly incompatible properties: be able to filter latent domains in some layers (requiring a discrete gate), but also share parameters between related latent domains in other layers (smooth gates). The next section proposes how this can be resolved without requiring task relationships (Vandenhende et al., 2020) or outer optimization loops (Wu et al., 2018) through the use of sparseness.
3.2 SPARSE LATENT ADAPTERS (SLA)
We parameterize the gating function g with a small linear transformation W : C → RK that constitutes the pre-activation q=Wϕ(x) within the gates, where ϕ : X → C denotes an average pooling projection onto the channels.
A crucial choice is whether the activation for q ∈ RK should map to a discrete space G = {0, 1}K or a continuous G = [0, 1]K in which the Vk are shared. We propose a different strategy that lets gates be smooth when appropriate, but a threshold τ allows for sparse (or discrete) outputs fτ (q) = [q − τ(q)]+ with [·]+ = max(0, ·). Crucially fτ can be solved in a differentiable manner (Martins & Astudillo, 2016) by sorting q1 ≥ · · · ≥ qK , solving k∗ = max{k | 1 + kqk > ∑ j≤k qj} and computing τ = [( ∑ j≤k∗ qj)− 1]/k∗.
Consider q = [0.1, 1.0, 0.5] for which sparse activation results in fτ (q) = [0.0, 0.75, 0.25] while softmax yields [0.202, 0.497, 0.301]. Sparse activation filters out q1, while sharing between q2 and q3. We may now define:
SLA(x) , x+ f0(x) + K∑ k=1 [ fτ ◦W ◦ ϕ(x) ] k Vk(x), (5)
where [·]k picks the k’th element of the gating sequence. To the best of our knowledge sparse activation strategies were never previously employed for expert models in computer vision and have so far been restricted to the NLP setting (Deng et al., 2017; Peters et al., 2019). Note SLA generalizes residual adaption (Rebuffi et al., 2017; 2018), which is recovered by setting K= 1.
While gating is subject to complex interactions such as negative transfer (Rosenbaum et al., 2019), our ablations in Table 5 clearly show that taking a sparse perspective – which allows the model to assume either continuous or discrete forms – outperforms the alternative of a priori fixing either smoothness through self-attention (Lin et al., 2017b), or discrete Gumbel-based sampling (Jang et al., 2016). Note this choice between discrete (Veit & Belongie, 2018; Guo et al., 2019b) and continuous mechanisms (Shazeer et al., 2017; Sun et al., 2019a; Wang et al., 2019) delineates previous work that employs differentiable gates.
A softmax-activated model can in principle also learn to suppress individual preactivation components by letting some qk go to −∞. This however requires either learning extra calibration parameters at every layer, defining a hard cutoff value (Shazeer et al., 2017) (thereby removing differentiability), or very large row-norms within the linear mapping W— a highly unlikely outcome given the several mechanisms found in state-of-the-art models (in particular weight decay, norm-penalties, or BN (Ioffe & Szegedy, 2015)) which act as direct counterforces to this.
4 EXPERIMENTS
We evaluate our proposed methods on three latent domain benchmarks: Office-Home, PACS, and DomainNet (c.f. Fig. 6, which shows example images from these benchmarks). The main goal here is not to compare to existing multi-domain or domain adaptation methods that these datasets were initially designed for, but to study our two central research questions: whether domain labels are useful for effectively learning over multiple domains, and whether one can learn such representations without domain labels.
We also examine a recent fairness benchmark (see Appendix F), and show that SLA improves robustness under single domain long-tailed distributions (Appendix G). All experiments were implemented in PyTorch (Paszke et al., 2017).1
Optimization In all experiments, we couple our method with a ResNet26 model pretrained on a downsized version of ImageNet that was used in previous work by Rebuffi et al. (2018). In SLA only gates and corrections are learned, the residual backbone f0 remains fixed at its initial parameters, which implicitly regularizes the model (Rebuffi et al., 2017). Training is carried out for 120 epochs using stochastic gradient descent (momentum parameter of 0.9), batch size of 128, weight decay of 10−4, and an initial learning rate of 0.1 (reduced by 1/10 at epochs 80, 100).
All experiments follow the preprocessing of Rebuffi et al. (2017; 2018), alongside standard augmentations such as normalization, random cropping, etc. Accuracies are averaged over five seeds.
Increasing the number of corrections K within SLA results in small, consistent performance gains. As K = 2 already represents a solid boost from the baseline of having no adapters, we focus on this result in the main part, and report results for higher K alongside variances in Appendix C.
Office-Home The underlying data contains a variety of objects classes (alarm clock, backpack, etc.) among four domains: art, clipart, product, and real world (Venkateswara et al., 2017). In Table 2 we show results for d-supervised multi-domain (MD) approaches: RA (Rebuffi et al., 2018), domain-adversarial learning (Ganin et al., 2016) and a baseline of 4×ResNet26, one for each domain. For latent domain (LD) baselines, we then learn a single ResNet26, this time as a latent domain model over all domains. Next, we couple SLA with the very same ResNet26.
Learning a single ResNet26 over latent domains with no access to d-labels significantly harms performance. This problem is not addressed by simply increasing the depth of the network: while accuracy improves slightly, a ResNet56 exhibits the same performance losses — in particular on the latent domains product (P) and real world (R).
While residual adaptation (RA) (Rebuffi et al., 2018) was shown to work extremely well in many multi-domain scenarios, performance here is sub-par, regardless of whether it accesses d (MD: one Vd per-domain) or not (LD). This likely results from linear modules being reserved for each d when using annotations, enabling no native cross-domain sharing of parameters. When d is hidden on the other hand, the model is forced to share a single linear adaptation module V between all four hidden domains, without the flexible gating we propose in SLA.
Learning annotations through latent domain clustering and coupling this with domain-adversarial gradient reversal as in MMLD (Matsuura & Harada, 2020) increases performance relative to its d-annotated counterpart (Ganin et al., 2016). The increase is modest however, likely because enforcing domain-invariance on the gradient level negatively impacts the model's ability to discriminate between classes (Wang et al., 2020). Another related baseline is MLFN (Chang et al., 2018) which builds on ResNeXt (Xie et al., 2017) to define a latent-factor architecture that accounts for multi-
1Code is available at github.com/VICO-UoE/LatentDomainLearning.
modality in data. Crucially where our method is fine-grained and shares convolutions at every layer, MLFN instead enables and disables entire network blocks, allowing us to outperform it.
SLA outperforms the currently available latent domain models by a consistent margin, and increases UAcc by 12.79% relative to ResNet26. Best performance is obtained when K = D, with performance decreasing slightly for K > D due to overfitting on larger domains (see Appendix C).
PACS The second experiment examines performance on the PACS dataset (Li et al., 2017). Crucially PACS domains (art, cartoon, photo, sketch) differ more markedly from one another (c.f. examples in Fig. 6), hence constituting an interesting latent domain problem.
Even for more distinct domains as in PACS, results in Table 3 show that SLA improves over existing baselines. The largest gains occur on smaller domains (e.g. art), where standard models suppress underrepresented parts of the distribution (see additional discussion on imbalanced distributions in Appendix G). Our method again surpasses the accuracy of 4×ResNet26, while requiring a fraction of the total parameters (∼ 9.7 mil for K = 5 vs. ∼ 24.8 mil). The performance of SLA again continues to increase with larger K (see Appendix C).
The performance increase from using a latent domain-adversarial approach (Matsuura & Harada, 2020) versus using domain-annotations (Ganin et al., 2016) confirms that learning domains alongside the rest of the network can be a better strategy than trusting in annotations. Our approach again improves over this, without requiring a clustering stage as in MMLD.
Results for k-means (using D = 4 centers and clustered on the feature level) and subsequent finetuning show that a two-stage strategy is suboptimal. This is not surprising since, similar to d-supervision via gd in Φ of eq. (3), clustering learns fixed switches that get used across all layers. In contrast to this, in SLA we flexibly share or separate features individually at every layer (c.f. qualitative results in Fig. 3), synergizing only where appropriate.
DomainNet We also evaluate models on a large-scale benchmark called DomainNet (Peng et al., 2019a). This dataset contains 518 447 images from six domains (clipart, painting, photos, sketch, infographics, and quickdraw), with a total of |Y| = 345 object classes. The optimization settings remain unchanged from those in previous sections.
Results are shown in Table 4. MLFN performs best on quickdraw, a domain that differs visibly from others (c.f. Fig. 6 for examples from each domain), and having entire network blocks dedicated to it seems to benefit performance. On all remaining domains, SLA outperforms existing models,
regardless of whether they were designed specifically for multi-domain problems, such as RA, or whether they are much deeper/parameter-intensive (ResNet56).
Qualitative analysis We (i) compare global statistics of Office-Home and PACS domains as well as (ii) their per-layer treatment within SLA; (iii) analyze sparse gating, (iv) representations learned by SLA, and show that (v) our module shares between geometric properties (shape, pose, etc.).
i) Fig. 2: average cosine similarities of per-domain gating vectors g∈GL across l= 1, . . . , L layers of ResNet26 show that Office-Home domains differ less than those in PACS.
ii) Fig. 3: layerwise measurements of Corr[gl(x), gl(x′)] for x, x′ drawn from differing d ≠ d′ for Office-Home. If inter-domain correlation is high, then similar corrections Vk are responsible for processing samples from two domains. Across top layers of the network there is little correlation, presumably as low-level information associated with each domain is processed independently. In the mid to bottom stages correlation increases: these layers are typically associated with higher-order features (Yosinski et al., 2014; Mahendran & Vedaldi, 2016; Asano et al., 2020), and since label spaces are shared between latent domains, similar object-level features are required to classify objects into their respective categories.
iii) Fig. 4: sparse gates have the flexibility to either output singular activations (i.e. become fully discrete) or all non-zero values (a continuous gate). We measure the per-layer sparsity Ex∼Pd [K− ‖gl(x)‖0]/(K − 1) where ‖ · ‖0 counts values different from zero, finding sparsity of SLA to vary across model depth. Interestingly after each downsampling operation SLA tends to be relatively sparse, followed by a dense gate, then again a sparse one, and so forth. The model thus utilizes the extra flexibility resulting from sparse gates.
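The per-layer sparsity statistic used in (iii) can be computed directly from a batch of gate vectors. The helper below is an illustrative sketch (not taken from the released code), assuming the gate values of one layer are stacked into a (batch, K) tensor.

```python
import torch

def gate_sparsity(gates: torch.Tensor, K: int) -> float:
    """Per-layer sparsity E[K - ||g(x)||_0] / (K - 1) over a batch of gate vectors.

    `gates` has shape (batch, K); 1.0 means a single active correction per sample
    (fully discrete gate), 0.0 means all K corrections are non-zero (dense gate).
    """
    nonzero = (gates != 0).sum(dim=1).float()     # ||g(x)||_0 per sample
    return ((K - nonzero) / (K - 1)).mean().item()

# toy example with two samples and K = 3
g = torch.tensor([[0.0, 0.75, 0.25],   # two active corrections -> 0.5
                  [1.0, 0.00, 0.00]])  # one active correction  -> 1.0
print(gate_sparsity(g, K=3))           # -> 0.75
```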
Due to PACS domains being relatively distinctive, the dataset is an interesting candidate for additional analysis in (iv) and (v) of how sparse adaptation handles the different ground-truth domains.
iv) Fig. 5 (left): gate vectors g ∈ GL for samples from all four domains in PACS visualized by their principal components. SLA exhibits an intuitive clustering of human-annotated PACS domains: visually similar art and photo (•,•) cluster together. The manifold describing sketches (•) is arguably more primitive than those of the other domains, and indeed only maps to a small region. Cartoon (•) lies somewhere between sketches and real world images. This matches intuition: a cartoon is, more or less, just a colored sketch.
Fig. 5 also highlights one sample that shows an elephant that SLA places among the cartoon (•) domain, but which has been assigned a ground-truth domain label of photo (•) in the PACS dataset. The ground-truth label seems to have been annotated in error, but different from approaches that use d-supervision, our SLA processes latent domains on-the-fly and is therefore not irritated by this.
v) Fig. 5 (right): pairs of samples with similar gates. This shows that latent domains are indicative of more than ground-truth domain labels and extend to geometric similarities: pose, color, etc. of the samples are visibly related. Compare in particular the poses of elephants/dogs (second/third row).
5 CONCLUSION
In this paper we explored two questions: (i) whether domain associations are required for learning effective models over multiple visual domains and (ii) how multi-domain models may best be learned without depending on manually curated domain labels.
As has been shown, the performance of existing models does degrade without domain labels, raising doubts about their suitability for realistic problems that involve diverse data sources. As a remedy, we proposed a novel adaptation strategy which reclaims (and often exceeds) lost accuracy on latent domains, benefiting several problems where some notion of a domain (but no annotation) exists.
ACKNOWLEDGEMENT
HB is supported by the EPSRC programme grant Visual AI EP/T028572/1. TH was supported by EPSRC grant EP/R026173/1.
A DATASETS
Fig. 6 shows examples from the latent domain benchmarks evaluated in Section 4. The selected images have equivalent classes yd = yd′ ∈ Y (for example chair for Office-Home), but different domains (e.g. d = {art, clipart, product, real world}). These examples show that data from different domains often contain very different visual characteristics (compare e.g. photo vs. sketch for PACS), even when the object is the same. At the same time, other domains are more alike (e.g. art and photo), indicating that different amounts of sharing between per-domain parameters are required, which in SLA is facilitated by its gating mechanism.
B RELATED WORK
Multi-domain learning relates most closely to our work. The state-of-the-art methods introduce small convolutional corrections in residual networks to account for individual domains (Rebuffi et al., 2017; 2018), which was recently extended to obtain efficient multi-task models for related language tasks Stickland & Murray (2019). Other work makes use of task-specific attention mechanisms (Liu et al., 2019a), attempts to scale task-specific losses (Kendall et al., 2018), or addresses tasks at the level of gradients (Chen et al., 2017). Crucially, these approaches all rely firmly on domain labels.
Our work is loosely related to learning universal representations (Bilen & Vedaldi, 2017), which was used as a guiding principle in designing more transferable models (Tamaazousti et al., 2019). However, these works also assume the presence of domain labels. Multimodal learning does not make this assumption, and was shown to benefit from accounting for latent semantic factors to match images (Chang et al., 2018), or from normalizing data in separate groups (Deecke et al., 2019). As we show in our experiments (see Section 4), latent domain learning however benefits from more customized solutions than these.
The proposed module gives rise to a differentiable dynamic network architecture, studied e.g. for reinforcement learning (Zoph & Le, 2017; Pham et al., 2018), Bayesian optimization (Kandasamy et al., 2018), or when adapting to new tasks (Mallya et al., 2018; Rosenfeld & Tsotsos, 2018). For such architectures, two components are commonly used: discrete Gumbel-based sampling (Jang et al., 2016), e.g. leveraged in dynamic computer vision architectures (Veit & Belongie, 2018; Sun et al., 2019a), or continuous self-attentive approaches (Lin et al., 2017b), which have been used successfully to scale expert models (Jacobs et al., 1991; Jordan & Jacobs, 1994) to large problem spaces (Shazeer et al., 2017; Wang et al., 2019).
From the perspective of algorithmic fairness, a desirable model property is to ensure consistent predictive equality across different identifiable subgroups in data (Zemel et al., 2013; Hardt et al., 2016; Fish et al., 2016). This relates to one of the goals in latent domain learning: to limit implicit model bias towards large domains, and improve robustness on small domains. Recent work explores connections between models and empirical fairness for visual recognition (Bagdasaryan et al., 2019; Hooker et al., 2020; Wang et al., 2020), different from our experiments however (see Appendix F) they focus their analysis on a setting in which annotations for protected attributes are available.
C VARIATION OF RESULTS
Fig. 7 displays variances of accuracies recorded over ten random initializations on Office-Home (left) and PACS (right). We generally found SLA to be robust to different optimization settings, and as a result observed variances are relatively low across experiments.
Larger K brings an improvement of around 0.5-1% in performance at the expense of a linear increase in learnable parameters (c.f. next section). While accuracy is improved by setting K > 2, gains appear to saturate in line with previous observations around network width (Xie et al., 2017).
D MEMORY REQUIREMENTS
In SLA every layer contains O(K|C| + K|C|²) parameters to parametrize gates and corrections Vk, respectively. This is however an extremely modest requirement, in particular because f0 stays fixed: while a ResNet26 contains ∼ 6.2 mil learnable parameters, even when setting K = 5 within SLA it has just 3.5 mil free parameters, which is a fraction of the number of parameters needed to parametrize four ResNet26 (around 24.8 mil parameters).
Note also that the complexity of solving sparse gates in SLA scales as O(K logK), a negligible increase given the small K required in our method.
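As a rough sanity check of these counts, the snippet below evaluates K·|C| + K·|C|² over an assumed set of per-layer channel widths. The widths are illustrative ResNet-style values rather than the exact ResNet26 configuration, but they land close to the ~3.5 mil figure quoted above.

```python
# Per-layer SLA cost: a |C| x K linear map for the gate plus K 1x1 convolutions
# with |C|^2 weights each, i.e. K*|C| + K*|C|^2 extra parameters per layer.
def sla_params(channels, K):
    return sum(K * c + K * c * c for c in channels)

widths = [64] * 8 + [128] * 8 + [256] * 8   # assumed per-layer channel widths
print(f"{sla_params(widths, K=5) / 1e6:.2f}M extra parameters for K=5")   # -> 3.46M
```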
E ABLATION
Replacing the sparse gating within SLA with either a smooth or a discrete mechanism results in a drop in performance. Accuracies for soft and straight-through Gumbel-softmax sampling (Jang et al., 2016) were on par; we report straight-through sampling here.
We also ran experiments where we did not fix the residual backbone f0 but updated its parameters alongside the learning of SLA. In line with what Rebuffi et al. (2017) report, this led to overfitting and performance dropped to UAcc = 73.53.
F FAIRNESS
Recent work elevated the role of small subgroups in data and examined model fairness on CelebA (Bagdasaryan et al., 2019; Wang et al., 2020; Hooker et al., 2020). Because such subgroups may be interpreted as constituting an individual latent domain component Pd, they are an interesting candidate to evaluate our purpose-built SLA on.
The benchmark contains different labeled attributes (e.g. “brown hair”, “glasses”), and is modified from the original dataset by hiding gender labels. Models are evaluated on all 39 remaining
Table 6: Average precision and bias amplification of SLA on the CelebA fair attribute recognition benchmark (Wang et al., 2020).
           ResNet18  + SLA           ResNet34  + SLA           ResNet50  + SLA
mAP (↑)    71.76     73.22 (+1.46)   71.33     73.98 (+2.65)   74.52     75.03 (+0.51)
BA (↓)     0.025     0.014           0.022     0.009           0.012     0.008
Figure 8: Change in AP between ResNet18 and ResNet18-SLA for different gender skews in CelebA attributes (x-axis: skew, 0.5 to 1.0; y-axis: change in AP [%]).
attributes, which subsequently experience varying amounts of gender skew. Framed as a latent domain problem we have d={female,male}, but models have no access to this information. The images used are the entire Aligned&Cropped subset (Liu et al., 2015) over which we finetune residual models, replacing only the fully-connected layer of the network. We use the optimization settings introduced in Section 4 for 70 epochs with reductions at epochs 30, 40, and 50, selecting the best model on the validation split. This experimental setup is identical to previous work on empirical fairness (Wang et al., 2020; Ramaswamy et al., 2020), which however – different from our work – focused on learning models that have access to the gender-attribute d.
We evaluate per-attribute accuracy using mean average precision (mAP) and report bias amplification (BA) (Zhao et al., 2017). This compares the propensity of a model to make positive predictions (i.e. f exceeds some threshold t+ ∈ [0, 1]) in the gender g∗y that appears most frequent within attribute y, compared to the true counted ratio of positive examples y+:
BA[f] = \mathbb{E}_{x \sim P_x}\!\left[\frac{\mathbb{1}_{f(x) > t_+ \,\mid\, g^*_y}}{\mathbb{1}_{f(x) > t_+}}\right] - \mathbb{E}_{y \sim P_y}\!\left[\frac{\mathbb{1}_{y = y_+ \,\mid\, g^*_y}}{\mathbb{1}_{y = y_+}}\right], \qquad (6)
where t+ is optimized for on the validation split. For example if 60% of male examples are wearing glasses but under the model this is raised to a total of 65%, then bias is amplified by BA = 0.05.
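One plausible way to compute this bias amplification score for a single attribute is sketched below. Since eq. (6) is reconstructed here, the exact indicator bookkeeping may differ from the authors' implementation; the arrays and threshold are purely synthetic.

```python
import numpy as np

def bias_amplification(scores, y_true, gender, majority_gender, t_plus):
    """Sketch of a bias amplification score for one attribute: the share of
    positive *predictions* falling into the majority gender g*_y minus the
    share of positive *labels* falling into that gender."""
    pred_pos = scores > t_plus
    label_pos = y_true == 1
    in_majority = gender == majority_gender
    pred_ratio = (pred_pos & in_majority).sum() / max(pred_pos.sum(), 1)
    true_ratio = (label_pos & in_majority).sum() / max(label_pos.sum(), 1)
    return pred_ratio - true_ratio

# tiny synthetic example: 3 of 4 predicted positives are in the majority gender
# versus 2 of 4 labeled positives, so bias is amplified by 0.25
scores = np.array([0.9, 0.8, 0.7, 0.2, 0.85, 0.1])
y_true = np.array([1, 1, 1, 0, 0, 1])
gender = np.array(["m", "m", "f", "f", "m", "f"])
print(bias_amplification(scores, y_true, gender, "m", t_plus=0.5))   # -> 0.25
```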
We report performance for ResNet18, ResNet34, and ResNet50 in Table 6 and compare this to the same model with SLA inserted. SLA consistently raises both mAP and reduces bias, indicating that it relies less on spurious correlations in data to formulate its predictions.
In Fig. 8 we compare per-attribute skew toward either female or male (whichever is more frequent) to the gain in performance from ResNet18 to the same model but with SLA inserted. We observe a clear trend here, whereby SLA is able to raise performance the most in those attributes that experience the largest amounts of skew.
G LONG-TAILED RECOGNITION
Standard models often experience difficulty when some classes are heavily underrepresented. This problem has recently been studied in long-tailed recognition (Liu et al., 2019b; Cao et al., 2019) with
benchmarks that modify CIFAR-10 and CIFAR-100 to an imbalanced version by heavily subsampling some classes (e.g. classes 6-10 for CIFAR-10) (Buda et al., 2018). The severity of the imbalance is described via the ratio ρ = n_max/n_min between the largest and smallest classes.
Long-tailed distributions may be viewed as containing an underrepresented latent component with π = 1/(1 + ρ), and previous results (c.f. Section 4) that fortified small latent domains within P motivate us to evaluate the imbalance setting more closely here.
Since our strategy is architecture-based, it can be combined with the most recent state-of-the-art (loss-based) techniques for long-tailed recognition: a label-distribution-aware margin loss with deferred reweighting (Cao et al., 2019), or reducing contributions from well-classified examples as in focal losses (Lin et al., 2017a). As Table 7 shows, adaptation via sparse gates acts as a regularizer on the underlying ResNet26, and consistently improves performance on long-tail benchmarks. | 1. What is the focus and contribution of the paper regarding learning from multiple domains?
2. What are the strengths of the proposed approach, particularly in addressing the problem of unknown domain labels?
3. What are the weaknesses of the paper, especially regarding experimental setup and baselines?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any connections between self-supervised learning approaches and latent domain learning that the authors could explore further? | Summary Of The Paper
Review | Summary Of The Paper
The paper addresses the problem of learning a classification model on data from multiple domains, when explicit domain assignment for each data point is not provided.
To solve this problem, the paper proposes the data to be coming from 1 of K unknown (or latent) domains. A single 1x1 convolutional layer per latent domain is applied to feature maps in each residual layer of ResNet. A gating function (based on sparsemax) is proposed to decide which of the K convolutional layers will be applied. The entire network is learned jointly by minimizing the classification objective function.
Review
Strengths
The paper is well-written, easy to follow and understand.
The paper claims to propose a new setting for learning from multiple domains, where domain labels are unknown. This is an important problem since labeling domains can be challenging and/or expensive in real-world applications. While I am not extremely familiar with the field, according to the paper, prior works assume the knowledge of domain associated with a data point.
The idea of using sparse gating mechanism to allow for multiple transformations makes sense since it is possible that transformations learned across various domains are useful for predicting image category.
Qualitative and quantitative experiments seem to corroborate that authors' proposed method is learning a sensible domain representation.
Weaknesses
The experimental setup used in the paper, as well as the baselines are not properly justified. I specifically don't understand
Why is the ResNet backbone kept frozen, and not fine-tuned along with rest of the parameters?
Why do the authors use same parameters across different methods/models? A fair way to compare across methods would be to perform cross-validation across range of hyper-parameters and report the performance on the test set.
It is a bit surprising to me that all the proposed multi-domain approaches perform worse when compared to the latent-domain ones. I'd imagine that a model having domain information for each datapoint would set an upper bound on the performance when compared to methods with no domain information. Is the presence of domain information hurting the model's performance?
Although the paper states that the reported numbers are averaged over five random initializations, the standard errors are not reported
I would like to hear the authors' take on the connection between self-supervised learning (SSL) approaches and latent domain learning. SSL methods learn image representations using self-similarity. In an extreme case where each image is its own domain, SSL might be suited to perform representation learning on such multi-domain data.
Minor points
In Table 1, Ũ_{D+1} is undefined. I presume it is validation or test data from the (D+1)-th domain.
In Section 3.1, using a linear shift in feature map of ResNets for mapping domains (to a "canonical domain"?) is not properly motivated. I assume this strategy has been used in prior works in language but it would be useful to motivate it in the context of domain adaptation problem.
In Section 3.2, some of the symbols (e.g. τ, q) are introduced without proper definition. While I was able to understand them by going back and forth between the current paper and the referred paper, it would be helpful if the authors clarified the notation.
Section 2.2 is a relatively minor detail given that the datasets used in the paper are not highly imbalanced and both observed accuracy and uniform accuracy are highly correlated to each other. Authors can consider moving this detail to the experiments section. |
ICLR | Title
Visual Representation Learning over Latent Domains
Abstract
A fundamental shortcoming of deep neural networks is their specialization to a single task and domain. While multi-domain learning enables the learning of compact models that span multiple visual domains, these rely on the presence of domain labels, in turn requiring laborious curation of datasets. This paper proposes a less explored, but highly realistic new setting called latent domain learning: learning over data from different domains, without access to domain annotations. Experiments show that this setting is challenging for standard models and existing multi-domain approaches, calling for new customized solutions: a sparse adaptation strategy is formulated which enhances performance by accounting for latent domains in data. Our method can be paired seamlessly with existing models, and benefits conceptually related tasks, e.g. empirical fairness problems and long-tailed recognition.
N/A
A fundamental shortcoming of deep neural networks is their specialization to a single task and domain. While multi-domain learning enables the learning of compact models that span multiple visual domains, these rely on the presence of domain labels, in turn requiring laborious curation of datasets. This paper proposes a less explored, but highly realistic new setting called latent domain learning: learning over data from different domains, without access to domain annotations. Experiments show that this setting is challenging for standard models and existing multi-domain approaches, calling for new customized solutions: a sparse adaptation strategy is formulated which enhances performance by accounting for latent domains in data. Our method can be paired seamlessly with existing models, and benefits conceptually related tasks, e.g. empirical fairness problems and long-tailed recognition.
1 INTRODUCTION
Datasets have been a major driving force behind the rapid progress in computer vision research in the last two decades. They provide a testbed for developing new algorithms and comparing them to existing ones. However, datasets can also narrow down the focus of research into overspecialized solutions and impede developing a broader understanding of the world.
In recent years this narrow scope of datasets has been widely questioned (Torralba & Efros, 2011; Tommasi et al., 2017; Recht et al., 2019) and addressing some of these limitations has become a very active area of research. Two actively studied themes to investigate broader learning criteria are multi-domain learning (Nam & Han, 2016; Bulat et al., 2019; Schoenauer-Sebag et al., 2019) and domain adaptation (Ganin et al., 2016; Tzeng et al., 2017; Hoffman et al., 2018; Xu et al., 2018; Peng et al., 2019a; Sun et al., 2019b). While multi-domain techniques focus on learning a single model that can generalize over multiple domains, domain adaptation techniques aim to efficiently transfer the representations that are learned in one dataset to another.
Related themes have also been studied in domain generalization (Li et al., 2018; 2019b;a; Gulrajani & Lopez-Paz, 2020) and continual learning (Kirkpatrick et al., 2017; Lopez-Paz & Ranzato, 2017; Riemer et al., 2019), where the focus lies on learning representations that can generalize to unseen domains, and to preserve knowledge acquired from previously seen tasks, respectively.
While there exists no canonical definition for what exactly a visual domain is, previous works in multi-domain learning assume that different subsets of data exist, with some defining characteristic that allows them to be separated from each other. Each subset, indexed by d = 1, . . . , D, is assigned to a pre-defined visual domain, and vice-versa multi-domain methods then use such domain associations to parameterize their representations and learn some pθ(y|x, d). In some cases domains are intuitive and their annotation straightforward. Consider a problem where images have little visual relationship, for example joint learning of Omniglot handwritten symbols (Lake et al., 2015) and CIFAR-10 objects (Krizhevsky & Hinton, 2009). In this case, it is safe to assume that encoding an explicit domain-specific identifier into pθ is a good idea, and results in the multi-domain literature provide clear evidence that it is beneficial to do so (Rebuffi et al., 2018; Liu et al., 2019a; Guo et al., 2019a; Mancini et al., 2020).
The assumption that domain labels are always available has been widely adopted in multi-domain learning; however this assumption is not without difficulty. For one, unless the process of domain annotation is automated due to combining existing datasets as in e.g. Rebuffi et al. (2017), their manual collection, curation, and domain labeling is very laborious.
And even if adequate resources exist, it is often difficult to decide the optimal criteria for the annotation of d: some datasets contain sketches, paintings and real world images (Li et al., 2017), others images captured during day or night (Sultani et al., 2018). Automatically collected datasets (Thomee et al., 2016; Sun et al., 2017) contain mixtures of low/high resolution images, taken with different cameras by amateurs/professionals. There is no obvious answer which of these should form their own distinct domain subset.
Moreover, the work of Bouchacourt et al. (2018) considers semantic groupings of data: they show that when dividing data by subcategories, such as size, shape, etc., and incorporating this information into the model, then this benefits performance. Should one therefore also encode the number of objects into domains, or their color, shape, and so on?
Given the relatively loose requirement that domains are supposed to be different while related in some sense (Pan & Yang, 2009), these examples hint at the difficulty of deciding whether domains are needed, and – if the answer to that is yes – what the optimal domain criteria are. And note that even if such assignments are made very carefully for some problem, nothing guarantees that they will transfer effectively to some other task.
This paper carefully investigates this ambiguity and studies two central questions:
1. Are domain labels always optimal for learning multi-domain representations? 2. How can models best be learned that generalize well over visually diverse domains, without
domain labels?
To study this problem, we introduce a new setting (c.f. Fig. 1) in which models are learned over multiple domains without domain annotations — latent domain learning for short.
While latent domain learning is a highly practical research problem in the context of transfer learning, it poses multiple challenges that have not been previously investigated in connection with deep visual representation learning. In particular, we find that the removal of domain associations leads to performance losses for standard architectures due to imbalances in the underlying distribution and different difficulty levels of the associated domain-level tasks.
We carry out a rigorous quantitative analysis that includes concepts from multi-domain learning (Rebuffi et al., 2018; Chang et al., 2018), and find that their performance benefits do not directly extend to latent domain learning. To account for this lost performance, we formulate a novel method called sparse latent adaptation (Section 3.2) which enables internal feature representations to dynamically adapt to instances from multiple domains in data, without requiring annotations for this. Moreover, we show that latent domain methods appear to benefit single domain data and real world tasks, such as fairness problems (Appendix F), and long-tailed recognition (Appendix G).
2 LATENT DOMAIN LEARNING
This section provides an overview over latent domain learning and contrasts it against other types of related learning problems, in particular multi-domain learning.
2.1 PROBLEM SETTING
When learning on multiple domains, the common assumption is that data is sampled i.i.d. from a mixture of distributions Pd with domain indices d = 1, . . . , D. Together, they constitute the datagenerating distribution as P = ∑ d πdPd, where each domain is associated with a relative share πd = Nd/N , with N the total number of samples, and Nd those belonging to the d’th domain. In multi-domain learning, domain labels are available for all samples (Nam & Han, 2016; Rebuffi et al., 2017; 2018; Bulat et al., 2019), such that the overall data available for learning consists of DMD = {(xi, di, yi)} with i = 1, . . . , N . In latent domain learning the information associating each sample xi with a domain di is not available. As such, domain-specific labels yi cannot be inferred from sample-domain pairs (xi, di) and one is instead forced to learn a single model fθ over the latent domain dataset DLD = {(xi, yi)}. While latent domain learning can include mutually exclusive classes and disjoint label spaces Y1 ∪ · · · ∪ YD (as in long-tailed recognition, see Appendix G), we mainly focus on the setting of shared label spaces, i.e. Yd = Yd′ . For example a dataset may contain images of dogs or elephants that can appear as either photos, paintings, or sketches.
Latent domains have previously attracted interest in the context of domain adaptation, where the lack of annotations was recovered through hierarchical Hoffman et al. (2012) and kernel-based clustering (Gong et al., 2013), via exemplar SVMs (Xu et al., 2014), or by measuring mutual information (Xiong et al., 2014). More recent work corrects batch statistics of domain adaptation layers using Gaussian mixtures (Mancini et al., 2018), or studies the shift from some source domain to a target distribution that contains multiple latent domains (Peng et al., 2019b; Matsuura & Harada, 2020). Latent domain learning however differs fundamentally from these works: Table 1 contains a comparison to existing transfer learning settings.
A common baseline in multi-domain learning is to finetune D models, one for each individual domain (Rebuffi et al., 2018; Liu et al., 2019a). This requires learning a large number of parameters and shares no parameters across domains, but can serve as a strong baseline to compare against. We show that in many cases, even when domains were carefully annotated, a dynamic latent domain approach can surpass the performance of such domain-supervised baselines (see Section 4).
2.2 OBSERVED VS. UNIFORM ACCURACY
Consider a problem in which the data is sampled i.i.d. from P = πaPda + πbPdb , i.e. two hidden domains. When domain labels are not available in the data, a standard strategy is to treat all samples equally, and measure the observed accuracy:
OAcc[f ] = E(xi,yi)∼P[1yf(xi)=yi ], (1)
where yf denotes the class assigned to sample xi by the model f , and yi its corresponding label for training. The OAcc has a problematic property: if P consists of two imbalanced domains such that πa ≥ πb, then the performance on da dominates it. For example if da has a 90% overall share, and the model perfectly classifies this domain while obtaining 0% accuracy on db, then OAcc would still assume 0.9, hiding the underlying damage to domain db.
This motivates alternative formulations for latent domain learning, to anticipate (and account for) imbalanced domains in data. If it is possible to identify some semantic domain labeling (as typically included in multi-domain/domain adaptation benchmarks), one can compare performances across individual subgroups. This allows picking up on domain-specific performance losses which traditional metrics (such as OAcc) fail to capture.
Where this is possible, we therefore propose to also measure latent domain performance in terms of uniform accuracy which decouples accuracies from relative ground-truth domain sizes:
UAcc[f ] = 1
D D∑ d=1 E(xi,yi)∼Pd [1yf (xi)=yi ]. (2)
Returning to the above example, a uniform measurement reflects the model’s lack of performance on db as UAcc = 0.5. Once again note while ground-truth domain annotations are required in order to compute uniform accuracy, these are never used to train latent domain models.
3 METHODS
To enable robust learning in the new proposed setting, we formulate a novel module called sparse latent adaptation which can adaptively account for latent domains. Section 3.1 reviews adaptation strategies popular in the multi-domain context, which our method extends (and generalizes).
3.1 LATENT ADAPTATION
When domain labels d are available (not the case in latent domain learning) one strategy established by Rebuffi et al. (2017) is to modulate networks by constraining the layerwise transformation of residual networks (He et al., 2016) Φ(x) = x + f(x) to allow at most a linear change Vd per each domain from some pretrained mapping Φ0 (with f0 in every layer), whereby Φ(x)−Φ0(x) = Vdx. Note the slight abuse of notation here in letting x denote a feature map with channels C. Rearranging this yields:
Φ(x, d) = x+ f0(x) + D∑ d=1 gdVd(x), (3)
with a domain-supervised switch that assigns corrections to domains, i.e. gd = 1 for d associated with x and 0 otherwise. Each Vd is parametrized through 1x1 convolutions, and f0 denotes a shared 3x3 convolution obtained e.g. on ImageNet (Deng et al., 2009). This builds on the assumption that models with strong general-purpose representations require minimal changes to adapt to new tasks (Bilen & Vedaldi, 2017), making learning each Vd sufficient, while f0 remains as is. Such adaptation strategies have been successfully used in few shot learning (Li et al., 2021) and NLP (Stickland & Murray, 2019) to restrict the number of learnable parameters there.
In latent domain learning access to d is removed, resulting in two new challenges: we have no a priori information about the right number of corrections {Vd}, and we cannot use d to decide which one of these to apply.
To mitigate the lack of domain labels d, first we assume that input data is constituted by K latent distributions Pk. Second we propose to replace the switch gd with a learnable gating mechanism g1(x), . . . , gK(x) that assigns each sample x to latent domains as follows:
Φ(x) = x+ f0(x) + K∑ k=1 gk(x)Vk(x), (4)
The gates gk control which convolution is applied to which sample x, and correspond to a categorical variable over K categories, i.e. 0 ≤ gk ≤ 1 and ∑ k gk = 1. Note in particular how parametric dependency of Φ on d is removed. How to best choose K is discussed in more detail in Section 4.
While we motivate our latent domain module from learning over multiple domains, the main goal is not to recover the domain labels annotated in some datasets. When optimizing some loss (standard cross-entropy in the classification case), there is no guarantee that the learned Vk will correspond to an annotated visual domain and many additional factors (shape, pose, color, etc.) can enter them as
well. Latent domain models are simply optimized to produce the lowest training error, and in fact seldom recover ground-truth domains (c.f. Fig. 5). Note the broader concept presented here may in principle also be incorporated with other multi-task concepts (Perez et al., 2018; Guo et al., 2019a), adaptation strategies however stand out due to their methodological simplicity.
Different options exist for parametrizing the gating function g : X → G ⊆ RK . An ideal gating mechanism for latent domain learning would fulfill two seemingly incompatible properties: be able to filter latent domains in some layers (requiring a discrete gate), but also share parameters between related latent domains in other layers (smooth gates). The next section proposes how this can be resolved without requiring task relationships (Vandenhende et al., 2020) or outer optimization loops (Wu et al., 2018) through the use of sparseness.
3.2 SPARSE LATENT ADAPTERS (SLA)
We parameterize the gating function g with a small linear transformation W : C → RK that constitutes the pre-activation q=Wϕ(x) within the gates, where ϕ : X → C denotes an average pooling projection onto the channels.
A crucial choice is whether the activation for q ∈ RK should map to a discrete space G = {0, 1}K or a continuous G = [0, 1]K in which the Vk are shared. We propose a different strategy that lets gates be smooth when appropriate, but a threshold τ allows for sparse (or discrete) outputs fτ (q) = [q − τ(q)]+ with [·]+ = max(0, ·). Crucially fτ can be solved in a differentiable manner (Martins & Astudillo, 2016) by sorting q1 ≥ · · · ≥ qK , solving k∗ = max{k | 1 + kqk > ∑ j≤k qj} and computing τ = [( ∑ j≤k∗ qj)− 1]/k∗.
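A self-contained sketch of this thresholding (the sparsemax activation of Martins & Astudillo, 2016) for a single pre-activation vector is given below; it reproduces the numerical example discussed next.

```python
import torch

def sparsemax(q: torch.Tensor) -> torch.Tensor:
    """Sparse activation f_tau(q) = [q - tau(q)]_+ for a 1-D pre-activation q of length K."""
    q_sorted, _ = torch.sort(q, descending=True)
    k = torch.arange(1, q.numel() + 1, dtype=q.dtype)
    cumsum = torch.cumsum(q_sorted, dim=0)
    support = 1 + k * q_sorted > cumsum            # k* is the largest k satisfying this
    k_star = support.nonzero().max() + 1
    tau = (cumsum[k_star - 1] - 1) / k_star
    return torch.clamp(q - tau, min=0.0)

print(sparsemax(torch.tensor([0.1, 1.0, 0.5])))    # -> tensor([0.0000, 0.7500, 0.2500])
```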
Consider q = [0.1, 1.0, 0.5] for which sparse activation results in fτ (q) = [0.0, 0.75, 0.25] while softmax yields [0.202, 0.497, 0.301]. Sparse activation filters out q1, while sharing between q2 and q3. We may now define:
\mathrm{SLA}(x) \triangleq x + f_0(x) + \sum_{k=1}^{K} \big[f_\tau \circ W \circ \varphi(x)\big]_k V_k(x), \qquad (5)
where [·]k picks the k’th element of the gating sequence. To the best of our knowledge sparse activation strategies were never previously employed for expert models in computer vision and have so far been restricted to the NLP setting (Deng et al., 2017; Peters et al., 2019). Note SLA generalizes residual adaption (Rebuffi et al., 2017; 2018), which is recovered by setting K= 1.
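Putting eq. (5) together, a minimal PyTorch sketch of an SLA layer is shown below, reusing the sparsemax helper from the previous snippet. This is an illustration of the mechanism rather than the released implementation: batch normalization, non-linearities, and the exact placement inside the ResNet26 blocks are omitted, and f0 is assumed to be a padded 3x3 convolution with matching channel count.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SLA(nn.Module):
    """Sketch of eq. (5): a frozen 3x3 convolution f0 plus K gated 1x1 corrections V_k,
    with per-sample gates produced by pooling (phi), a linear map W, and sparsemax."""

    def __init__(self, f0: nn.Conv2d, channels: int, K: int = 2):
        super().__init__()
        self.f0 = f0
        for p in self.f0.parameters():          # the pretrained backbone stays fixed
            p.requires_grad = False
        self.corrections = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=1, bias=False) for _ in range(K)])
        self.W = nn.Linear(channels, K)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.W(F.adaptive_avg_pool2d(x, 1).flatten(1))    # phi(x) then W: (B, K)
        g = torch.stack([sparsemax(q_i) for q_i in q])        # sparse gates per sample
        out = x + self.f0(x)
        for k, V_k in enumerate(self.corrections):
            out = out + g[:, k].view(-1, 1, 1, 1) * V_k(x)
        return out
```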
While gating is subject to complex interactions such as negative transfer (Rosenbaum et al., 2019), our ablations in Table 5 clearly show that taking a sparse perspective – which allows the model to assume either continuous or discrete forms – outperforms the alternative of a priori fixing either smoothness through self-attention (Lin et al., 2017b), or discrete Gumbel-based sampling (Jang et al., 2016). Note this choice between discrete (Veit & Belongie, 2018; Guo et al., 2019b) and continuous mechanisms (Shazeer et al., 2017; Sun et al., 2019a; Wang et al., 2019) delineates previous work that employs differentiable gates.
A softmax-activated model can in principle also learn to suppress individual preactivation components by letting some qk go to −∞. This however requires either learning extra calibration parameters at every layer, defining a hard cutoff value (Shazeer et al., 2017) (thereby removing differentiability), or very large row-norms within the linear mapping W— a highly unlikely outcome given the several mechanisms found in state-of-the-art models (in particular weight decay, norm-penalties, or BN (Ioffe & Szegedy, 2015)) which act as direct counterforces to this.
4 EXPERIMENTS
We evaluate our proposed methods on three latent domain benchmarks: Office-Home, PACS, and DomainNet (c.f. Fig. 6, which shows example images from these benchmarks). The main goal here is not to compare to existing multi-domain or domain adaptation methods that these datasets were initially designed for, but to study our two central research questions: whether domain labels are useful for effectively learning over multiple domains, and whether one can learn such representations without domain labels.
We also examine a recent fairness benchmark (see Appendix F), and show that SLA improves robustness under single domain long-tailed distributions (Appendix G). All experiments were implemented in PyTorch (Paszke et al., 2017).1
Optimization In all experiments, we couple our method with a ResNet26 model pretrained on a downsized version of ImageNet that was used in previous work by Rebuffi et al. (2018). In SLA only gates and corrections are learned, the residual backbone f0 remains fixed at its initial parameters, which implicitly regularizes the model (Rebuffi et al., 2017). Training is carried out for 120 epochs using stochastic gradient descent (momentum parameter of 0.9), batch size of 128, weight decay of 10−4, and an initial learning rate of 0.1 (reduced by 1/10 at epochs 80, 100).
All experiments follow the preprocessing of Rebuffi et al. (2017; 2018), alongside standard augmentations such as normalization, random cropping, etc. Accuracies are averaged over five seeds.
Increasing the number of corrections K within SLA results in small, consistent performance gains. As K = 2 already represents a solid boost from the baseline of having no adapters, we focus on this result in the main part, and report results for higher K alongside variances in Appendix C.
Office-Home The underlying data contains a variety of object classes (alarm clock, backpack, etc.) among four domains: art, clipart, product, and real world (Venkateswara et al., 2017). In Table 2 we show results for d-supervised multi-domain (MD) approaches: RA (Rebuffi et al., 2018), domain-adversarial learning (Ganin et al., 2016) and a baseline of 4×ResNet26, one for each domain. For latent domain (LD) baselines, we then learn a single ResNet26, this time as a latent domain model over all domains. Next, we couple SLA with the very same ResNet26.
Learning a single ResNet26 over latent domains with no access to d-labels significantly harms performance. This problem is not addressed by simply increasing the depth of the network: while accuracy improves slightly, a ResNet56 exhibits the same performance losses — in particular on the latent domains product (P) and real world (R).
While residual adaptation (RA) (Rebuffi et al., 2018) was shown to work extremely well in many multi-domain scenarios, performance here is sub-par, regardless of whether it accesses d (MD: one Vd per-domain) or not (LD). This likely results from linear modules being reserved for each d when using annotations, enabling no native cross-domain sharing of parameters. When d is hidden on the other hand, the model is forced to share a single linear adaptation module V between all four hidden domains, without the flexible gating we propose in SLA.
Learning annotations through latent domain clustering and coupling this with domain-adversarial gradient reversal as in MMLD (Matsuura & Harada, 2020) increases performance relative to its d-annotated counterpart (Ganin et al., 2016). The increase is modest however, likely because enforcing domain-invariance on the gradient level negatively impacts the model's ability to discriminate between classes (Wang et al., 2020). Another related baseline is MLFN (Chang et al., 2018) which builds on ResNeXt (Xie et al., 2017) to define a latent-factor architecture that accounts for multi-
1Code is available at github.com/VICO-UoE/LatentDomainLearning.
modality in data. Crucially where our method is fine-grained and shares convolutions at every layer, MLFN instead enables and disables entire network blocks, allowing us to outperform it.
SLA outperforms the currently available latent domain models by a consistent margin, and increases UAcc by 12.79% relative to ResNet26. Best performance is obtained when K = D, with performance decreasing slightly for K > D due to overfitting on larger domains (see Appendix C).
PACS The second experiment examines performance on the PACS dataset (Li et al., 2017). Crucially PACS domains (art, cartoon, photo, sketch) differ more markedly from one another (c.f. examples in Fig. 6), hence constituting an interesting latent domain problem.
Even for more distinct domains as in PACS, results in Table 3 show that SLA improves over existing baselines. The largest gains occur on smaller domains (e.g. art), where standard models suppress underrepresented parts of the distribution (see additional discussion on imbalanced distributions in Appendix G). Our method again surpasses the accuracy of 4×ResNet26, while requiring a fraction of the total parameters (∼ 9.7 mil for K = 5 vs. ∼ 24.8 mil). The performance of SLA again continues to increase with larger K (see Appendix C).
The performance increase from using a latent domain-adversarial approach (Matsuura & Harada, 2020) versus using domain-annotations (Ganin et al., 2016) confirms that learning domains alongside the rest of the network can be a better strategy than trusting in annotations. Our approach again improves over this, without requiring a clustering stage as in MMLD.
Results for k-means (using D = 4 centers and clustered on the feature level) and subsequent finetuning show that a two-stage strategy is suboptimal. This is not surprising since, similar to d-supervision via gd in Φ of eq. (3), clustering learns fixed switches that get used across all layers. In contrast to this, in SLA we flexibly share or separate features individually at every layer (c.f. qualitative results in Fig. 3), synergizing only where appropriate.
DomainNet We also evaluate models on a large-scale benchmark called DomainNet (Peng et al., 2019a). This dataset contains 518 447 images from six domains (clipart, painting, photos, sketch, infographics, and quickdraw), with a total of |Y| = 345 object classes. The optimization settings remain unchanged from those in previous sections.
Results are shown in Table 4. MLFN performs best on quickdraw, a domain that differs visibly from others (c.f. Fig. 6 for examples from each domain), and having entire network blocks dedicated to it seems to benefit performance. On all remaining domains, SLA outperforms existing models,
regardless of whether they were designed specifically for multi-domain problems, such as RA, or whether they are much deeper/parameter-intensive (ResNet56).
Qualitative analysis We (i) compare global statistics of Office-Home and PACS domains as well as (ii) their per-layer treatment within SLA; (iii) analyze sparse gating, (iv) representations learned by SLA, and show that (v) our module shares between geometric properties (shape, pose, etc.).
i) Fig. 2: average cosine similarities of per-domain gating vectors g∈GL across l= 1, . . . , L layers of ResNet26 show that Office-Home domains differ less than those in PACS.
ii) Fig. 3: layerwise measurements of Corr[gl(x), gl(x′)] for x, x′ drawn from differing d ≠ d′ for Office-Home. If inter-domain correlation is high, then similar corrections Vk are responsible for processing samples from two domains. Across top layers of the network there is little correlation, presumably as low-level information associated with each domain is processed independently. In the mid to bottom stages correlation increases: these layers are typically associated with higher-order features (Yosinski et al., 2014; Mahendran & Vedaldi, 2016; Asano et al., 2020), and since label spaces are shared between latent domains, similar object-level features are required to classify objects into their respective categories.
iii) Fig. 4: sparse gates have the flexibility to either output singular activations (i.e. become fully discrete) or all non-zero values (a continuous gate). We measure the per-layer sparsity Ex∼Pd [K− ‖gl(x)‖0]/(K − 1) where ‖ · ‖0 counts values different from zero, finding sparsity of SLA to vary across model depth. Interestingly after each downsampling operation SLA tends to be relatively sparse, followed by a dense gate, then again a sparse one, and so forth. The model thus utilizes the extra flexibility resulting from sparse gates.
Due to PACS domains being relatively distinctive, the dataset is an interesting candidate for additional analysis in (iv) and (v) of how sparse adaptation handles the different ground-truth domains.
iv) Fig. 5 (left): gate vectors g ∈ GL for samples from all four domains in PACS visualized by their principal components. SLA exhibits an intuitive clustering of human-annotated PACS domains: visually similar art and photo (•,•) cluster together. The manifold describing sketches (•) is arguably more primitive than those of the other domains, and indeed only maps to a small region. Cartoon (•) lies somewhere between sketches and real world images. This matches intuition: a cartoon is, more or less, just a colored sketch.
Fig. 5 also highlights one sample that shows an elephant that SLA places among the cartoon (•) domain, but which has been assigned a ground-truth domain label of photo (•) in the PACS dataset. The ground-truth label seems to have been annotated in error, but different from approaches that use d-supervision, our SLA processes latent domains on-the-fly and is therefore not irritated by this.
v) Fig. 5 (right): pairs of samples with similar gates. This shows that latent domains are indicative of more than ground-truth domain labels and extend to geometric similarities: pose, color, etc. of the samples are visibly related. Compare in particular the poses of elephants/dogs (second/third row).
5 CONCLUSION
In this paper we explored two questions: (i) whether domain associations are required for learning effective models over multiple visual domains and (ii) how multi-domain models may best be learned without depending on manually curated domain labels.
As has been shown, the performance of existing models does degrade without domain labels, raising doubts about their suitability for realistic problems that involve diverse data sources. As a remedy, we proposed a novel adaptation strategy which reclaims (and often exceeds) lost accuracy on latent domains, benefiting several problems where some notion of a domain (but no annotation) exists.
ACKNOWLEDGEMENT
HB is supported by the EPSRC programme grant Visual AI EP/T028572/1. TH was supported by EPSRC grant EP/R026173/1.
A DATASETS
Fig. 6 shows examples from the latent domain benchmarks evaluated in Section 4. The selected images have equivalent classes yd = yd′ ∈ Y (for example chair for Office-Home), but different domains (e.g. d = {art, clipart, product, real world}). These examples show that data from different domains often contain very different visual characteristics (compare e.g. photo vs. sketch for PACS), even when the object is the same. At the same time, other domains are more alike (e.g. art and photo), indicating that different amounts of sharing between per-domain parameters are required, which in SLA is facilitated by its gating mechanism.
B RELATED WORK
Multi-domain learning relates most closely to our work. The state-of-the-art methods introduce small convolutional corrections in residual networks to account for individual domains (Rebuffi et al., 2017; 2018), which was recently extended to obtain efficient multi-task models for related language tasks Stickland & Murray (2019). Other work makes use of task-specific attention mechanisms (Liu et al., 2019a), attempts to scale task-specific losses (Kendall et al., 2018), or addresses tasks at the level of gradients (Chen et al., 2017). Crucially, these approaches all rely firmly on domain labels.
Our work is loosely related to learning universal representations (Bilen & Vedaldi, 2017), which was used as a guiding principle in designing more transferable models (Tamaazousti et al., 2019). However, these works also assume the presence of domain labels. Multimodal learning does not make this assumption, and was shown to benefit from accounting for latent semantic factors to match images (Chang et al., 2018), or from normalizing data in separate groups (Deecke et al., 2019). As we show in our experiments (see Section 4), latent domain learning however benefits from more customized solutions than these.
The proposed module gives rise to a differentiable dynamic network architecture, studied e.g. for reinforcement learning (Zoph & Le, 2017; Pham et al., 2018), Bayesian optimization (Kandasamy et al., 2018), or when adapting to new tasks (Mallya et al., 2018; Rosenfeld & Tsotsos, 2018). For such architectures, two components are commonly used: discrete Gumbel-based sampling (Jang et al., 2016), e.g. leveraged in dynamic computer vision architectures (Veit & Belongie, 2018; Sun et al., 2019a), or continuous self-attentive approaches (Lin et al., 2017b), which have been used successfully to scale expert models (Jacobs et al., 1991; Jordan & Jacobs, 1994) to large problem spaces (Shazeer et al., 2017; Wang et al., 2019).
From the perspective of algorithmic fairness, a desirable model property is to ensure consistent predictive equality across different identifiable subgroups in data (Zemel et al., 2013; Hardt et al., 2016; Fish et al., 2016). This relates to one of the goals in latent domain learning: to limit implicit model bias towards large domains, and improve robustness on small domains. Recent work explores connections between models and empirical fairness for visual recognition (Bagdasaryan et al., 2019; Hooker et al., 2020; Wang et al., 2020), different from our experiments however (see Appendix F) they focus their analysis on a setting in which annotations for protected attributes are available.
C VARIATION OF RESULTS
Fig. 7 displays variances of accuracies recorded over ten random initializations on Office-Home (left) and PACS (right). We generally found SLA to be robust to different optimization settings, and as a result observed variances are relatively low across experiments.
Larger K brings an improvement of around 0.5-1% in performance at the expense of a linear increase in learnable parameters (c.f. next section). While accuracy is improved by setting K > 2, gains appear to saturate in line with previous observations around network width (Xie et al., 2017).
D MEMORY REQUIREMENTS
In SLA every layer contains O(K|C| + K|C|²) parameters to parametrize gates and corrections Vk, respectively. This is however an extremely modest requirement, in particular because f0 stays fixed: while a ResNet26 contains ∼ 6.2 mil learnable parameters, even when setting K = 5 within SLA it has just 3.5 mil free parameters, which is a fraction of the number of parameters needed to parametrize four ResNet26 (around 24.8 mil parameters).
Note also that the complexity of solving sparse gates in SLA scales as O(K logK), a negligible increase given the small K required in our method.
E ABLATION
Replacing the sparse gating within SLA with either a smooth or a discrete mechanism results in a drop in performance. Accuracies for soft and straight-through Gumbel-softmax sampling (Jang et al., 2016) were on par; we report straight-through sampling here.
We also ran experiments where we did not fix the residual backbone f0 but updated its parameters alongside the learning of SLA. In line with what Rebuffi et al. (2017) report, this led to overfitting and performance dropped to UAcc = 73.53.
F FAIRNESS
Recent work elevated the role of small subgroups in data and examined model fairness on CelebA (Bagdasaryan et al., 2019; Wang et al., 2020; Hooker et al., 2020). Because such subgroups may be interpreted as constituting an individual latent domain component Pd, they are an interesting candidate to evaluate our purpose-built SLA on.
The benchmark contains different labeled attributes (e.g. “brown hair”, “glasses”), and is modified from the original dataset by hiding gender labels. Models are evaluated on all 39 remaining
Table 6: Average precision and bias amplification of SLA on the CelebA fair attribute recognition benchmark (Wang et al., 2020).
           ResNet18  + SLA           ResNet34  + SLA           ResNet50  + SLA
mAP (↑)    71.76     73.22 (+1.46)   71.33     73.98 (+2.65)   74.52     75.03 (+0.51)
BA (↓)     0.025     0.014           0.022     0.009           0.012     0.008
Figure 8: Change in AP between ResNet18 and ResNet18-SLA for different gender skews in CelebA attributes (x-axis: skew, 0.5 to 1.0; y-axis: change in AP [%]).
attributes, which subsequently experience varying amounts of gender skew. Framed as a latent domain problem we have d={female,male}, but models have no access to this information. The images used are the entire Aligned&Cropped subset (Liu et al., 2015) over which we finetune residual models, replacing only the fully-connected layer of the network. We use the optimization settings introduced in Section 4 for 70 epochs with reductions at epochs 30, 40, and 50, selecting the best model on the validation split. This experimental setup is identical to previous work on empirical fairness (Wang et al., 2020; Ramaswamy et al., 2020), which however – different from our work – focused on learning models that have access to the gender-attribute d.
We evaluate per-attribute accuracy using mean average precision (mAP) and report bias amplification (BA) (Zhao et al., 2017). This compares the propensity of a model to make positive predictions (i.e. f exceeds some threshold t^+ ∈ [0, 1]) in the gender g^*_y that appears most frequently within attribute y, to the true counted ratio of positive examples y^+:

$$\mathrm{BA}[f] \;=\; \frac{\mathbb{E}_{x\sim P_x}\!\left[\mathbb{1}_{f(x)>t^+ \,\mid\, g^*_y}\right]}{\mathbb{E}_{x\sim P_x}\!\left[\mathbb{1}_{f(x)>t^+}\right]} \;-\; \frac{\mathbb{E}_{y\sim P_y}\!\left[\mathbb{1}_{y=y^+ \,\mid\, g^*_y}\right]}{\mathbb{E}_{y\sim P_y}\!\left[\mathbb{1}_{y=y^+}\right]}, \qquad (6)$$
where t^+ is optimized on the validation split. For example, if 60% of male examples are wearing glasses but under the model this is raised to a total of 65%, then bias is amplified by BA = 0.05.
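To make the metric concrete, here is a minimal sketch that computes BA for a single attribute following the worked example above (positive-prediction rate within the majority gender minus the true positive rate within that gender); the array names, the 0/1 gender encoding, and this particular reading of Eq. (6) are assumptions for illustration.

```python
import numpy as np

def bias_amplification(scores, labels, genders, t_plus):
    """Bias amplification for one attribute. `scores` are model outputs f(x),
    `labels` are 0/1 ground-truth attribute values, `genders` are 0/1 group
    ids; all are 1-D arrays over the evaluation examples."""
    scores, labels, genders = map(np.asarray, (scores, labels, genders))
    positives = labels == 1
    # g*_y: the gender that appears most frequently among positive examples
    g_star = np.bincount(genders[positives]).argmax()
    in_g_star = genders == g_star
    model_rate = np.mean(scores[in_g_star] > t_plus)  # e.g. 0.65 under the model
    true_rate = np.mean(labels[in_g_star] == 1)       # e.g. 0.60 in the data
    return model_rate - true_rate                     # e.g. BA = 0.05
```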
We report performance for ResNet18, ResNet34, and ResNet50 in Table 6 and compare each to the same model with SLA inserted. SLA consistently raises mAP and reduces bias, indicating that it relies less on spurious correlations in the data to formulate its predictions.
In Fig. 8 we compare per-attribute skew toward either female or male (whichever is more frequent) to the gain in performance from ResNet18 to the same model but with SLA inserted. We observe a clear trend here, whereby SLA is able to raise performance the most in those attributes that experience the largest amounts of skew.
G LONG-TAILED RECOGNITION
Standard models often experience difficulty when some classes are heavily underrepresented. This problem has recently been studied in long-tailed recognition (Liu et al., 2019b; Cao et al., 2019) with
benchmarks that modify CIFAR-10 and CIFAR-100 into imbalanced versions by dropping examples from some of the classes (e.g. classes 6-10 for CIFAR-10) (Buda et al., 2018). The severity of the imbalance is described via the ratio ρ = n_max/n_min between the largest and smallest classes.
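A minimal sketch of how such a step-imbalanced split can be constructed for a given ratio ρ; which classes are reduced and the exact subsampling scheme vary between benchmarks, so both are illustrative assumptions here.

```python
import numpy as np

def step_imbalance_indices(labels, reduced_classes, rho):
    """Subsample examples of `reduced_classes` so that the largest-to-smallest
    class ratio equals rho. `labels` is a 1-D integer array of class ids."""
    labels = np.asarray(labels)
    n_max = np.bincount(labels).max()
    n_min = max(1, int(round(n_max / rho)))
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if c in reduced_classes:
            idx = idx[:n_min]  # keep only n_min examples for the reduced classes
        keep.append(idx)
    return np.concatenate(keep)

# Example: reduce the last five CIFAR-10 classes at an imbalance of rho = 100.
# subset = step_imbalance_indices(train_labels, reduced_classes={5, 6, 7, 8, 9}, rho=100)
```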
Long-tailed distributions may be viewed as containing an underrepresented latent component with π = 1/(1 + ρ), and previous results (cf. Section 4), in which SLA fortified small latent domains within P, motivate us to evaluate the imbalance setting more closely here.
Since our strategy is architecture-based, it can be combined with the most recent state-of-the-art (loss-based) techniques for long-tailed recognition: a label-distribution-aware margin loss with deferred reweighting (Cao et al., 2019), or reducing contributions from well-classified examples as in focal losses (Lin et al., 2017a). As Table 7 shows, adaptation via sparse gates acts as a regularizer on the underlying ResNet26, and consistently improves performance on long-tail benchmarks. | 1. What is the focus and contribution of the paper on latent domain learning?
2. What are the strengths of the proposed approach, particularly in terms of performance and problem setting?
3. What are the weaknesses of the paper, especially regarding the quality of writing and novelty of the proposed method?
4. Do you have any concerns about the experimental results and comparisons with previous works?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces a new task called latent domain learning and proposes a baseline that extends an existing multi-domain learning method. The latent domain learning assumes that training data are sampled from different domains yet their domain labels are latent, and aims at learning models that generalize well to the domains. The proposed baseline deploys multiple parallel feature transform layers that are chosen on the fly through gating variables, with the hope that the gating variables learn to predict the latent domain label of input and choose feature transforms of the domain accordingly. This model demonstrates superior performance to existing multi-domain learning methods in the latent domain learning setting.
I recognize that latent domain learning is an interesting problem with great potential, and has many applications such as learning from web-crawled images. However, the proposed method looks limited in terms of novelty, and the quality of writing is below standard.
Review
[Strengths]
A new and interesting problem setting with practical values: The new task introduced in this paper, i.e., latent domain learning, looks like a tweak of existing problems such as multi-domain learning at first glance, but is interesting and has great potential as it could be practically useful when learning and testing models with data from heterogeneous domains (e.g., web images crawled by predefined search keywords). I also agree that the definition of visual domains is often vague and contrived, thus believe the motivations of latent domain learning make sense.
Strong performance: The proposed method outperforms multi-domain learning methods in the latent domain learning setting (although such a result is to be expected, the gap seems not large enough, and some scores of previous work look abnormal).
[Weaknesses]
Weak quality of writing: The manuscript is overall readable but sometimes hard to follow, probably due to its weird terminology and expressions. Also,
U^tilde in Table 1 is not defined.
It is unclear why Eq. (2) is called “uniform” accuracy.
The description for Eq. (3) is overly complicated.
The operation denoted by stars is not defined appropriately; one may guess that it stands for correlation, though.
Limited novelty: The proposed model SLA looks like an extension of the residual adapter (Rebuffi et al., 2018), dubbed RA in the paper. Both RA and SLA adopt and modify the well-known residual connection by adding domain-specific feature transforms. The main difference between them is that, to choose appropriate feature transforms, RA utilizes domain labels explicitly while SLA estimates the latent domain label of the input through gating variables. The use of gating variables could be counted as a contribution, but they are not a new idea, as they have been widely used in other fields of machine learning. Also, I would note that some objections against RA in the paper look invalid, and accordingly the contribution of this paper is diminished. As far as I understand, the residual adaptation method (Rebuffi et al., 2018) aims at designing models that maximize parameter sharing across different domains; only a small number of parameters of their residual adaptation modules are learned in a domain-specific manner, while the majority of their parameters are shared across domains.
Experiments: Compared to RA, the improvement by the proposed method does not look sufficiently large. Also, the reported results for RA are odd: I wonder why its score degrades when using domain labels, and how it is trained without domain labels in the latent domain learning setting.
ICLR | Title
Improving Exploration of Deep Reinforcement Learning using Planning for Policy Search
Abstract
Most Deep Reinforcement Learning methods perform local search and therefore are prone to get stuck on non-optimal solutions. Furthermore, in simulation based training, such as domain-randomized simulation training, the availability of a simulation model is not exploited, which potentially decreases efficiency. To overcome issues of local search and exploit access to simulation models, we propose the use of kinodynamic planning methods as part of a model-based reinforcement learning method and to learn in an off-policy fashion from solved planning instances. We show that, even on a simple toy domain, D-RL methods (DDPG, PPO, SAC) are not immune to local optima and require additional exploration mechanisms. We show that our planning method exhibits a better state space coverage, collects data that allows for better policies than D-RL methods without additional exploration mechanisms and that starting from the planner data and performing additional training results in as good as or better policies than vanilla D-RL methods, while also creating data that is more fit for re-use in modified tasks.
N/A
Most Deep Reinforcement Learning methods perform local search and therefore are prone to get stuck on non-optimal solutions. Furthermore, in simulation based training, such as domain-randomized simulation training, the availability of a simulation model is not exploited, which potentially decreases efficiency. To overcome issues of local search and exploit access to simulation models, we propose the use of kinodynamic planning methods as part of a model-based reinforcement learning method and to learn in an off-policy fashion from solved planning instances. We show that, even on a simple toy domain, D-RL methods (DDPG, PPO, SAC) are not immune to local optima and require additional exploration mechanisms. We show that our planning method exhibits a better state space coverage, collects data that allows for better policies than D-RL methods without additional exploration mechanisms and that starting from the planner data and performing additional training results in as good as or better policies than vanilla D-RL methods, while also creating data that is more fit for re-use in modified tasks.
1 INTRODUCTION
Robots in human-centric environments are confronted with less structured, more varied and more quickly changing situations than in typical automated manufacturing environments. Research in autonomous robots addresses these challenges using modern machine learning methods. However, learning and trying out actions directly on a real robot is time-consuming and potentially dangerous to the environment as well as to the robot. In contrast, physically-based simulation provides the benefit of faster, cheaper, and safer ways for robot learning.
If simulation models are available, they can be used by sampling-based planning methods that are able to directly plan robot behaviour using these models. However, the time required to perform planning can make this intractable for execution.
Finding policies that directly map from the current state to the next applicable action eliminates the need for planning. While Deep-Reinforcement Learning (D-RL) has shown promising results, for example those by OpenAI et al. (2018), D-RL training can be tedious and resource demanding. Plappert et al. (2017) report problems on the HalfCheetah environment where the algorithms converge to a local optimum corresponding to the cheetah wiggling on its back. They alleviated this problem by a different exploration scheme.
In preliminary experiments (not included in this paper) we found similar problems: D-RL algorithms were not able to learn a pushing task with a simulated 7-DoF robot arm. The algorithms we used were Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015) and Proximal Policy Optimization (PPO) (Schulman et al., 2017) (from OpenAI Baselines by Dhariwal et al. (2017)).
The algorithms were also not reaching relevant parts of the state space. Consequently, and in line with the findings of Plappert et al. (2017) we assume that part of the problem of failing to learn good policies is related to insufficient exploration. To remedy this problem, one might increase search time while keeping exploration noise high, or use more principled exploration. While increasing search time will in the limit also yield acceptable solutions, directed exploration appears more promising to find good solutions more reliably and in less time.
We thus focus on the latter approach, as covering a more diverse area of the state space increases the chances of finding an optimal solution, and moving away from random or exhaustive search reduces the number of samples required to learn a good policy.
Model-based methods can use their models of the task in an efficient way to plan over multiple steps and explore the state space in a more directed way. Given an accurate model, optimal policies can be produced without interacting with the world and thus with fewer samples (Hester & Stone, 2012). In particular, Rapidly Exploring Random Trees (RRTs) are planning methods that focus on maximizing state-space exploration.
We propose to take advantage of the benefits the aforementioned planning methods provide while tackling the problem of planning time by synthesizing the planning results into a policy. This essentially makes the proposed method a model-based method (Sutton & Barto, 2018). We will refer to this method as Planning for Policy Search (PPS). This is of particular interest in domain-randomized training, where simulation models are always available, to increase the data efficiency of exploration.
Here we investigate a preliminary version of this method that combines planning and policy search but does not perform randomizations yet. In particular, we investigate the following questions:
Q1 How do the data generated by RRT compare to those from D-RL methods? Do they cover a larger area of the state space? Do the reward distributions differ?
Q2 Are PPS methods less susceptible to local optima than D-RL methods?
Q3 Can the data collected by PPS be reused more easily?
The experimental setup used to investigate these questions is described in Figure 1 . In a simulated environment, the planner and reinforcement learning agent are run – each separately – to generate environment interactions. In the case of the reinforcement learning agent a policy is learned, and its return is evaluated (Q2, Sec. 4.2 ). In both cases, the collected data are stored as a dataset. In a second step, these datasets are analyzed with respect to their state-space coverage (Q1, Sec. 4.1 ). Then the datasets are used to train an RL agent in an off-policy fashion. The returns of this agent’s policy are again evaluated (Q1, Sec. 4.1 ). In a further experiment an agent is trained partially from these datasets and partially from experience it generates (Q3, Sec. 4.3 ).
2 RELATED WORK
Using physically-based simulations for learning is limited by the necessity to approximate physical phenomena, causing discrepancies between simulated and real world results. This difference is called the reality gap and is a well-known problem in various fields of robotics. An important approach to cross the gap is Domain randomization (Tobin et al., 2017; Sadeghi & Levine, 2017;
James et al., 2017): instead of one simulated environment, learning is done using a distribution of models with varying properties – such as for example mass, friction, shape, position, force/torque noise, etc. The idea is to make the behavior policies learned by the reinforcement learning process more robust to the differences within this distribution, thereby increasing robustness against the difference between the training distribution and the target domain, i.e. against the reality gap.
The work from OpenAI has shown a successful use of domain randomization for learning in-hand manipulation; however, the number of required training steps is raised by a factor of 33 (OpenAI et al., 2018) when domain randomizations are introduced. This increases the number of training steps from a magnitude of about 1.2·10^9 to about 3.9·10^10, whereas classical deep reinforcement learning approaches typically require 10^5 to 10^9 simulation steps, and many algorithms are tested on 10^6 timesteps, depending on the environment. The required amount of training data can make this method prohibitively expensive, and typically the availability of a simulation model is not exploited.
Improving the efficiency of domain randomization is an active topic of research, for example by using adversarial randomizations (Mandlekar et al., 2017) or limiting the training to stop before overfitting to idiosyncrasies of the simulation (Muratore et al., 2018). There are also reinforcement learning methods that are more sample-efficient such as guided policy search (Levine & Koltun, 2013) which is a model-based deep reinforcement learning method. In Guided Policy Search (GPS), rollouts from a deep neural network controller are optimised by an optimal control method such as the iterative Linear Quadratic Regulator (iLQR) (Todorov & Weiwei Li, 2005; Tassa et al., 2012) method. However, guided policy search is, depending on the task, usually initialized from demonstrations since the exploration capabilities of the underlying optimization method (iLQR) are limited. Furthermore, the optimization method requires an applicable, engineered cost function which is able to guide the search procedure towards relevant solutions.
The benefits of combining a model-based method with model-free reinforcement learning have been highlighted in Renaudo et al. (2014). However, their work focuses on discrete problems, and the model-based method and the model-free algorithm control the agent together, whereas we address continuous RL problems where the planning method produces data for the policy learner.
Affine Quadratic Regulator (AQR)-RRT (Glassman & Tedrake, 2010) or LQR-RRT (Perez et al., 2012) are examples of RRT methods which use a dynamics-based cost metric to guide the tree extension, making these methods able to deal with kinodynamic planning problems.
The problem of performance is also recognized in planning and work is being undertaken to make RRT faster, for example by Wolfslag et al. (2018).
3 METHOD
The evaluations are done on a simple, one-dimensional double integrator task where the goal is to move a point mass to a goal position. The environment is illustrated and described in more detail in Table 1 . The reason for this choice is that this environment is easy to visualize and problems present in a simple task are likely to manifest as well in a more complex setting.
The environment contains two distinct goal locations. The agent receives a reward based on the distance to the goal points, where the reward is dominated by the distance to the closer one of the two goal locations. The placement of the goal locations is chosen such that the agent starts at a position where the gradient of the goal with the smaller reward is nonzero. This implies that simply maximizing the reward from the starting position will lead to a suboptimal policy and a smaller overall return.
The goals are located at position −2.5 and 6.0, both at 0.0 velocity. The placement is chosen such that a random action selection is more likely to stumble upon the lower reward solution (−2.5) rather than the higher reward solution (6.0). Figure 2 illustrates both the state space coverage in the case of uniformly-random actions without exploring starts and the reward received at each point in the state space. Although simple, this task is still relevant for robotics, where the situation of a closer goal being of less interest than a farther one is quite common (e.g. two charging batteries where the closest charges the robot more slowly than the furthest).
Table 1: Description of the 1D double-integrator test environment: a point mass M can be moved in a one-dimensional space X = (position , velocity) by applying a continuous-valued force. Reward is received based on the distance to two possible goal locations (G1, G2).
Dynamics:  X = [x, ẋ]^T,  Ẋ = AX + Bu,  A = [[0, 1], [0, 0]],  B = [0, 1]^T
Reward:    max( (1 − tanh|X − G1|), 2 (1 − tanh|X − G2|) ),  G1 = [−2.5, 0.0]^T,  G2 = [6.0, 0.0]^T
Limits:    u ∈ [−1, 1],  x ∈ [−10, 10],  ẋ ∈ [−2.5, 2.5]
(The accompanying illustration shows the point mass M on the position axis with goals G1 and G2 at distances d1 and d2, and position wrapping at the boundaries.)
Simply following the gradient of the reward might also lead the robot to get stuck at obstacles and thus prevent it from successfully completing the task.
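For concreteness, a minimal Python sketch of the dynamics and reward from Table 1; the Euler integration step DT, the clipping, and the interpretation of |·| as the Euclidean norm over the (position, velocity) state are assumptions, since the paper does not state these implementation details.

```python
import numpy as np

DT = 0.05  # integration step; assumed, not specified in the paper
G1, G2 = np.array([-2.5, 0.0]), np.array([6.0, 0.0])

def step(state, u):
    """One Euler step of the 1D double integrator with the two-goal reward."""
    x, xdot = state
    u = float(np.clip(u, -1.0, 1.0))
    xdot = float(np.clip(xdot + u * DT, -2.5, 2.5))
    x = (x + xdot * DT + 10.0) % 20.0 - 10.0   # position wrapping into [-10, 10]
    new_state = np.array([x, xdot])
    reward = max(1.0 - np.tanh(np.linalg.norm(new_state - G1)),
                 2.0 * (1.0 - np.tanh(np.linalg.norm(new_state - G2))))
    return new_state, reward
```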
To have a broader baseline we included an exploring-starts variant of the environment where the initial state of the system is sampled uniformly from the state space. This is a sound algorithmic variant and easy to implement in this toy environment. However, in more complicated settings, such as a robot arm performing a pushing task, it may bring the agent into unreachable (disconnected) parts of the state space. Such unreachable parts could, for example, be locations that the robot cannot reach from its initial position, or position-velocity configurations that would be damaging to the robot. Furthermore, this would also imply randomizing the state of the objects the robot interacts with, which requires additional engineering effort. This is contrary to what we want to achieve by learning – which is why we assume that in many applications exploring starts are undesirable or impractical.
Unless otherwise noted, the algorithms are run for 10^5 environment steps; the D-RL algorithms use 100-step episodes.
3.1 PPS & PLANNER
The PPS implementation we present here consists of an RRT-based planner to generate data and the SAC method to learn policies from that data and perform additional fine-tuning.
The planning method derives from the implementation of LQR-RRT by Perez et al. (2012). An RRT method consists of three components: a) a sampling method that decides where tree extensions should be directed to, b) a distance metric that estimates the cost of going from points in the tree to a new target point, and c) a local steering method, to reach from a given point to a target point in the state space.
Following the algorithmic description of LQR-RRT, we use an LQR-based distance metric and uniform sampling of the target locations, but use a quadratic-programming-based solver for finite-horizon steering between tree points and the target point. We only use the RRT variant, not the RRT∗ variant. That is, we do not reconnect trajectories to find shorter paths; this is left to the fine-tuning.
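A compact sketch of this planning loop; the LQR distance and the QP-based steering are abstracted behind placeholder callables (lqr_cost, steer), whose exact formulations are not given here and should be read as assumptions.

```python
def rrt_plan(start, sample_state, lqr_cost, steer, n_iters=5000):
    """Kinodynamic RRT: grow a tree by uniform sampling of targets, nearest-node
    selection under an LQR-based metric, and local finite-horizon steering.
    No rewiring is performed (plain RRT rather than RRT*)."""
    tree = {tuple(start): None}  # maps each node to its parent
    for _ in range(n_iters):
        target = sample_state()                              # uniform sample in X
        nearest = min(tree, key=lambda node: lqr_cost(node, target))
        new_node, reached = steer(nearest, target)           # QP-based local steering
        if reached:
            tree[tuple(new_node)] = nearest
    return tree
```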
3.2 BASELINE ALGORITHMS
We compare the performance to state-of-the-art D-RL algorithms, in particular PPO (Schulman et al., 2017; Hill et al., 2018), DDPG (Lillicrap et al., 2015) and SAC (Haarnoja et al., 2018).
DDPG is an off-policy method that learns a deterministic policy using an actor-critic approach. Exploration is done by using the deterministic policy and adding exploration noise to the selected actions. In contrast, PPO works on-policy, uses a stochastic policy, is a policy-gradient method, and is related to Trust-Region Policy Optimization (TRPO) by Schulman et al. (2015). It tries to limit abrupt changes to the policy in order to keep generating reasonable data in the policy rollouts. PPO exploration works by sampling actions from its stochastic policy. Finally, SAC is an off-policy method, uses a stochastic policy and an actor-critic approach. Similarly to PPO, it explores by sampling actions from the stochastic policy. It adds an entropy term to the value-function loss to encourage more exploratory behaviour of the policy, that is, high entropy in the action selection is encouraged. Sec. 4.1 and Sec. 4.2 show how this affects the results.
We use the implementations provided by Hill et al. (2018) which are a tuned and improved version of the algorithms provided by Dhariwal et al. (2017). We use the default hyperparameters to investigate whether the algorithms are stable with respect to their hyperparameters. This is important, since either we view hyperparameter search as part of the policy search, or we require the algorithm to be robust to hyperparameter settings over a wide range of environments.
3.3 Q1 COMPARING DATA GENERATION
To compare the exploration, we collect the data the agents see during their learning phase. This data is then analysed for state-space coverage. The coverage is calculated as the percentage of non-empty bins. For simplicity we use uniformly-shaped bins. The number of bins is equal along each state-space dimension and is set to √(10^5/5), i.e. such that, in the uniform case, we expect five data points in each bin on average. We calculate the coverage over time during the learning progress of the agent.
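A minimal sketch of this coverage measure; the helper below assumes a two-dimensional (position, velocity) state and the bin count described above.

```python
import numpy as np

def state_space_coverage(states, lows, highs, n_steps=10**5):
    """Fraction of non-empty bins on a uniform grid over the state space.
    `states` is an (N, D) array of visited states; the number of bins per
    dimension is chosen so uniform data would place ~5 points in each bin."""
    states = np.asarray(states)
    bins_per_dim = int(np.sqrt(n_steps / 5))  # ~141 for 10^5 steps
    edges = [np.linspace(lo, hi, bins_per_dim + 1) for lo, hi in zip(lows, highs)]
    hist, _ = np.histogramdd(states, bins=edges)
    return np.count_nonzero(hist) / hist.size

# Example: coverage of the double-integrator state space.
# cov = state_space_coverage(visited, lows=[-10, -2.5], highs=[10, 2.5])
```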
To evaluate the reward distributions of the algorithms, the data, (s, a, r, s′) tuples from 11 independent runs, are combined to form one dataset for each algorithm. We look at the distribution of the r values of this dataset.
3.4 Q2 SUSCEPTIBILITY TO LOCAL OPTIMA
Reinforcement learning agents collect experience and use that experience to learn a policy, either implicitly in on-policy algorithms such as PPO, or explicitly in algorithms such as DDPG or SAC which use a replay buffer. Thus, the exploration process should reach regions in the state space relevant for the task so that it can learn a well-performing policy. If it cannot reach high-reward areas of the state space, the learned policy will also not move the agent to these regions and therefore the achieved return will be lower.
We perform training runs with PPO, DDPG, SAC and our PPS method. After the agents have learned, we use their policies to generate evaluation returns, which we analyze to compare their performance. This is done on 11 independent learning runs.
While the PPO, DDPG, SAC agents learn directly on the environment, our PPS agent uses the RRT planner to generate data. The generated data is stored in an SAC replay buffer. The replay buffer is fixed – no experience is added, no experience is removed. As a baseline, this experiment is also performed with data generated by a plain SAC agent and by an SAC agent on the exploring-starts variant. The data of both are likewise used in fixed SAC replay buffers to learn policies.
3.5 Q3 REUSING THE COLLECTED DATA
Since our PPS agent uses RRT with uniform sampling for tree extension to generate data, the state-space coverage is independent of the reward. It is therefore interesting to investigate whether this data can be reused more easily.
Similar to the previous experiment, an SAC agent is initialized with a random-weight policy, but instead of an empty replay buffer, its buffer is preloaded with 50000 data samples created by one of the methods RRT, SAC, or SAC ex. These samples are randomly shuffled to remove the order of their temporal acquisition. The agent then continues to acquire new environment interaction samples, which gradually replace the prefilled data in a First-In-First-Out (FIFO) fashion, while the policy is gradually updated on the new buffer. The agent is evaluated for another 50000 steps; however, the task is changed by disabling the reward at position 1 (resp. 2). As the buffer contains data about the previous task, only part of the previous dataset is useful, whereas many samples are now misleading. If the agent has explored more during the previous task, we expect that its knowledge is more relevant to this second task and that it should perform better than an agent that uses data generated in a more exploitation-focused way.
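A sketch of this prefill-and-adapt protocol with a FIFO replay buffer; the agent and environment interfaces (agent.act, agent.update, env.step) are generic placeholders rather than the API of a specific library, and the batch size is an assumption.

```python
import random
from collections import deque

def prefill_and_adapt(planner_data, env, agent, capacity=50_000, adapt_steps=50_000):
    """Preload a FIFO replay buffer with shuffled planner transitions, then let
    fresh environment interactions gradually displace them during adaptation."""
    buffer = deque(random.sample(planner_data, capacity), maxlen=capacity)
    state = env.reset()
    for _ in range(adapt_steps):
        action = agent.act(state)
        next_state, reward, done = env.step(action)
        buffer.append((state, action, reward, next_state))  # FIFO displacement
        agent.update(random.sample(list(buffer), k=min(256, len(buffer))))
        state = env.reset() if done else next_state
    return agent
```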
4 RESULTS
4.1 Q1 COMPARING DATA GENERATION
How do D-RL methods compare in terms of exploration to a directed exploration approach, such as RRT? Can we cover a larger area of the state space? Figure 3 shows the coverage
of the visited state space. Note how the RRT algorithm keeps increasing the state-space coverage while the D-RL agents level out. The exploring-starts agents (DDPG, PPO, SAC) follow the uniform sampling curve quite closely in the beginning, even exceeding the exploration of RRT, before the RRT method surpasses their coverage. Note how the RRT method surpasses the exploration of the non-exploring-starts methods from the very beginning. The agents trained with exploring starts (PPO, SAC, DDPG) level out at approximately the same coverage. A possible explanation is that by the exploring starts the agent starts from a random initial state but approaches the same favored goal. Thus the increase in coverage is mostly due to the exploring starts rather than the exploration mechanism of the agent. From this we conclude that the exploration capabilities built into methods such as DDPG, PPO and SAC are insufficient and that they mostly depend on the environment (i.e. exploring starts) to generate sufficiently-diverse data.
Do the reward distributions differ? Figure 3 shows how the rewards are distributed in the datasets collected by RRT, DDPG and SAC ex., respectively. For each method, the union of 11 runs is taken and the probability of achieving reward r is calculated. The logarithm of the kernel density estimate of the distribution of the rewards r is depicted.
SAC favors the goal 1 position and as such most of the probability mass is concentrated around that reward. SAC ex. favors the second goal and consequently has a higher probability mass on the second goal. RRT collects data independently of the reward, which results in more samples around the higher reward goal (2) than SAC, but less than SAC ex. As such, RRT is more directed in exploring and consequently beats SAC in reaching the second goal. Both SAC and SAC ex. show a peak around their respective favored goal location while the data generated by RRT is independent of the reward and thus shows no such peaks. This also hints at the higher generality of the data generated by RRT which could be reused to achieve different goals, but also shows that very little data is generated around the regions of interest in this task.
4.2 Q2 SUSCEPTIBILITY TO LOCAL OPTIMA
Do state-of-the-art D-RL methods get stuck in local optima? Figure 4 depicts boxplots of the evaluation returns achieved by the D-RL algorithms after training for 10^5 environment steps. Note how DDPG achieves higher rewards without exploring starts, and PPO appears to profit from exploring starts. SAC appears to profit from exploring starts, while otherwise achieving returns in a similar range to DDPG; in some cases it is able to achieve higher returns than both PPO and DDPG.
The figure also contains the evaluation returns of our PPS method, indicated by “RRT indirect SAC” and the direct baseline comparisons where data is generated by SAC and SAC ex., respectively, and is used indirectly to train an SAC policy.
The results show improved performance when training on the fixed replay buffer of RRT-generated data. They also show performance superior to the policy indirectly trained on SAC data as well as to the directly-trained SAC policy. The policies trained on the RRT data even achieve performance comparable to the directly-trained SAC policy with exploring starts.
We use tanh activations in the indirectly-trained policies because we found them to produce results with smaller variance and to perform more robustly.
4.3 Q3 REUSING THE COLLECTED DATA
Can the data collected by PPS be reused more easily? In this experiment we train an SAC agent partially indirectly from a prefilled replay buffer but then continue with regular training, thereby phasing out the prefilled data. Figure 5 shows the evolution of the evaluation return distribution.
The data used for prefilling is generated on the two-goal environment, while the adaptation and evaluation are done on environments where one of the two goal locations is disabled. Therefore, only part of the prefilled data is accurate. In both cases the PPS method, denoted by RRT→SAC, is able to learn good policies, while the SAC→SAC agent has superior performance on the goal-”1”-only environment but completely fails on the goal-”2”-only environment. The converse happens for the SAC ex.→SAC agent. It is interesting to note that part of the reused data is actually deceiving to the agent because it tries to get the agent to regions where reward can no longer be found. The learning-from-scratch agent is provided as a baseline. Since the modified environment is simpler – only one optimum – it achieves comparable results.
5 DISCUSSION
In this work, we highlighted that standard D-RL algorithms are not immune to getting stuck in suboptimal policies even in a toy problem with two local optima. The agent controlled by PPS explores a wider part of the state space than D-RL methods that focus on reward accumulation, even with exploring starts. The data gathered by RRT is not biased by reward accumulation and is thus more representative of the environment (goal 2 is farther away and thus incurs less reward).
We showed that the policy-learning agent trained on data from RRT performs better in the initial task than SAC but worse than SAC ex. However, on two variations of this task where only one source of reward is available, SAC ex. fails to adapt in half of the new tasks, whereas RRT achieves almost-optimal performance.
This method is thus relevant for robotics settings where the environment might dynamically change and some rewards might not be available after convergence of the robot policy (e.g. two sources of power are available in the environment at the beginning of the task and one becomes depleted during the robot’s life).
This method also has the potential of speeding up domain-randomized training: By randomizing the model and using planning to quickly discover new policies, the method can focus the training on relevant parts of the state space and reduce the number of necessary samples. This will be evaluated in future work.
One limitation is that this evaluation is done on a simple task. It needs to be evaluated in more realistic settings where the state space is more complex and where variations of the task also alter environment dynamics. | 1. What are the limitations of traditional deep reinforcement learning (D-RL) methods in solving problems with multiple local optima?
2. How does the proposed method utilize planning techniques, such as Rapidly Exploring Random Tree (RRT), to improve the search process?
3. What are the advantages of the proposed method in terms of adapting to dynamic environments?
4. What is the main concern regarding the proposed method's ability to address the issue of planning time? | Review | Review
This paper suggested that conventional deep reinforcement learning (D-RL) methods struggle to find global optima in a toy problem when two local optima exist. The authors proposed to tackle this problem using a planning method (Rapidly Exploring Random Tree, RRT) to expand the search area. Since the collected data are not correlated with reward, the method is more likely to find the global optimum in the toy problem with two local optima. As to the planning-time problem, they proposed to synthesize the planning results into a policy.
The experiments showed that the proposed method performs better in the aforementioned toy problem and has an advantage in adapting to dynamic environments. However, the authors failed to provide sufficient analysis and theoretical support for the proposed method, and it did not address the weakness of the RRT method, namely the problem of planning time.
ICLR | Title
Improving Exploration of Deep Reinforcement Learning using Planning for Policy Search
Abstract
Most Deep Reinforcement Learning methods perform local search and therefore are prone to get stuck on non-optimal solutions. Furthermore, in simulation based training, such as domain-randomized simulation training, the availability of a simulation model is not exploited, which potentially decreases efficiency. To overcome issues of local search and exploit access to simulation models, we propose the use of kinodynamic planning methods as part of a model-based reinforcement learning method and to learn in an off-policy fashion from solved planning instances. We show that, even on a simple toy domain, D-RL methods (DDPG, PPO, SAC) are not immune to local optima and require additional exploration mechanisms. We show that our planning method exhibits a better state space coverage, collects data that allows for better policies than D-RL methods without additional exploration mechanisms and that starting from the planner data and performing additional training results in as good as or better policies than vanilla D-RL methods, while also creating data that is more fit for re-use in modified tasks.
N/A
Most Deep Reinforcement Learning methods perform local search and therefore are prone to get stuck on non-optimal solutions. Furthermore, in simulation based training, such as domain-randomized simulation training, the availability of a simulation model is not exploited, which potentially decreases efficiency. To overcome issues of local search and exploit access to simulation models, we propose the use of kinodynamic planning methods as part of a model-based reinforcement learning method and to learn in an off-policy fashion from solved planning instances. We show that, even on a simple toy domain, D-RL methods (DDPG, PPO, SAC) are not immune to local optima and require additional exploration mechanisms. We show that our planning method exhibits a better state space coverage, collects data that allows for better policies than D-RL methods without additional exploration mechanisms and that starting from the planner data and performing additional training results in as good as or better policies than vanilla D-RL methods, while also creating data that is more fit for re-use in modified tasks.
1 INTRODUCTION
Robots in human-centric environments are confronted with less structured, more varied and more quickly changing situations than in typical automated manufacturing environments. Research in autonomous robots addresses these challenges using modern machine learning methods. However, learning and trying out actions directly on a real robot is time-consuming and potentially dangerous to the environment as well as to the robot. In contrast, physically-based simulation provides the benefit of faster, cheaper, and safer ways for robot learning.
If simulation models are available, they can be used by sampling-based planning methods that are able to directly plan robot behaviour using these models. However, the time required to perform planning can make this intractable for execution.
Finding policies that directly map from the current state to the next applicable action eliminates the need for planning. While Deep-Reinforcement Learning (D-RL) has shown promising results, for example those by OpenAI et al. (2018), D-RL training can be tedious and resource demanding. Plappert et al. (2017) report problems on the HalfCheetah environment where the algorithms converge to a local optimum corresponding to the cheetah wiggling on its back. They alleviated this problem by a different exploration scheme.
In preliminary experiments (not included in this paper) we found similar problems: D-RL algorithms were not able to learn a pushing task with a simulated 7-DoF robot arm. The algorithms we used were Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015) and Proximal Policy Optimization (PPO) (Schulman et al., 2017) (from OpenAI Baselines by Dhariwal et al. (2017)).
The algorithms were also not reaching relevant parts of the state space. Consequently, and in line with the findings of Plappert et al. (2017) we assume that part of the problem of failing to learn good policies is related to insufficient exploration. To remedy this problem, one might increase search time while keeping exploration noise high, or use more principled exploration. While increasing search time will in the limit also yield acceptable solutions, directed exploration appears more promising to find good solutions more reliably and in less time.
We thus focus on the latter approach, as covering a more diverse area of the state space increases the chances of finding an optimal solution, and moving away from random or exhaustive search reduces the number of samples required to learn a good policy.
Model-based methods can use their models of the task in an efficient way to plan over multiple steps and explore the state space in a more directed way. Given an accurate model, optimal policies can be produced without interacting with the world and thus with fewer samples (Hester & Stone, 2012). In particular, Rapidly Exploring Random Trees (RRTs) are planning methods that focus on maximizing state-space exploration.
We propose to take advantage of the benefits the aforementioned planning methods provide while tackling the problem of planning time by synthesizing the planning results into a policy. This essentially makes the proposed method a model-based method (Sutton & Barto, 2018). We will refer to this method as Planning for Policy Search (PPS). This is of particular interest in domain-randomized training, where simulation models are always available, to increase the data efficiency of exploration.
Here we investigate a preliminary version of this method that combines planning and policy search but does not perform randomizations yet. In particular, we investigate the following questions:
Q1 How do the data generated by RRT compare to those from D-RL methods? Do they cover a larger area of the state space? Do the reward distributions differ?
Q2 Are PPS methods less susceptible to local optima than D-RL methods?
Q3 Can the data collected by PPS be reused more easily?
The experimental setup used to investigate these questions is described in Figure 1 . In a simulated environment, the planner and reinforcement learning agent are run – each separately – to generate environment interactions. In the case of the reinforcement learning agent a policy is learned, and its return is evaluated (Q2, Sec. 4.2 ). In both cases, the collected data are stored as a dataset. In a second step, these datasets are analyzed with respect to their state-space coverage (Q1, Sec. 4.1 ). Then the datasets are used to train an RL agent in an off-policy fashion. The returns of this agent’s policy are again evaluated (Q1, Sec. 4.1 ). In a further experiment an agent is trained partially from these datasets and partially from experience it generates (Q3, Sec. 4.3 ).
2 RELATED WORK
Using physically-based simulations for learning is limited by the necessity to approximate physical phenomena, causing discrepancies between simulated and real world results. This difference is called the reality gap and is a well-known problem in various fields of robotics. An important approach to cross the gap is Domain randomization (Tobin et al., 2017; Sadeghi & Levine, 2017;
James et al., 2017): instead of one simulated environment, learning is done using a distribution of models with varying properties – such as for example mass, friction, shape, position, force/torque noise, etc. The idea is to make the behavior policies learned by the reinforcement learning process more robust to the differences within this distribution, thereby increasing robustness against the difference between the training distribution and the target domain, i.e. against the reality gap.
The work from OpenAI has shown a successful use of domain randomization for learning in-hand manipulation; however, the number of required training steps is raised by a factor of 33 (OpenAI et al., 2018) when domain randomizations are introduced. This increases the number of training steps from a magnitude of about 1.2·10^9 to about 3.9·10^10, whereas classical deep reinforcement learning approaches typically require 10^5 to 10^9 simulation steps, and many algorithms are tested on 10^6 timesteps, depending on the environment. The required amount of training data can make this method prohibitively expensive, and typically the availability of a simulation model is not exploited.
Improving the efficiency of domain randomization is an active topic of research, for example by using adversarial randomizations (Mandlekar et al., 2017) or limiting the training to stop before overfitting to idiosyncrasies of the simulation (Muratore et al., 2018). There are also reinforcement learning methods that are more sample-efficient such as guided policy search (Levine & Koltun, 2013) which is a model-based deep reinforcement learning method. In Guided Policy Search (GPS), rollouts from a deep neural network controller are optimised by an optimal control method such as the iterative Linear Quadratic Regulator (iLQR) (Todorov & Weiwei Li, 2005; Tassa et al., 2012) method. However, guided policy search is, depending on the task, usually initialized from demonstrations since the exploration capabilities of the underlying optimization method (iLQR) are limited. Furthermore, the optimization method requires an applicable, engineered cost function which is able to guide the search procedure towards relevant solutions.
The benefits of combining a model-based method with model-free reinforcement learning have been highlighted in Renaudo et al. (2014). However, their work focuses on discrete problems, and the model-based method and the model-free algorithm control the agent together, whereas we address continuous RL problems where the planning method produces data for the policy learner.
Affine Quadratic Regulator (AQR)-RRT (Glassman & Tedrake, 2010) or LQR-RRT (Perez et al., 2012) are examples of RRT methods which use a dynamics-based cost metric to guide the tree extension, making these methods able to deal with kinodynamic planning problems.
The problem of performance is also recognized in planning and work is being undertaken to make RRT faster, for example by Wolfslag et al. (2018).
3 METHOD
The evaluations are done on a simple, one-dimensional double integrator task where the goal is to move a point mass to a goal position. The environment is illustrated and described in more detail in Table 1 . The reason for this choice is that this environment is easy to visualize and problems present in a simple task are likely to manifest as well in a more complex setting.
The environment contains two distinct goal locations. The agent receives a reward based on the distance to the goal points, where the reward is dominated by the distance to the closer one of the two goal locations. The placement of the goal locations is chosen such that the agent starts at a position where the gradient of the goal with the smaller reward is nonzero. This implies that simply maximizing the reward from the starting position will lead to a suboptimal policy and a smaller overall return.
The goals are located at position −2.5 and 6.0, both at 0.0 velocity. The placement is chosen such that a random action selection is more likely to stumble upon the lower reward solution (−2.5) rather than the higher reward solution (6.0). Figure 2 illustrates both the state space coverage in the case of uniformly-random actions without exploring starts and the reward received at each point in the state space. Although simple, this task is still relevant for robotics, where the situation of a closer goal being of less interest than a farther one is quite common (e.g. two charging batteries where the closest charges the robot more slowly than the furthest).
Table 1: Description of the 1D double-integrator test environment: a point mass M can be moved in a one-dimensional space X = (position , velocity) by applying a continuous-valued force. Reward is received based on the distance to two possible goal locations (G1, G2).
Dynamics:  X = [x, ẋ]^T,  Ẋ = AX + Bu,  A = [[0, 1], [0, 0]],  B = [0, 1]^T
Reward:    max( (1 − tanh|X − G1|), 2 (1 − tanh|X − G2|) ),  G1 = [−2.5, 0.0]^T,  G2 = [6.0, 0.0]^T
Limits:    u ∈ [−1, 1],  x ∈ [−10, 10],  ẋ ∈ [−2.5, 2.5]
(The accompanying illustration shows the point mass M on the position axis with goals G1 and G2 at distances d1 and d2, and position wrapping at the boundaries.)
Simply following the gradient of the reward might also lead the robot to get stuck at obstacles and thus prevent it from successfully completing the task.
To have a broader baseline we included an exploring-starts variant of the environment where the initial state of the system is sampled uniformly from the state space. This is a sound algorithmic variant and easy to implement in this toy environment. However, in more complicated settings, such as a robot arm performing a pushing task, it may bring the agent into unreachable (disconnected) parts of the state space. Such unreachable parts could, for example, be locations that the robot cannot reach from its initial position, or position-velocity configurations that would be damaging to the robot. Furthermore, this would also imply randomizing the state of the objects the robot interacts with, which requires additional engineering effort. This is contrary to what we want to achieve by learning – which is why we assume that in many applications exploring starts are undesirable or impractical.
Unless otherwise noted, the algorithms are run for 10^5 environment steps; the D-RL algorithms use 100-step episodes.
3.1 PPS & PLANNER
The PPS implementation we present here consists of an RRT-based planner to generate data and the SAC method to learn policies from that data and perform additional fine-tuning.
The planning method derives from the implementation of LQR-RRT by Perez et al. (2012). An RRT method consists of three components: a) a sampling method that decides where tree extensions should be directed to, b) a distance metric that estimates the cost of going from points in the tree to a new target point, and c) a local steering method, to reach from a given point to a target point in the state space.
Following the algorithmic description of LQR-RRT, we use an LQR-based distance metric and uniform sampling of the target locations, but use a quadratic-programming-based solver for finite-horizon steering between tree points and the target point. We only use the RRT variant, not the RRT∗ variant. That is, we do not reconnect trajectories to find shorter paths; this is left to the fine-tuning.
3.2 BASELINE ALGORITHMS
We compare the performance to state-of-the-art D-RL algorithms, in particular PPO (Schulman et al., 2017; Hill et al., 2018), DDPG (Lillicrap et al., 2015) and SAC (Haarnoja et al., 2018).
DDPG is an off-policy method that learns a deterministic policy using an actor-critic approach. Exploration is done by using the deterministic policy and adding exploration noise to the selected actions. In contrast, PPO works on-policy, uses a stochastic policy, is a policy-gradient method, and is related to Trust-Region Policy Optimization (TRPO) by Schulman et al. (2015). It tries to limit abrupt changes to the policy in order to keep generating reasonable data in the policy rollouts. PPO exploration works by sampling actions from its stochastic policy. Finally, SAC is an off-policy method, uses a stochastic policy and an actor-critic approach. Similarly to PPO, it explores by sampling actions from the stochastic policy. It adds an entropy term to the value-function loss to encourage more exploratory behaviour of the policy, that is, high entropy in the action selection is encouraged. Sec. 4.1 and Sec. 4.2 show how this affects the results.
We use the implementations provided by Hill et al. (2018) which are a tuned and improved version of the algorithms provided by Dhariwal et al. (2017). We use the default hyperparameters to investigate whether the algorithms are stable with respect to their hyperparameters. This is important, since either we view hyperparameter search as part of the policy search, or we require the algorithm to be robust to hyperparameter settings over a wide range of environments.
3.3 Q1 COMPARING DATA GENERATION
To compare the exploration, we collect the data the agents see during their learning phase. This data is then analysed for state-space coverage. The coverage is calculated as the percentage of non-empty bins. For simplicity we use uniformly-shaped bins. The number of bins is equal along each state-space dimension and is set to √(10^5/5), i.e. such that, in the uniform case, we expect five data points in each bin on average. We calculate the coverage over time during the learning progress of the agent.
To evaluate the reward distributions of the algorithms, the data, (s, a, r, s′) tuples from 11 independent runs, are combined to form one dataset for each algorithm. We look at the distribution of the r values of this dataset.
3.4 Q2 SUSCEPTIBILITY TO LOCAL OPTIMA
Reinforcement learning agents collect experience and use that experience to learn a policy, either implicitly in on-policy algorithms such as PPO, or explicitly in algorithms such as DDPG or SAC which use a replay buffer. Thus, the exploration process should reach regions in the state space relevant for the task so that it can learn a well-performing policy. If it cannot reach high-reward areas of the state space, the learned policy will also not move the agent to these regions and therefore the achieved return will be lower.
We perform training runs with PPO, DDPG, SAC and our PPS method. After the agents have learned, we use their policies to generate evaluation returns, which we analyze to compare their performance. This is done on 11 independent learning runs.
While the PPO, DDPG, SAC agents learn directly on the environment, our PPS agent uses the RRT planner to generate data. The generated data is stored in an SAC replay buffer. The replay buffer is fixed – no experience is added, no experience is removed. As a baseline, this experiment is also performed with data generated by a plain SAC agent and by an SAC agent on the exploring-starts variant. The data of both are likewise used in fixed SAC replay buffers to learn policies.
3.5 Q3 REUSING THE COLLECTED DATA
Since our PPS agent uses RRT with uniform sampling for tree extension to generate data, the state-space coverage is independent of the reward. It is therefore interesting to investigate whether this data can be reused more easily.
Similar to the previous experiment, an SAC agent is initialized with a random-weight policy, but instead of an empty replay buffer, its buffer is preloaded with 50000 data samples created by one of the methods RRT, SAC, or SAC ex. These samples are randomly shuffled to remove the order of their temporal acquisition. The agent then continues to acquire new environment interaction samples, which gradually replace the prefilled data in a First-In-First-Out (FIFO) fashion, while the policy is gradually updated on the new buffer. The agent is evaluated for another 50000 steps; however, the task is changed by disabling the reward at position 1 (resp. 2). As the buffer contains data about the previous task, only part of the previous dataset is useful, whereas many samples are now misleading. If the agent has explored more during the previous task, we expect that its knowledge is more relevant to this second task and that it should perform better than an agent that uses data generated in a more exploitation-focused way.
4 RESULTS
4.1 Q1 COMPARING DATA GENERATION
How do D-RL methods compare in terms of exploration to a directed exploration approach, such as RRT? Can we cover a larger area of the state space? Figure 3 shows the coverage
of the visited state space. Note how the RRT algorithm keeps increasing the state-space coverage while the D-RL agents level out. The exploring-starts agents (DDPG, PPO, SAC) follow the uniform sampling curve quite closely in the beginning, even exceeding the exploration of RRT, before the RRT method surpasses their coverage. Note how the RRT method surpasses the exploration of the non-exploring-starts methods from the very beginning. The agents trained with exploring starts (PPO, SAC, DDPG) level out at approximately the same coverage. A possible explanation is that by the exploring starts the agent starts from a random initial state but approaches the same favored goal. Thus the increase in coverage is mostly due to the exploring starts rather than the exploration mechanism of the agent. From this we conclude that the exploration capabilities built into methods such as DDPG, PPO and SAC are insufficient and that they mostly depend on the environment (i.e. exploring starts) to generate sufficiently-diverse data.
Do the reward distributions differ? Figure 3 shows how the rewards are distributed in the datasets collected by RRT, DDPG and SAC ex., respectively. For each method, the union of 11 runs is taken and the probability of achieving reward r is calculated. The logarithm of the kernel density estimate of the distribution of the rewards r is depicted.
SAC favors the goal 1 position and as such most of the probability mass is concentrated around that reward. SAC ex. favors the second goal and consequently has a higher probability mass on the second goal. RRT collects data independently of the reward, which results in more samples around the higher reward goal (2) than SAC, but less than SAC ex. As such, RRT is more directed in exploring and consequently beats SAC in reaching the second goal. Both SAC and SAC ex. show a peak around their respective favored goal location while the data generated by RRT is independent of the reward and thus shows no such peaks. This also hints at the higher generality of the data generated by RRT which could be reused to achieve different goals, but also shows that very little data is generated around the regions of interest in this task.
4.2 Q2 SUSCEPTIBILITY TO LOCAL OPTIMA
Do state-of-the-art D-RL methods get stuck in local optima? Figure 4 depicts boxplots of the evaluation returns achieved by the D-RL algorithms after training for 10^5 environment steps. Note how DDPG achieves higher rewards without exploring starts, and PPO appears to profit from exploring starts. SAC appears to profit from exploring starts, while otherwise achieving returns in a similar range to DDPG; in some cases it is able to achieve higher returns than both PPO and DDPG.
The figure also contains the evaluation returns of our PPS method, indicated by “RRT indirect SAC” and the direct baseline comparisons where data is generated by SAC and SAC ex., respectively, and is used indirectly to train an SAC policy.
The results show improved performance when training on the fixed replay buffer of RRT-generated data. They also show performance superior to the policy indirectly trained on SAC data as well as to the directly-trained SAC policy. The policies trained on the RRT data even achieve performance comparable to the directly-trained SAC policy with exploring starts.
We use tanh activations in the indirectly-trained policies because we found them to produce results with smaller variance and to perform more robustly.
4.3 Q3 REUSING THE COLLECTED DATA
Can the data collected by PPS be reused more easily? In this experiment we train an SAC agent partially indirectly from a prefilled replay buffer but then continue with regular training, thereby phasing out the prefilled data. Figure 5 shows the evolution of the evaluation return distribution.
The data used for prefilling is generated on the two-goals environment, while the adaptation and evaluation are done on environments where one of the two goal locations is disabled. Therefore, only part of the prefilled data is accurate. In both cases the PPS method, denoted by RRT→SAC, is able to learn good policies, while the SAC→SAC agent has superior performance on the goal-”1”-only environment; it completely fails on the goal-”2”-only environment. The converse happens for the SAC ex.→SAC agent. It is interesting to note that part of the reused data is actually deceptive for the agent because it tries to get the agent to regions where reward can no longer be found. The learning-from-scratch agent is provided as a baseline. Since the modified environment is simpler – only one optimum – it achieves comparable results.
5 DISCUSSION
In this work, we highlighted that standard D-RL algorithms are not immune to getting stuck in suboptimal policies even in a toy problem with two local optima. The agent controlled by PPS explores a wider part of the state space than D-RL methods that focus on reward accumulation, even with exploring starts. The data gathered by RRT is not biased by reward accumulation and is thus more representative of the environment (goal 2 is farther away and thus incurs less reward).
We showed that the policy-learning agent trained on data from RRT performs better in the initial task than SAC but worse than SAC ex. However, on two variations of this task where only one source of reward is available, SAC ex. fails to adapt in half of the new tasks, whereas RRT achieves almost-optimal performance.
This method is thus relevant for robotics settings where the environment might dynamically change and some rewards might not be available after convergence of the robot policy (e.g. two sources of power are available in the environment at the beginning of the task and one becomes depleted during the robot’s life).
This method also has the potential of speeding up domain-randomized training: By randomizing the model and using planning to quickly discover new policies, the method can focus the training on relevant parts of the state space and reduce the number of necessary samples. This will be evaluated in future work.
One limitation is that this evaluation is done on a simple task. It needs to be evaluated in more realistic settings where the state space is more complex and where variations of the task also alter environment dynamics. | 1. What is the main contribution of the paper in improving exploration in deep reinforcement learning (DRL)?
2. What are the weaknesses of the paper regarding its experimental results and lack of theoretical arguments?
3. How does the reviewer assess the relevance of the paper's focus on Rapidly-exploring Random Tree (RRT) in the broader context of DRL research?
4. What are some potential improvements or alternative approaches that could be explored in future research building upon this work? | Review | Review
The paper aims to improve exploration in DRL through the use of planning. This is claimed to increase state space coverage in exploration and yield better final policies than methods not augmented with planner derived data.
The current landscape of DRL research is very broad, but RRT can only directly be applied in certain continuous domains with continuous action spaces. With learned embedding functions, RRT can be applied more broadly (see "Taking the Scenic Route: Automatic Exploration for Videogames" Zhan 2019). The leap from RRT-like motion planning to the general topic of "planning" for policy search is not well motivated or explained with respect to the literature. Uses of Monte Carlo Tree Search (as in AlphaGo) seem obviously related here.
This reviewer moves to reject the paper primarily on the grounds of overinterpreting experimental results from a single, extremely simple example RL task. In a domain so small, we can't tease out the role of exploration, we aren't engaging with the "deep" of DRL, and we are only considering one specific kind of planning. The implicit claims of general improvement to exploration and improved downstream policies are not supported by the experimental results. At the same time, no theoretical argument is attempted that would make up for the very narrow nature of the experiments.
Questions for the authors:
- If HalfCheetah is used to motivate the work, and it is so easily available in the open source offerings from OpenAI, why isn't one (or many more) tasks of *at least* this complexity considered? MountainCar is one of the gym environments with a 2D phasespace compatible with the kinds of plots used in this paper.
- Could the authors taxonomize the landscape of planning and provide a specific argument for focusing on RRT? (RRT is a fun algorithm, but how will you draw the attention of other researchers who are currently focused on Atari games?) |
ICLR | Title
Improving Exploration of Deep Reinforcement Learning using Planning for Policy Search
Abstract
Most Deep Reinforcement Learning methods perform local search and therefore are prone to get stuck on non-optimal solutions. Furthermore, in simulation based training, such as domain-randomized simulation training, the availability of a simulation model is not exploited, which potentially decreases efficiency. To overcome issues of local search and exploit access to simulation models, we propose the use of kinodynamic planning methods as part of a model-based reinforcement learning method and to learn in an off-policy fashion from solved planning instances. We show that, even on a simple toy domain, D-RL methods (DDPG, PPO, SAC) are not immune to local optima and require additional exploration mechanisms. We show that our planning method exhibits a better state space coverage, collects data that allows for better policies than D-RL methods without additional exploration mechanisms and that starting from the planner data and performing additional training results in as good as or better policies than vanilla D-RL methods, while also creating data that is more fit for re-use in modified tasks.
N/A
Most Deep Reinforcement Learning methods perform local search and therefore are prone to get stuck on non-optimal solutions. Furthermore, in simulation based training, such as domain-randomized simulation training, the availability of a simulation model is not exploited, which potentially decreases efficiency. To overcome issues of local search and exploit access to simulation models, we propose the use of kinodynamic planning methods as part of a model-based reinforcement learning method and to learn in an off-policy fashion from solved planning instances. We show that, even on a simple toy domain, D-RL methods (DDPG, PPO, SAC) are not immune to local optima and require additional exploration mechanisms. We show that our planning method exhibits a better state space coverage, collects data that allows for better policies than D-RL methods without additional exploration mechanisms and that starting from the planner data and performing additional training results in as good as or better policies than vanilla D-RL methods, while also creating data that is more fit for re-use in modified tasks.
1 INTRODUCTION
Robots in human-centric environments are confronted with less structured, more varied and more quickly changing situations than in typical automated manufacturing environments. Research in autonomous robots addresses these challenges using modern machine learning methods. However, learning and trying out actions directly on a real robot is time-consuming and potentially dangerous to the environment as well as to the robot. In contrast, physically-based simulation provides the benefit of faster, cheaper, and safer ways for robot learning.
If simulation models are available, they can be used by sampling-based planning methods that are able to directly plan robot behaviour using these models. However, the time required to perform planning can make this intractable for execution.
Finding policies that directly map from the current state to the next applicable action eliminates the need for planning. While Deep-Reinforcement Learning (D-RL) has shown promising results, for example those by OpenAI et al. (2018), D-RL training can be tedious and resource demanding. Plappert et al. (2017) report problems on the HalfCheetah environment where the algorithms converge to a local optimum corresponding to the cheetah wiggling on its back. They alleviated this problem by a different exploration scheme.
In preliminary experiments (not included in this paper) we found similar problems: D-RL algorithms were not able to learn a pushing task with a simulated 7-DoF robot arm. The algorithms we used were Deep Deterministic Policy Gradient (DDPG) Lillicrap et al. (2015) and Proximal Policy Gradient (PPO) Schulman et al. (2017) (from OpenAI Baselines by Dhariwal et al. (2017)).
The algorithms were also not reaching relevant parts of the state space. Consequently, and in line with the findings of Plappert et al. (2017) we assume that part of the problem of failing to learn good policies is related to insufficient exploration. To remedy this problem, one might increase search time while keeping exploration noise high, or use more principled exploration. While increasing search time will in the limit also yield acceptable solutions, directed exploration appears more promising to find good solutions more reliably and in less time.
We thus focus on the latter approach, as covering a more diverse area of the state space increases the chances of finding an optimal solution, and moving away from random or exhaustive search reduces the number of samples required to learn a good policy.
Model-based methods can use their models of the task in an efficient way to plan over multiple steps and explore the state space in a more directed way. Given an accurate model, optimal policies can be produced without interacting with the world and thus with fewer samples (Hester & Stone, 2012). In particular, Rapidly-exploring Random Trees (RRTs) are planning methods that focus on maximizing state-space exploration.
We propose to take advantage of the benefits the aforementioned planning methods provide while tackling the problem of planning time by synthesizing the planning results into a policy. This essentially makes the proposed method a model-based method (Sutton & Barto, 2018). We will refer to this method as Planning for Policy Search (PPS). This is of particular interest in domain-randomized training, where simulation models are always available, to increase the data efficiency of exploration.
Here we investigate a preliminary version of this method that combines planning and policy search but does not perform randomizations yet. In particular, we investigate the following questions:
Q1 How do the data generated by RRT compare to those from D-RL methods? Do they cover a larger area of the state space? Do the reward distributions differ?
Q2 Are PPS methods less susceptible to local optima than D-RL methods?
Q3 Can the data collected by PPS be reused more easily?
The experimental setup used to investigate these questions is described in Figure 1 . In a simulated environment, the planner and reinforcement learning agent are run – each separately – to generate environment interactions. In the case of the reinforcement learning agent a policy is learned, and its return is evaluated (Q2, Sec. 4.2 ). In both cases, the collected data are stored as a dataset. In a second step, these datasets are analyzed with respect to their state-space coverage (Q1, Sec. 4.1 ). Then the datasets are used to train an RL agent in an off-policy fashion. The returns of this agent’s policy are again evaluated (Q1, Sec. 4.1 ). In a further experiment an agent is trained partially from these datasets and partially from experience it generates (Q3, Sec. 4.3 ).
2 RELATED WORK
Using physically-based simulations for learning is limited by the necessity to approximate physical phenomena, causing discrepancies between simulated and real world results. This difference is called the reality gap and is a well-known problem in various fields of robotics. An important approach to cross the gap is Domain randomization (Tobin et al., 2017; Sadeghi & Levine, 2017;
James et al., 2017): instead of one simulated environment, learning is done using a distribution of models with varying properties – such as for example mass, friction, shape, position, force/torque noise, etc. The idea is to make the behavior policies learned by the reinforcement learning process more robust to the differences within this distribution, thereby increasing robustness against the difference between the training distribution and the target domain, i.e. against the reality gap.
The work from OpenAI has shown a successful use of domain randomization for learning in-hand manipulation; however, the number of required training steps is raised by a factor of 33 (OpenAI et al., 2018) when domain randomizations are introduced. This increases the number of training steps to a magnitude of about 3.9·10^10 from a magnitude of 1.2·10^9 – classical deep reinforcement learning approaches typically require 10^5 to 10^9 simulation steps, and many algorithms are tested on 10^6 timesteps, depending on the environment. The required amount of training data can make this method prohibitively expensive, and typically the availability of a simulation model is not exploited.
Improving the efficiency of domain randomization is an active topic of research, for example by using adversarial randomizations (Mandlekar et al., 2017) or limiting the training to stop before overfitting to idiosyncrasies of the simulation (Muratore et al., 2018). There are also reinforcement learning methods that are more sample-efficient such as guided policy search (Levine & Koltun, 2013) which is a model-based deep reinforcement learning method. In Guided Policy Search (GPS), rollouts from a deep neural network controller are optimised by an optimal control method such as the iterative Linear Quadratic Regulator (iLQR) (Todorov & Weiwei Li, 2005; Tassa et al., 2012) method. However, guided policy search is, depending on the task, usually initialized from demonstrations since the exploration capabilities of the underlying optimization method (iLQR) are limited. Furthermore, the optimization method requires an applicable, engineered cost function which is able to guide the search procedure towards relevant solutions.
The benefits of combining a model-based method with model-free reinforcement learning have been highlighted in Renaudo et al. (2014). However, their work focuses on discrete problems in which the model-based method and the model-free algorithm control the agent together, whereas we address continuous RL problems where the planning method produces data for the policy learner.
Affine Quadratic Regulator (AQR)-RRT (Glassman & Tedrake, 2010) or LQR-RRT (Perez et al., 2012) are examples of RRT methods, which use a dynamics-based cost metric to guide the tree extension, making this methods able to deal with kinodynamic planning problems.
The problem of performance is also recognized in planning and work is being undertaken to make RRT faster, for example by Wolfslag et al. (2018).
3 METHOD
The evaluations are done on a simple, one-dimensional double integrator task where the goal is to move a point mass to a goal position. The environment is illustrated and described in more detail in Table 1 . The reason for this choice is that this environment is easy to visualize and problems present in a simple task are likely to manifest as well in a more complex setting.
The environment contains two distinct goal locations. The agent receives a reward based on the distance to the goal points, where the reward is dominated by the distance to the closer one of the two goal locations. The placement of the goal locations is chosen such that the agent starts at a position where the reward gradient induced by the goal with the smaller reward is nonzero. This implies that simply maximizing the reward from the starting position will lead to a suboptimal policy and a smaller overall return.
The goals are located at position −2.5 and 6.0, both at 0.0 velocity. The placement is chosen such that a random action selection is more likely to stumble upon the lower reward solution (−2.5) rather than the higher reward solution (6.0). Figure 2 illustrates both the state space coverage in the case of uniformly-random actions without exploring starts and the reward received at each point in the state space. Although simple, this task is still relevant for robotics where the situation of a closer goal of less interest than a farther one is quite common (e.g. two charging batteries where the closest charges the robot more slowly than the furthest). Simply following the gradient of the reward might
Table 1: Description of the 1D double-integrator test environment: a point mass M can be moved in a one-dimensional space X = (position , velocity) by applying a continuous-valued force. Reward is received based on the distance to two possible goal locations (G1, G2).
Dynamics: X = [x, ẋ]^T, Ẋ = AX + Bu, with A = [[0, 1], [0, 0]] and B = [0, 1]^T.
(Table figure: point mass M on the position axis between the goals G1 and G2, at distances d1 and d2; the position wraps around at the boundaries.)
Reward: max((1 − tanh |X − G1|), 2(1 − tanh |X − G2|)), with G1 = [−2.5, 0.0]^T and G2 = [6.0, 0.0]^T.
Limits: u ∈ [−1, 1], x ∈ [−10, 10], ẋ ∈ [−2.5, 2.5].
also lead the robot to get stuck at obstacles and thus prevent it from successfully completing the task.
To have a broader baseline we included an exploring-starts variant of the environment where the initial state of the system is sampled uniformly from the state space. This is a sound algorithmic variant and easy to implement in this toy environment. However, in more complicated settings, such as a robot arm performing a pushing task, it may bring the agent into unreachable (disconnected) parts of the state space. Such unreachable parts could, for example, be locations that the robot cannot reach from its initial position, or position-velocity configurations that would be damaging to the robot. Furthermore, this would also imply randomizing the state of the objects the robot interacts with, which requires additional engineering effort. This is contrary to what we want to achieve by learning – which is why we assume that in many applications exploring starts are undesirable or impractical.
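To make the setup concrete, the following is a minimal sketch of this two-goal double-integrator environment in a Gym-style interface; the class and method names are ours, and the integration step dt is an assumed value not stated above.

```python
import numpy as np

class TwoGoalDoubleIntegrator:
    """Minimal sketch of the 1D double-integrator task of Table 1 (not the authors' code).

    State X = (position x, velocity xdot); action u is a continuous force in [-1, 1].
    Reward: max(1 - tanh|X - G1|, 2 * (1 - tanh|X - G2|)) with G1 = (-2.5, 0), G2 = (6.0, 0).
    """

    G1 = np.array([-2.5, 0.0])
    G2 = np.array([6.0, 0.0])

    def __init__(self, dt=0.05, exploring_starts=False):
        self.dt = dt                      # integration step (assumed value)
        self.exploring_starts = exploring_starts
        self.state = np.zeros(2)

    def reset(self):
        if self.exploring_starts:         # uniform initial state over the whole state space
            self.state = np.array([np.random.uniform(-10.0, 10.0),
                                   np.random.uniform(-2.5, 2.5)])
        else:
            self.state = np.zeros(2)
        return self.state.copy()

    def step(self, u):
        u = float(np.clip(u, -1.0, 1.0))
        x, xdot = self.state
        xdot = np.clip(xdot + u * self.dt, -2.5, 2.5)         # velocity limit
        x = ((x + xdot * self.dt + 10.0) % 20.0) - 10.0        # position wrapping on [-10, 10]
        self.state = np.array([x, xdot])
        r = max(1.0 - np.tanh(np.linalg.norm(self.state - self.G1)),
                2.0 * (1.0 - np.tanh(np.linalg.norm(self.state - self.G2))))
        return self.state.copy(), r, False, {}
```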
Unless otherwise noted, the algorithms are run for 10^5 environment steps; the D-RL algorithms use 100-step episodes.
3.1 PPS & PLANNER
The PPS implementation we present here consists of an RRT-based planner to generate data and the SAC method to learn policies from that data and perform additional fine-tuning.
The planning method derives from the implementation of LQR-RRT by Perez et al. (2012). An RRT method consists of three components: a) a sampling method that decides where tree extensions should be directed to, b) a distance metric that estimates the cost of going from points in the tree to a new target point, and c) a local steering method, to reach from a given point to a target point in the state space.
Following the algorithmic description of LQR-RRT, we use an LQR-based distance metric and uniform sampling of the target locations, but use a quadratic programming-based solver for finite-horizon steering between tree points and the target point. We use only the RRT variant, not RRT∗. That is, we do not reconnect trajectories to find shorter paths – this is left to the fine-tuning.
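The resulting data-collection loop can be outlined as below; the LQR distance metric and the QP-based finite-horizon steering are abstracted behind placeholder functions (lqr_cost, steer), so this is only a sketch of the three RRT components, not the actual implementation.

```python
import numpy as np

def rrt_explore(env, n_steps, lqr_cost, steer, state_low, state_high):
    """Outline of kinodynamic RRT data collection (placeholder metric and steering).

    lqr_cost(x, x_target) -> scalar cost-to-go estimate (component b).
    steer(env, x, x_target) -> list of (s, a, r, s') transitions (component c).
    """
    tree = [env.reset()]          # tree nodes are visited states
    dataset = []                  # (s, a, r, s') tuples for the replay buffer
    while len(dataset) < n_steps:
        # a) uniform sampling of a target point in the state space
        x_rand = np.random.uniform(state_low, state_high)
        # b) pick the tree node with the lowest estimated cost to the target
        nearest = min(tree, key=lambda x: lqr_cost(x, x_rand))
        # c) locally steer from the nearest node towards the target
        transitions = steer(env, nearest, x_rand)
        dataset.extend(transitions)
        if transitions:
            tree.append(transitions[-1][-1])   # add the reached state as a new node
    return dataset
```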
3.2 BASELINE ALGORITHMS
We compare the performance to state-of-the-art D-RL algorithms, in particular PPO (Schulman et al., 2017; Hill et al., 2018), DDPG (Lillicrap et al., 2015) and SAC (Haarnoja et al., 2018).
DDPG is an off-policy method that learns a deterministic policy using an actor-critic approach. Exploration is done by using the deterministic policy and adding exploration noise to the selected actions. In contrast, PPO works on-policy, uses a stochastic policy, is a policy-gradient method, and is related to Trust-Region Policy Optimization (TRPO) by Schulman et al. (2015). It tries to limit abrupt changes to the policy in order to keep generating reasonable data in the policy rollouts. PPO exploration works by sampling actions from its stochastic policy. Finally, SAC is an off-policy method, uses a stochastic policy and an actor-critic approach. Similarly to PPO, it explores by sampling actions from the stochastic policy. It adds an entropy term to the value-function loss to encourage more exploratory behaviour of the policy, that is, high entropy in the action selection is encouraged. Sec. 4.1 and Sec. 4.2 show how this affects the results.
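For reference, the maximum-entropy objective that SAC optimizes (following Haarnoja et al., 2018) can be written as

J(π) = Σ_t E_{(s_t, a_t) ∼ ρ_π} [ r(s_t, a_t) + α H(π(· | s_t)) ],

where the temperature α trades off reward against the entropy H of the policy; this entropy term is what drives the more exploratory action selection described above.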
We use the implementations provided by Hill et al. (2018) which are a tuned and improved version of the algorithms provided by Dhariwal et al. (2017). We use the default hyperparameters to investigate whether the algorithms are stable with respect to their hyperparameters. This is important, since either we view hyperparameter search as part of the policy search, or we require the algorithm to be robust to hyperparameter settings over a wide range of environments.
3.3 Q1 COMPARING DATA GENERATION
To compare the exploration, we collect the data the agents see during their learning phase. This data is then analysed for state-space coverage. The coverage is calculated as the percentage of non-empty bins. For simplicity we use uniformly-shaped bins. The number of bins is equal along each state-space dimension and is set to √(10^5/5), i.e. such that, in the uniform case, we expect five data points in each bin on average. We calculate the coverage over time during the learning progress of the agent.
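A minimal sketch of this coverage computation, using NumPy histogramming over the (position, velocity) samples; the variable names are ours.

```python
import numpy as np

def state_space_coverage(states, low, high, n_samples=10**5, per_bin=5):
    """Fraction of non-empty bins of a uniform 2D grid over (position, velocity)."""
    bins_per_dim = int(np.sqrt(n_samples / per_bin))   # sqrt(10^5 / 5) ≈ 141 bins per dimension
    hist, _, _ = np.histogram2d(states[:, 0], states[:, 1],
                                bins=bins_per_dim,
                                range=[[low[0], high[0]], [low[1], high[1]]])
    return np.count_nonzero(hist) / hist.size
```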
To evaluate the reward distributions of the algorithms, the data, (s, a, r, s′) tuples from 11 independent runs, are combined to form one dataset for each algorithm. We look at the distribution of the r values of this dataset.
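The corresponding reward-density estimate can be sketched with SciPy's Gaussian kernel density estimator; again, this is only an illustration of the analysis, not the original code.

```python
import numpy as np
from scipy.stats import gaussian_kde

def log_reward_density(rewards_per_run, grid):
    """Log of the kernel density estimate of rewards pooled over all runs."""
    pooled = np.concatenate(rewards_per_run)    # union of the 11 independent runs
    kde = gaussian_kde(pooled)
    return np.log(kde(grid))
```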
3.4 Q2 SUSCEPTIBILITY TO LOCAL OPTIMA
Reinforcement learning agents collect experience and use that experience to learn a policy, either implicitly in on-policy algorithms such as PPO, or explicitly in algorithms such as DDPG or SAC which use a replay buffer. Thus, the exploration process should reach regions in the state space relevant for the task so that it can learn a well-performing policy. If it cannot reach high-reward areas of the state space, the learned policy will also not move the agent to these regions and therefore the achieved return will be lower.
We perform training runs with PPO, DDPG, SAC and our PPS method. After the agents have learned, we use their policies to generate evaluation returns, which we analyze to compare their performance. This is done on 11 independent learning runs.
While the PPO, DDPG, SAC agents learn directly on the environment, our PPS agent uses the RRT planner to generate data. The generated data is stored in an SAC replay buffer. The replay buffer is fixed – no experience is added, no experience is removed. As a baseline this experiment is also performed for data generated by an SAC agent and an SAC agent on an exploring-starts variant. The data of both is also used in fixed SAC replay buffers to learn policies.
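The indirect training step can be sketched as follows; agent.update is a hypothetical stand-in for one SAC gradient step on a minibatch, since the exact replay-buffer API of the D-RL library is not shown here.

```python
import random

def train_on_fixed_buffer(agent, dataset, n_updates, batch_size=256):
    """Indirect (off-policy) training: the replay buffer is the fixed dataset.

    No new experience is added and none is removed; `agent.update(batch)` is a
    hypothetical method standing in for one SAC gradient step on a minibatch of
    (s, a, r, s', done) transitions.
    """
    for _ in range(n_updates):
        batch = random.sample(dataset, batch_size)
        agent.update(batch)
    return agent
```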
3.5 Q3 REUSING THE COLLECTED DATA
Since our PPS agent uses RRT with uniform sampling for tree extension to generate data, the state-space coverage is independent of the reward. It is therefore interesting to investigate whether this data can be reused more easily.
Similar to the previous experiment, an SAC agent is initialized with a randomly initialized policy, but instead of an empty replay buffer, its buffer is preloaded with 50000 data samples created by one of the methods RRT, SAC, or SAC ex. These samples are randomly shuffled to remove the order of their temporal acquisition. The agent then continues to acquire new environment interaction samples, which gradually replace the prefilled data in a First-In-First-Out (FIFO) fashion, while the policy is gradually updated on the new buffer. The agent is evaluated for another 50000 steps; however, the task is changed by disabling the reward at position 1 (resp. 2). As the buffer contains data about the previous task, only part of the previous dataset is useful whereas many samples are now misleading. If the agent has explored more during the previous task, we expect that its knowledge is more relevant to this second task and it should perform better than an agent that uses data generated in a more exploitation-focused way.
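A rough sketch of this reuse procedure is given below; agent.act and agent.update are hypothetical stand-ins for the SAC policy and its gradient update, and the new-task environment is assumed to follow a Gym-style step/reset interface.

```python
import random
from collections import deque

def continue_training_with_prefill(agent, prefill, env_new_task, n_steps, capacity=50000):
    """Prefill a FIFO replay buffer with old-task data, then train on the modified task."""
    random.shuffle(prefill)                     # remove the temporal order of acquisition
    buffer = deque(prefill, maxlen=capacity)    # old samples are pushed out first-in-first-out
    s = env_new_task.reset()
    for _ in range(n_steps):
        a = agent.act(s)
        s2, r, done, _ = env_new_task.step(a)
        buffer.append((s, a, r, s2, done))      # new experience gradually replaces prefilled data
        agent.update(buffer)                    # hypothetical: one SAC gradient step on the buffer
        s = env_new_task.reset() if done else s2
    return agent
```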
4 RESULTS
4.1 Q1 COMPARING DATA GENERATION
How do D-RL methods compare in terms of exploration to a directed exploration approach, such as RRT? Can we cover a larger area of the state space? Figure 3 shows the coverage
of the visited state space. Note how the RRT algorithm keeps increasing the state-space coverage while the D-RL agents level out. The exploring-starts agents (DDPG, PPO, SAC) follow the uniform sampling curve quite closely in the beginning, even exceeding the exploration of RRT, before the RRT method surpasses their coverage. Note how the RRT method surpasses the exploration of the non-exploring-starts methods from the very beginning. The agents trained with exploring starts (PPO, SAC, DDPG) level out at approximately the same coverage. A possible explanation is that, with exploring starts, the agent starts from a random initial state but then approaches the same favored goal. Thus the increase in coverage is mostly due to the exploring starts rather than the exploration mechanism of the agent. From this we conclude that the exploration capabilities built into methods such as DDPG, PPO and SAC are insufficient and that they mostly depend on the environment (i.e. exploring starts) to generate sufficiently-diverse data.
Do the reward distributions differ? Figure 3 shows how the rewards are distributed in the datasets collected by RRT, DDPG and SAC ex. respectively. For each method, the union of 11 runs is taken and the probability of achieving reward r is calculated. The logarithm of the kernel density estimate of distribution of the rewards r is depicted.
SAC favors the goal 1 position and as such most of the probability mass is concentrated around that reward. SAC ex. favors the second goal and consequently has a higher probability mass on the second goal. RRT collects data independently of the reward, which results in more samples around the higher reward goal (2) than SAC, but less than SAC ex. As such, RRT is more directed in exploring and consequently beats SAC in reaching the second goal. Both SAC and SAC ex. show a peak around their respective favored goal location while the data generated by RRT is independent of the reward and thus shows no such peaks. This also hints at the higher generality of the data generated by RRT which could be reused to achieve different goals, but also shows that very little data is generated around the regions of interest in this task.
4.2 Q2 SUSCEPTIBILITY TO LOCAL OPTIMA
Do state-of-the-art D-RL methods get stuck in local optima? Figure 4 depicts boxplots of the evaluation returns achieved by the D-RL algorithms after training for 10^5 environment steps. Note how DDPG achieves higher rewards without exploring starts, and PPO appears to profit from exploring starts. SAC appears to profit from exploring starts, while otherwise achieving returns in a similar range to DDPG; in some cases it is able to achieve higher returns than both PPO and DDPG.
The figure also contains the evaluation returns of our PPS method, indicated by “RRT indirect SAC” and the direct baseline comparisons where data is generated by SAC and SAC ex., respectively, and is used indirectly to train an SAC policy.
The results show improved performance when training on the fixed replay buffer. They also show performance superior to the policy indirectly trained on SAC data as well as the directly-trained SAC policy. The policies trained on the RRT data even achieve performance comparable to the directly-trained SAC policy with exploring starts.
We use tanh activations in the indirectly-trained policies because we found them to produce results with smaller variance and to perform more robustly.
4.3 Q3 REUSING THE COLLECTED DATA
Can the data collected by PPS be reused more easily? In this experiment we train an SAC agent partially indirectly from a prefilled replay buffer but then continue with regular training, thereby phasing out the prefilled data. Figure 5 shows the evolution of the evaluation return distribution.
The data used for prefilling is generated on the two-goals environment, while the adaptation and evaluation are done on environments where one of the two goal locations is disabled. Therefore, only part of the prefilled data is accurate. In both cases the PPS method, denoted by RRT→SAC, is able to learn good policies, while the SAC→SAC agent has superior performance on the goal-”1”-only environment; it completely fails on the goal-”2”-only environment. The converse happens for the SAC ex.→SAC agent. It is interesting to note that part of the reused data is actually deceptive for the agent because it tries to get the agent to regions where reward can no longer be found. The learning-from-scratch agent is provided as a baseline. Since the modified environment is simpler – only one optimum – it achieves comparable results.
5 DISCUSSION
In this work, we highlighted that standard D-RL algorithms are not immune to getting stuck in suboptimal policies even in a toy problem with two local optima. The agent controlled by PPS explores a wider part of the state space than D-RL methods that focus on reward accumulation, even with exploring starts. The data gathered by RRT is not biased by reward accumulation and is thus more representative of the environment (goal 2 is farther away and thus incurs less reward).
We showed that the policy-learning agent trained on data from RRT performs better in the initial task than SAC but worse than SAC ex. However, on two variations of this task where only one source of reward is available, SAC ex. fails to adapt in half of the new tasks, whereas RRT achieves almost-optimal performance.
This method is thus relevant for robotics settings where the environment might dynamically change and some rewards might not be available after convergence of the robot policy (e.g. two sources of power are available in the environment at the beginning of the task and one becomes depleted during the robot’s life).
This method also has the potential of speeding up domain-randomized training: By randomizing the model and using planning to quickly discover new policies, the method can focus the training on relevant parts of the state space and reduce the number of necessary samples. This will be evaluated in future work.
One limitation is that this evaluation is done on a simple task. It needs to be evaluated in more realistic settings where the state space is more complex and where variations of the task also alter environment dynamics. | 1. How does the reviewer assess the effectiveness of the proposed method in addressing the exploration issue in reinforcement learning?
2. What are the reviewer's concerns regarding the choice of evaluation metric for exploration?
3. How does the reviewer evaluate the performance of the proposed method compared to other approaches, especially SAC from scratch?
4. What are the limitations of the paper regarding its focus on a single task and the implications for generalization to other problems? | Review | Review
The paper is mostly easy to read and I enjoyed reading it. The authors address an important issue of exploration in reinforcement learning and the used of a model-based planner is certainly a promising direction. However, I do have a number of concerns.
1. On Q1. I think the key question here is this -- should state-space coverage be the only measure for effective exploration? The classical dilemma of explore-or-exploit in reinforcement learning is relevant here. From Figure 3, it seems that RRT tends to explore uniformly rather than "intelligently". For problems where there is absolutely no information guiding the exploration process this might be desirable, but then the search complexity will suffer from the curse of dimensionality and there is no evidence in this work that this is a good strategy. Perhaps switching from RRT to RRT* helps but the authors chose not to do it.
2. On Q2. Perhaps I missed something here but other than special cases (e.g. convex problems) almost all gradient-based algorithms suffer from local optimality. I am not sure Q2 is a good question to ask here.
3. On Q3. It seems that SAC from scratch is the best-performing approach here. This particular setting is hardly convincing in motivating the re-use of examples across tasks.
The above concerns, plus the fact that only one particularly simple task is being investigated here, prevent me from recommending acceptance. |
ICLR | Title
POI-Transformers: POI Entity Matching through POI Embeddings by Incorporating Semantic and Geographic Information
Abstract
Point of Interest (POI) data is crucial to location-based applications and various user-oriented services. However, three problems exist in POI entity matching. First, traditional approaches to general entity matching are designed without geographic location information, which ignores the geographic features when performing POI entity matching. Second, the feature design of existing POI matching methods is heavily dependent on the experts’ knowledge. Third, current deep learning-based entity matching approaches have a high computational complexity since all the potential POI entity pairs need to be input to the network. A general and robust POI embedding framework, the POI-Transformers, is proposed in this study to address these problems of POI entity matching. The POI-Transformers can generate semantically meaningful POI embeddings by aggregating the text attributes and geographic location, and minimize the inconsistency of a POI entity by measuring the distance between the newly generated POI embeddings. Moreover, the POI entities are matched by the similarity of POI embeddings instead of directly comparing the POI entities, which greatly reduces the computational complexity. The implementation of the POI-Transformers achieves a high F1 score of 95.8% on real-world data sets (from the Gaode Map and the Tencent Map) in POI entity matching and is comparable to the state-of-the-art (SOTA) entity matching methods DeepER, DeepMatcher, and Ditto (on entity matching benchmark data sets). Compared with existing deep learning methods, our method reduces the effort of identifying one million pairs from about 20 hours to 228 seconds. These results demonstrate that the proposed POI-Transformers framework significantly outstrips traditional methods in both accuracy and efficiency.
1 INTRODUCTION
A Point of Interest (POI) is a dedicated geographic entity that people may be interested in, such as a university, an institute, or a corporate office, and is fundamental to the majority of location-based services (LBS) applications. Generally, a POI entity contains multiple attributes, such as name, category, and geographic location. A collection of comprehensive, reliable, and up-to-date POI data is important to LBS, service capability and user experience (Rae et al., 2012; Zhao et al., 2019a). Therefore, updating the POI database in a timely manner is of substantial significance. In general, POI database updating involves comparing the newly generated POI entities with the existing POI entities and adding the new ones into the database. In this process, POI entity matching is crucial since it needs to discriminate the new POI entities from the old ones based on their attributes.
Traditional POI entity matching algorithms usually involve numerous hand-crafted matching rules associated with attributes (Fu et al., 2011; Safra et al., 2010). The most common idea of POI entity matching is to calculate the similarity of attributes between two POI entities and obtain a final score as a weighted combination of all the attribute similarities. Nevertheless, most of the existing algorithms rely on simple string similarity measures, such as the Levenshtein distance, to calculate the similarity of attributes. This largely neglects the semantic information of the text attributes. Considering this problem, some studies introduced semantic models to POI-related tasks, as semantic models can achieve state-of-the-art performance in natural language processing (NLP) tasks (Zhao et al.,
2019a;b). However, most of these POI-related models are heavily dependent on the experts’ knowledge.
Entity matching has been researched for decades (Barlaug & Gulla, 2021). Current entity matching methods such as Ditto (Li et al., 2020), DeepMatcher (Mudgal et al., 2018), and DeepER (Ebraheem et al., 2018) can compare the similarity between attributes and extract the features of entities through deep learning, and then compare the similarities between potential pairs of entities. Previous studies state that the geographic similarity calculated from the geographic location is a substantially important element in POI entity matching (Almeida et al., 2018; Novack et al., 2018). However, current entity matching methods are mostly designed for general entities without geographic location information. Most entity matching methods learn the features of all attributes equally, and the geographic location features are typically ignored.
Pre-trained transformer networks, such as BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), and XLNet (Yang et al., 2019), achieve state-of-the-art performance in various NLP tasks. A sentence embedding model was proposed by Reimers & Gurevych (2019) to reduce the huge computational overhead in semantic similarity search. Meanwhile, Ebraheem et al. (2018) proposed a simple deep learning method, namely DeepER, to directly compare the similarity between entities by learning and tuning the distributed representations of entities. These studies suggest that a simple deep learning method can be utilized to translate POI entities into POI embeddings by fully incorporating both the semantics of text attributes and geographic location information. Based on the similarity between embeddings, matching over many potential POI pairs can then be carried out efficiently.
In this study, we propose a POI-Transformers framework to generate POI embeddings by fully incorporating the text attributes and geographic locations of POI entities. Experiments show that, after training with the Siamese network architecture, the simple POI-Transformers model integrates semantic and geographic features well, and the newly generated embeddings can fully represent POI entities. The proposed model achieves good performance on the entity matching benchmarks and SOTA performance on the POI entity matching task, and it reduces the effort of comparing many POI pairs.
The main contributions of this study can be summarized as follows:
• We propose a simple model, POI-Transformers, for generating POI entity embeddings, which can fully learn the representation embeddings of POI entities through a transformer-based model and a geographic location encoding module. Since POI-Transformers use a transformer-based network to process text attributes, the proposed model can seamlessly switch to different transformer-based networks and support different languages.
• The POI embeddings generated from POI-Transformers can be used for POI entity matching task in real-world data. These fully learned embeddings can largely reduce the effort for finding the most similar pair from all POI entities.
• We compare the proposed POI-transformers with the traditional POI entity matching methods and entity matching methods. The results show that our model achieves comparable performance to the DeepER, DeepMatcher, and Ditto in the entity matching tasks. In the POI entity matching task, the proposed POI-Transformers achieves better performance than traditional POI matching methods (e.g. rule-based, weighted). These results demonstrate that this proposed framework can fully learn the text attributes and geographic location information in POI entity matching. Meanwhile, it further implies that adding a domain knowledge module to the original entity matching model might achieve a better performance in the field of entity matching.
2 RELATED WORK
2.1 ENTITY MATCHING
Here, we summarize the methods used for the entity matching task, which aims to solve the problem of identifying records that refer to the same real-world entity (Barlaug & Gulla, 2021).
The attribute-aligned comparison strategy is commonly employed in entity matching methods. This strategy compares attributes one-to-one and further combines the similarity representations at the record level (Barlaug & Gulla, 2021). Specifically, the rule-based method associated with the attribute-aligned comparison strategy is the most classic entity matching method since it is easy to understand and develop (Hernández & Stolfo, 1998; Lim et al., 1996; Wang & Madnick, 1989). Nevertheless, owing to the considerable expert experience required for designing and modifying rules in rule-based methods, methods based on machine learning (especially deep learning) have gradually been developed to automatically learn the features of entities. For example, DeepMatcher (Mudgal et al., 2018), Kasai et al. (2019) and Auto-EM (Zhao & He, 2019) utilized deep learning to compare attributes one-to-one before comparing the similarity of records.
To achieve better language understanding, some studies introduced cross-record attention for entity matching (Barlaug & Gulla, 2021). Seq2SeqMatcher (Nie et al., 2019), Ditto (Li et al., 2020) and Brunner & Stockinger (2020) used attention mechanisms to capture semantic features of all words across the compared records. They treat the entity matching task as a sequence-pair matching task by serializing the entity pairs into sentences and inputting these sentences into transformer networks. At present, by combining cross-record attention with multiple optimization techniques (domain knowledge, etc.), Ditto has achieved SOTA performance on the entity matching benchmark.
Both attribute-aligned and cross-record attention methods need to input entity pairs into the model simultaneously, which incurs a large amount of computational effort in entity matching. Therefore, some studies have proposed approaches to alleviating this problem by comparing the representations of entities. With entity representation methods, it is possible to generate a representation of each record and directly obtain the similarity between entity pairs (Barlaug & Gulla, 2021). DeepER (Ebraheem et al., 2018) and AutoBlock (Zhang et al., 2020) applied bidirectional LSTMs and self-attention to get record-level embedding representations, which can achieve good performance in entity matching tasks with low time complexity.
2.2 POI ENTITY MATCHING
POI entity matching can be regarded as a special case of entity matching on POIs. As far as we know, the majority of current methods depend on attribute-aligned comparison (including rule-based and machine learning-based methods). McKenzie et al. (2014) proposed a weighted combination model on multiple attributes (e.g., name, type, and geographic location) of POIs, and achieved high accuracy on a Foursquare and Yelp dataset. Li et al. (2016) proposed an entropy-weighted method for POI matching by integrating attributes whose weights are allocated via information entropy. This entropy-weighted method was applied to Baidu and Sina POI matching and achieved good performance. Meanwhile, some studies applied weighted summation based on graph methods, in which the weights of different attributes can further be obtained in an unsupervised way. For example, Novack et al. (2018) presented a graph-based POI matching method with two matching strategies, in which POIs are regarded as nodes and matching possibilities as edges. Almeida et al. (2018) first proposed a data-driven learning method for automatic POI matching based on an outlier detection algorithm. However, the feature design of these methods is heavily dependent on the experts’ knowledge.
To improve accuracy, text semantic methods have also been applied to the POI matching task. Dalvi et al. (2014) considered both domain knowledge and geographical knowledge and presented an unsupervised POI matching model based on a language model. They assign weights to different words in POI names, and their method can achieve an accuracy of about 90% in POI deduplication. Yu et al. (2018) proposed an approach based on semantic technologies to automate POI matching and conflation, which achieved a conflation accuracy of 98% on shopping center POIs. However, as far as we know, employing POI embeddings for the POI matching task, which this paper aims to explore, has not been covered by existing studies.
3 MODEL ARCHITECTURE
The architecture of the POI-Transformers is shown in Figure 2. It is a combination of a transformer-based model and a geographic location embedding module, and it is an extension of general entity matching. In this work, we aim to achieve POI entity matching by incorporating semantic and geographic information. Firstly, semantic feature vectors are extracted from the text attributes (name, category, address, etc.) of the POI entity by using a Transformer-based model (BERT, etc.) and further pooled into fixed-size attribute embeddings. Meanwhile, we design a geographic location embedding module to translate the two-dimensional geographic location (longitude and latitude) into meaningful embeddings. Secondly, a transformer encoder layer is employed to encode the text embeddings and location embeddings with a multi-head attention mechanism. Finally, a pooling layer and a fully connected layer are adopted to obtain POI entity embeddings.
Figure 2(B) describes the specific POI-Transformers used for evaluation in this study. In this framework, we consider the text attributes of name, category and address in the Transformer-based model, as these attributes are the most important for POI entities. Combined with the geographic information (longitude and latitude), the three text attributes can be used to identify a POI entity in the real world. In the training process, Siamese networks are adopted to update the weights of the semantic and geographic attributes to ensure that the newly generated POI embeddings are semantically and geographically meaningful and valid under similarity metrics (such as cosine, Euclidean).
3.1 TEXT EMBEDDING MODULE
The text embedding module designed in our study attempts to translate the multiple text attributes of POI entities into semantic embeddings through the transformer-based network. Transformer-based pre-trained models, such as BERT and RoBERTa, can achieve the state-of-the-art performance, which in turn makes the transformer-based models widely used. The SOTA entity matching method Ditto has proved that transformer-based networks can fully learn the knowledge from entity attributes by treating entity-pair as sequence-pair (Li et al., 2020).
Here, we consider each POI text attribute as a sentence and generate a corresponding embedding that can represent this text attribute. In this study, a transformer-based network is employed to extract the semantic text embeddings of POI text attributes, such as name, category, and address. After the transformer-based network, we further utilize a pooling layer to derive fixed-size semantic vectors of the POI text attributes. In the pooling layer, the output of the special CLS token is not used to represent the text since there is no evidence showing the embedding of the CLS token is semantically meaningful (Reimers & Gurevych, 2019). Instead, mean-strategy pooling is utilized in the pooling layer of the POI-Transformers framework. This means that the average of the embeddings of all tokens is used as the embedding of the POI text attribute. In addition, to simplify the model and maintain the consistency of POI embeddings, only one transformer-based network is utilized for extracting the text attributes.
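A PyTorch sketch of this mean-strategy pooling (masking out padding tokens) is shown below; the tensor names are ours.

```python
import torch

def mean_pooling(token_embeddings, attention_mask):
    """Average the token embeddings of one text attribute, ignoring padding.

    token_embeddings: (batch, seq_len, hidden) output of the transformer.
    attention_mask:   (batch, seq_len) with 1 for real tokens and 0 for padding.
    Returns a fixed-size (batch, hidden) embedding per attribute.
    """
    mask = attention_mask.unsqueeze(-1).float()               # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)                  # avoid division by zero
    return summed / counts
```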
3.2 GEOGRAPHIC LOCATION EMBEDDING MODULE
The geographic location of a POI is two-dimensional spatial information consisting of longitude and latitude. To obtain the geographic information of a POI, we design a geographic location embedding module in the POI-Transformers framework to translate the two-dimensional location into geographically meaningful embeddings, which makes it easier to identify the difference between input geographic locations.
Here, we generate meaningful geographic vectors for the longitude and latitude of a POI by utilizing a location encoding method, the GeoHash (Liu et al., 2014) algorithm, which can encode the numerical longitude and latitude of a specific region on the Earth into strings. In this study, the primary purpose of the GeoHash in the POI-Transformers framework is to convert the longitude and latitude into binary vectors. To be specific, for a given geographic location (lon1, lat1), the location encoding layer in the GeoHash algorithm recursively divides the longitude range into intervals and marks the longitude code with 0 if lon1 belongs to the left interval. If lon1 belongs to the right interval, the longitude code is marked with 1. When the number of divisions reaches the set condition, a code similar to 1101001 is obtained. The binary code of the latitude is obtained in the same way as the longitude code. A longer binary array implies a more precise geographic location. When the number of bisections reaches 30, the maximum error is approximately 0.0186 meters. Therefore, the code ’0’ can represent the longitude range (-180, 0), and the code ’00’ can represent the longitude range (-180, -90). Similarly, the code ’0’ represents the latitude range (-90, 0) while the code ’00’ represents the latitude range (-90, -45).
After we obtain the binary arrays of longitude and latitude, we can generate a geographic binary array with the longitude bits occupying the even digits and the latitude bits occupying the odd digits. For instance, if the longitude binary code ’0’ represents the longitude range -180 to 0 and the latitude binary code ’1’ represents the latitude range 0 to 90, then the geographic binary array ’01’ can represent a region whose longitude ranges from -180 to 0 and whose latitude ranges from 0 to 90. More details can be found in appendix A.2. After the location encoding layer, a fully connected layer is added for obtaining location embeddings with the same dimension as the semantic vectors.
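A sketch of this encoding step is given below: binary bisection of longitude and latitude followed by bit interleaving with the longitude bits on the even positions; the function names are ours, and the bit count matches the 30-bisection setting mentioned above.

```python
def binary_encode(value, low, high, n_bits=30):
    """Recursively bisect [low, high]; emit 0 for the left half, 1 for the right half."""
    bits = []
    for _ in range(n_bits):
        mid = (low + high) / 2.0
        if value < mid:
            bits.append(0)
            high = mid
        else:
            bits.append(1)
            low = mid
    return bits

def geo_binary_vector(lon, lat, n_bits=30):
    """Interleave longitude bits (even positions) and latitude bits (odd positions)."""
    lon_bits = binary_encode(lon, -180.0, 180.0, n_bits)
    lat_bits = binary_encode(lat, -90.0, 90.0, n_bits)
    interleaved = []
    for lo, la in zip(lon_bits, lat_bits):
        interleaved.extend([lo, la])
    return interleaved          # length 2 * n_bits, fed to a fully connected layer
```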
3.3 EMBEDDING FUSION MODULE
We then incorporate all the embeddings at the POI entity level in the embedding fusion module to ensure the matched POI entities have a large cosine similarity.
The linkages between the different attributes of each POI entity can facilitate measuring the cosine similarity between POI entities. Specifically, there are linkages between (i) category and name: for one thing, each category, such as hospital, university and shopping mall, may contain various names of POI entities; for another, one POI entity name is likely to belong to different categories; and (ii) geographic location and address: the address of POI entities can be obtained by the longitude and
latitude from the interface of the electronic map. In turn, the longitude and latitude can also be searched by the address of POI entities through the interface of the electronic map. Hence, these linkages between the attributes can help discriminate between various POI entities. Moreover, a rule-based method with a weighted average of the similarity scores of all the attributes is generally used for measuring the similarity of two POI entities. Nevertheless, in traditional POI entity matching, the weights of all the attributes of POI entities are manually set based on prior experience.
To learn the linkage knowledge between the attributes of POI entities, we introduce a transformer encoder layer with multi-head self-attention in the embedding fusion module. The self-attention mechanism can link the different parts of a single sequence to obtain a representation of the sequence. This means that we can obtain a representation of the linkages between different attributes when inputting the attributes of POI entities into the self-attention. Hence, the linkages between the attributes of each POI entity can be fully learned by using the multi-head self-attention mechanism. In addition, the attention mechanism can automatically adjust the weights of all attributes of POI entities. In natural language processing, the core function of the attention mechanism is to weight the inputs by learning the importance of different parts of a sentence (Vaswani et al., 2017). Compared with fixed weights set manually, weights based on the importance of attributes are more reasonable. Furthermore, in order to obtain fixed-size POI embeddings, we introduce a pooling layer after the transformer encoder layer.
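A minimal PyTorch sketch of this fusion step is shown below (one transformer encoder layer with multi-head self-attention over the per-attribute embeddings, mean pooling, and a fully connected layer); the hidden size and the number of heads are assumed values.

```python
import torch
import torch.nn as nn

class EmbeddingFusion(nn.Module):
    """Fuse per-attribute embeddings (name, category, address, location) into one POI embedding."""

    def __init__(self, dim=768, n_heads=8, out_dim=768):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.fc = nn.Linear(dim, out_dim)

    def forward(self, attribute_embeddings):
        # attribute_embeddings: (batch, n_attributes, dim), one row per POI attribute
        fused = self.encoder(attribute_embeddings)   # multi-head self-attention links the attributes
        pooled = fused.mean(dim=1)                    # pooling layer -> fixed-size representation
        return self.fc(pooled)                        # fully connected layer -> POI entity embedding
```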
4 EXPERIMENTS
4.1 DATA SETS
We experimented with all the 12 publicly available entity matching data sets used for evaluating Ditto (Li et al., 2020) and DeepMatcher (Mudgal et al., 2018) and a POI entity matching data set generated by ourselves.
For entity matching data sets, each of them consists of the candidate pairs sampled and labeled from two structured entity record tables. In addition, similar to the Ditto and DeepMatcher, we also use the dirty version of the DBLP-ACM, DBLP-Scholar, iTunes-Amazon, and Walmart-Amazon data sets to evaluate the robustness of the proposed model. These dirty data sets are generated by randomly moving each attribute value to the attribute title with a 50% probability. The Abt-Buy data set is dominated by texts and is characterized by the long text attribute. The overview of all the entity matching data sets can be found in appendix A.1.
In this study, we annotated a POI entity matching data set, QM-GD-POI, generated from a POI data set of the Tencent Map (QM POI, https://map.qq.com/) and a POI data set of the Gaode Map (GD POI, https://www.amap.com/). All POI entities contain five attributes: name, category, address, longitude and latitude. We use the open POI query APIs of Tencent Map and Gaode Map to obtain 7,103 and 6,868 POI entities respectively. Then, we sampled and labeled 9,606 candidate pairs from these two newly generated POI data sets. We also generated a dirty version of QM-GD-POI. Since the attributes of name, longitude and latitude in the POI data set are generally not missing, we remove the type and address attributes with a 50% probability to generate a dirty data set.
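One reading of this dirty-set construction is sketched below, dropping the type and address attributes independently with 50% probability each; the attribute keys are assumed names, not the original code.

```python
import random

def make_dirty(poi, drop_prob=0.5):
    """Return a copy of a POI record with type and address each removed with probability drop_prob."""
    dirty = dict(poi)
    for attr in ("type", "address"):   # name, longitude and latitude are kept intact
        if random.random() < drop_prob:
            dirty[attr] = ""
    return dirty
```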
The training, validation, and test sets of 12 publicly entity matching data sets are set at the ratio of 3:1:1. In the structured and dirty QM-GD-POI data sets, we used the ratio 6:1:3 to construct training, validation and test sets.
4.2 EXPERIMENT SETUP
We implemented POI-Transformers in PyTorch (Paszke et al., 2019) and the Transformers (Wolf et al., 2020) library. We currently use the BERT-Base Chinese model as the base model to extract text semantic features. Further, the BERT-Base Chinese model can replace with other transformerbased pre-training models. We conducted all experiments on a server with Intel i9-10850K CPU @ 3.6GHZ, 64GB memory, NVIDIA GeForce RTX 3090 GPU.
We compared the proposed POI-Transformers with the existing entity matching methods, such as DeepMatcher, Ditto, Magellan, and DeepER and POI entity matching methods Rule-based,
Weighted, and iForest on the POI entity matching dataset. We also compared a variant of POI-Transformers without the Geographic Location Embedding Module (POI-Transformers*).
DeepMatcher: DeepMatcher (Mudgal et al., 2018) is one of the SOTA deep learning-based entity matching approaches. DeepMatcher customizes the RNN to conduct attribute-aligned similarity representation of attributes, and then aggregates the representations of attributes to obtain entity similarity representation between entities.
Ditto: Ditto (Li et al., 2020) is the SOTA entity matching system based on pre-trained Transformerbased language models. Ditto considers the entity matching task as a sequence classification task by splicing entity pairs into sequences. Meanwhile, Ditto developed three optimization techniques (domain knowledge, TF-IDF summarization, and data augmentation) to improve the performance. We use the full version of Ditto with all 3 optimizations in this study.
Magellan: Magellan (Konda, 2018) is a SOTA traditional, non-neural entity matching system. It computes similarity features between attributes (Levenshtein distance, etc.) and then uses these features to build random forest, logistic regression, and other traditional machine learning models for entity matching. After model selection, the random forest performed best on our POI entity matching data set, so we report Magellan's F1 score on POI entity matching obtained with the random forest.
DeepER: DeepER (Ebraheem et al., 2018) uses bidirectional RNN with LSTM hidden units on word embeddings to translate each entity to a representation vector. It achieves good accuracy and high efficiency in entity matching tasks.
POI-Transformers: The full version of our proposed model with the Geographic Location Embedding Module. In POI entity matching, we used cosine similarity and the SentEval toolkit (Conneau & Kiela, 2018) to evaluate the POI embeddings obtained by the POI-Transformers. When evaluating with cosine similarity, we set a matching threshold: entity pairs whose embedding cosine similarity is higher than the threshold are considered positive matching pairs. SentEval is an evaluation toolkit for assessing the quality of embeddings; we used its logistic regression classifier to evaluate the POI embeddings for POI entity matching and the entity embeddings for entity matching. To train the POI-Transformers framework, we use the softmax objective to update the weights of the POI embeddings, as in Sentence-BERT (Reimers & Gurevych, 2019).
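As a concrete illustration, the cosine-similarity evaluation reduces to a thresholding step over precomputed POI embeddings; the threshold value below is illustrative and would be chosen on the validation set in practice.

```python
import torch
import torch.nn.functional as F

def match_by_cosine(emb_a, emb_b, threshold=0.9):
    """Pairs whose POI-embedding cosine similarity exceeds the threshold are predicted matches."""
    sim = F.cosine_similarity(emb_a, emb_b, dim=-1)
    return sim >= threshold

# emb_a, emb_b: (num_pairs, dim) embeddings of candidate pairs from the two sources
predictions = match_by_cosine(torch.randn(5, 768), torch.randn(5, 768))
```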
POI-Transformers*: In this variant, the Geographic Location Embedding Module is removed, and the longitude and latitude are fed directly into the Text Embedding Module to obtain the representation embeddings.
Rule-based: We designed a rule-based method for POI entity matching. In this method, we first compute the similarity of the name, category, address, and distance between the POI entity pairs. Then, we take a weighted sum of the attribute similarities to obtain the similarity between POI entity pairs. The weights of the name, category, address, and distance similarities were set to 0.65, 0.1, 0.1, and 0.15, respectively, according to expert knowledge.
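A sketch of the scoring step of this baseline is shown below; how each attribute similarity is computed (string similarity for text, a distance-based score for location) is omitted, and the inputs are assumed to be normalized to [0, 1].

```python
def rule_based_score(sim_name, sim_category, sim_address, sim_distance):
    """Weighted sum of attribute similarities using the expert-set weights."""
    return (0.65 * sim_name + 0.10 * sim_category +
            0.10 * sim_address + 0.15 * sim_distance)

# A pair with very similar names and nearby coordinates scores close to 1
score = rule_based_score(sim_name=0.92, sim_category=1.0, sim_address=0.75, sim_distance=0.95)
```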
Weighted: Li et al. (2016) proposed an Entropy-Weighted method for POI entity matching. This method first computes the similarity of attributes between POI entity pairs and then allocates a weight to each attribute's similarity by information entropy.
iForest: Almeida et al. (2018) proposed an outlier-detection-based approach to POI entity matching. This method first computes the similarity of the name, website, address, category, and geographic coordinates between POI entity pairs, and then obtains the similarity between POI entity pairs using the iForest method.
BERT: We fine-tune the pre-trained BERT (Devlin et al., 2018) model on our POI matching data set as a classification task. We construct POI sentences by concatenating the name, category, address, longitude, and latitude as input and obtain the similarity of two POI sentences.
4.3 RESULTS
All models were trained for 20 epochs, and we report the checkpoint with the highest F1 score on the validation set. Table 1 and Table 2 show the results on the entity matching data sets and the POI entity matching data sets, respectively. We report the F1 scores of DeepMatcher, Ditto, Magellan, and DeepER on the entity matching data sets from Li et al. (2020) and Barlaug & Gulla (2021).
As shown in Table 1, due to the powerful learning ability of deep learning, the deep learning-based models (Ditto, BS, and POI-Transformers) achieve better performance in entity matching. Meanwhile, we find that the attribute-aligned comparison methods (DeepMatcher) and cross-record interaction methods (Ditto, BS) based on deep learning generally achieve better performance than the methods based on entity representation. In addition, the POI-Transformers proposed in this study achieves better performance than the existing entity representation method (DeepER) and the traditional method (Magellan), and on some data sets it even outperforms DeepMatcher. These results suggest that, in general entity matching, entity representation methods such as the POI-Transformers currently have no accuracy advantage over attribute-aligned and cross-record methods. However, the reduction in computation achieved by entity representation methods cannot be ignored, especially in POI entity matching tasks with large real-world data sets.
We can also see that Ditto and BS outperform the other models on the textual data set Abt-Buy. This is possibly because attribute-aligned methods and entity representation methods need to transform attribute text into other forms: when the training set is not large enough, the learned features cannot fully represent an attribute. Cross-record interaction models can directly attend to the original attributes across records and thus obtain more meaningful features. In practice, however, POI entities contain no long text; all attributes are short text.
Table 2 shows the results on the POI entity matching data sets. We find that Ditto, Magellan, and POI-Transformers achieve better performance than the other models in the POI entity matching task. POI-Transformers* has no advantage over the traditional models, since the Geographic Location Embedding Module is removed. Using the cosine similarity between POI embeddings directly for POI entity matching performs better than the traditional Rule-based, Weighted, and iForest methods, but worse than the other deep learning models. When we use SentEval to evaluate the embeddings generated by POI-Transformers, its performance is slightly better than the SOTA entity matching method Ditto, but slightly worse on the dirty data version. This indicates that the POI-Transformers proposed in this study achieves SOTA performance in POI entity matching after adding the Geographic Location Embedding Module. Meanwhile, these results also suggest that POI-Transformers is better suited to structured data and needs to be improved on dirty data. However, as far as we know, there is generally not much dirty data in real-world POI data sets.
In order to evaluate the computational efficiency of the different models, we selected 100, 500, and 1,000 records from Tencent Map POI and Gaode Map POI respectively to form 10,000, 250,000, and 1,000,000 matching pairs. Table 3 shows the computation time for different numbers of matching pairs in POI entity matching. The computational cost of the traditional POI entity matching methods is very low, but Table 2 shows that their accuracy is the worst. When the cosine similarity of POI embeddings is used directly for POI entity matching, the computation is lower than that of Magellan as the data set grows. The three deep learning models, especially the transformer-based models Ditto and BERT, require a large amount of computation (approximately 20 hours and 17 hours, respectively). When using SentEval to evaluate the embeddings generated by POI-Transformers, it takes less than 500 seconds to compare one million matching pairs. These results demonstrate that our POI-Transformers has advantages in both accuracy and computational efficiency in the POI entity matching task and is suitable for practical deployment.
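The efficiency gain comes from the fact that each record is embedded only once; comparing all N x M candidate pairs then reduces to a single matrix multiplication, as in this sketch.

```python
import torch
import torch.nn.functional as F

def all_pair_similarities(emb_src, emb_tgt):
    """Cosine similarities between every source/target POI embedding pair."""
    a = F.normalize(emb_src, dim=-1)   # (N, dim), embeddings computed once per record
    b = F.normalize(emb_tgt, dim=-1)   # (M, dim)
    return a @ b.T                     # (N, M) similarity matrix

# 1,000 x 1,000 records -> 1,000,000 candidate pairs scored in one call
sims = all_pair_similarities(torch.randn(1000, 768), torch.randn(1000, 768))
```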
5 CONCLUSIONS
In this paper, we propose a novel model, the POI-Transformers, for the POI matching task based on pre-trained Transformer-based language models. The model uses a simple architecture to effectively combine semantic and geographic features into meaningful POI entity embeddings. POI entities are matched by the similarity of their POI embeddings instead of by directly comparing the entities, which greatly reduces the computational complexity. The experimental results show that the proposed POI-Transformers is comparable to SOTA entity matching models (DeepER, DeepMatcher, and Ditto) on entity matching tasks. Moreover, our model achieves the highest F1 score on natural-scene data sets in POI entity matching and reduces the effort for identifying one million pairs from about 20 hours to 228 seconds. The high accuracy and efficiency of the POI-Transformers make it practical to deploy on real-world data sets. In addition, our results demonstrate that fusing domain knowledge into a deep learning model can achieve better results in specific entity matching tasks.
A APPENDIX
A.1 OVERVIEW OF DATA SETS
Table 4 shows an overview of the publicly available entity matching data sets (from Barlaug & Gulla (2021) and Li et al. (2020)) and the POI entity matching data set (QM-GD-POI) generated by ourselves.
A.2 DETAIL OF GEOGRAPHIC LOCATION EMBEDDING MODULE
As illustrated in Figure 3, the left interval of longitude is set as (-180, 0) by the GeoHash algorithm and the right interval is set as (0, 180). Similarly, the left interval of latitude is divided into (-90, 0) while the corresponding right interval is (0, 90). As a result, “01” represents the area where longitude is from -180 to 0 degrees and latitude is from 0 to 90 degrees. As for the “01” area, the GeoHash algorithm continues to bisect the latitude and longitude of this region and the “0101” denotes the area where the longitude ranges from (-180, -90), and the latitude ranges from (45, 90).
Through continued bisection in the GeoHash algorithm, any geographic location on the Earth can be encoded into a unique binary array. A longer binary array implies a more precise geographic location; when the number of dichotomies reaches 30, the maximum error is approximately 0.0186 meters. After obtaining the binary arrays of longitude and latitude, we generate a geographic binary array with the longitude bits occupying the even digits and the latitude bits occupying the odd digits. For instance, in Figure 3(A) the longitude binary code of the top-left region is '0' and the latitude binary code is '1', so the geographic binary array '01' represents the top-left region. After the location encoding layer, a fully connected layer is added to obtain location embeddings with the same dimension as the semantic vectors.
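A minimal sketch of this location encoding (GeoHash-style bisection followed by bit interleaving) is given below; the function names are illustrative.

```python
def binary_encode(value, lo, hi, bits):
    """Recursively bisect the interval: 0 = left half, 1 = right half."""
    code = []
    for _ in range(bits):
        mid = (lo + hi) / 2
        if value < mid:
            code.append(0)
            hi = mid
        else:
            code.append(1)
            lo = mid
    return code

def geo_binary_array(lon, lat, bits=30):
    """Interleave longitude bits (even digits) and latitude bits (odd digits)."""
    lon_bits = binary_encode(lon, -180.0, 180.0, bits)
    lat_bits = binary_encode(lat, -90.0, 90.0, bits)
    interleaved = []
    for lon_bit, lat_bit in zip(lon_bits, lat_bits):
        interleaved.extend([lon_bit, lat_bit])
    return interleaved

# With 12 bits per coordinate this matches the worked example discussed below
print(geo_binary_array(-5.6, 42.6, bits=12))
```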
As an example from Wikipedia (https://en.wikipedia.org/wiki/Geohash), the encoded longitude "0111 1100 0000" represents an area with longitude from -5.625 to -5.449 degrees, with a maximum error of 0.044 degrees (about 4,400 meters) after 12 binary divisions (Table 6), and the encoded latitude "1011 1100 1001" represents an area with latitude from 42.539 to 42.627 degrees (Table 5). We then generate a geographic binary array with the longitude bits occupying the even digits and the latitude bits occupying the odd digits. With this criterion, the geographic binary array of the above example is "0110 1111 1111 0000 0100 0001".
1. What is the focus and contribution of the paper on POI entity matching?
2. What are the strengths of the proposed approach, particularly in terms of embedding both text attributes and geographic location?
3. What are the weaknesses of the paper, especially regarding the experiments and their relevance to POI matching?
4. Do you have any concerns about the effectiveness of the proposed method for POI matching, especially when using only latitude and longitude attributes without text embedding?
Summary Of The Paper
The paper proposes a POI embedding framework, the POI-Transformers, to address the POI entity matching problem. The POI-Transformers generates POI embeddings by aggregating the text attributes and geographic location. Then, the POI entities are matched by the similarity of the POI embeddings.
Review
Strengths: The paper proposed a new model to achieve POI entity matching by embedding both its text attributes and geographic location.
Weaknesses:
Figure 1 is not explained or cited in the paper.
The paper is for POI entity matching, but most experiments, e.g. DBLP-ACM, DBLP-Scholar, and iTunes-Amazon, are irrelevant to POIs.
For the only POI matching experiment, the proposed model only has similar or even worse results.
I highly doubt that using only Latitude and Longitude attributes without text embedding can achieve very good results for POI matching. |
1. What is the focus of the paper, and what are the authors' contributions to the field?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its technical contribution and performance compared to other works?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. Are there any concerns or questions regarding the method's ability to integrate textual and geolocation information effectively?
Summary Of The Paper
In this paper, the authors proposed a transformer-based model for integrating both the textual and geolocation information.
Review
Strengths:
The studied topic is important.
The writing is easy to follow.
Weaknesses:
The technical contribution is limited. All the techniques introduced in this paper, especially the transformer-based entity matching method, have been well studied in other works. The key claimed contribution is integrating geolocation information, but the integration of textual and geolocation information is done by directly applying attention mechanisms over them. The concern here is that the textual embedding and the geolocation embedding come from two modalities, which should lie in two different embedding spaces; attention without a space transformation may not be appropriate.
The performance of the proposed method cannot beat the literature in some cases. The results may not support the effectiveness of the method. |
ICLR | Title
POI-Transformers: POI Entity Matching through POI Embeddings by Incorporating Semantic and Geographic Information
Abstract
Point of Interest (POI) data is crucial to location-based applications and various user-oriented services. However, three problems are existing in POI entity matching. First, traditional approaches to general entity matching are designed without geographic location information, which ignores the geographic features when performing POI entity matching. Second, existing POI matching methods for feature design are heavily dependent on the experts’ knowledge. Third, current deep learning-based entity matching approaches require a high computational complexity since all the potential POI entity pairs need input to the network. A general and robust POI embedding framework, the POI-Transformers, is initially proposed in this study to address these problems of POI entity matching. The POI-Transformers can generate semantically meaningful POI embeddings through aggregating the text attributes and geographic location, and minimize the inconsistency of a POI entity by measuring the distance between the newly generated POI embeddings. Moreover, the POI entities are matched by the similarity of POI embeddings instead of directly comparing the POI entities, which can greatly reduce the complexity of computation. The implementation of the POI-Transformers achieves a high F1 score of 95.8% on natural scenes data sets (from the Gaode Map and the Tencent Map) in POI entity matching and is comparable to the state-of-the-art (SOTA) entity matching methods of DeepER, DeepMatcher, and Ditto (in entity matching benchmark data set). Compared with the existing deep learning methods, our method reduces the effort for identifying one million pairs from about 20 hours to 228 seconds. These demonstrate that the proposed POI-Transformers framework significantly outstrips traditional methods both in accuracy and efficiency.
1 INTRODUCTION
A Point of Interest (POI) is a dedicated geographic entity that people may be interested in, such as a university, an institute, or a corporate office, and is fundamental to the majority of location-based services (LBS) applications. Generally, a POI entity contains multiple attributes, such as name, category, geographic location. A collection of comprehensive, reliable, and up-to-date POI data is important to LBS, service capability and user experience (Rae et al., 2012; Zhao et al., 2019a). Therefore, updating the POI database in timely is substantial significant. In general, POI database updating is comparing the newly generated POI entities with the existing POI entities and adding the new ones into the database. In this process, POI entity matching is crucial since it needs to discriminate the new POI entities from the old ones based on their attributes.
Traditional POI entity matching algorithms usually involve numerous artificial matching rules associated with attributes (Fu et al., 2011; Safra et al., 2010). The most common idea of POI entity matching is calculating the similarity of attributes between two POI entities and obtaining the final score of all the similarities of attributes with weights. Nevertheless, most of the existing algorithms involve simple string similarity algorithms, such as Levenshtein distance, to calculate the similarity of attributes. This largely neglects the semantic information of the text attributes. Considering this problem, some studies introduced semantic models to the POI-related tasks as the semantic model can achieve state-of-art performance in natural language processing tasks (NLP) (Zhao et al.,
2019a;b). However, most of these POI-related models are heavily dependent on the experts’ knowledge.
Entity matching has been researched for decades (Barlaug & Gulla, 2021). Current entity matching methods such as Ditto (Li et al., 2020), DeepMatcher (Mudgal et al., 2018), and DeepER (Ebraheem et al., 2018), can compare the similarity between attributes and extract the features of entities through deep learning, and then compare the similarities between potential pairs of entities. Previous studies state that the geographic similarity calculated from the geographic location is a substantially important element in POI entity matching (Almeida et al., 2018; Novack et al., 2018). However, current entity matching methods are mostly designed for general entities without geographic location information. Most of the entity matching methods, in general, learn the features from the attributes equally but the geographic location features are always ignored.
The pre-trained transformer network, such as BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), and XLNet (Yang et al., 2019), can achieve state-of-the-art performances in various NLP tasks. A sentence embedding model was proposed by Reimers & Gurevych (2019) for solving the huge computational overhead in semantic similarity search. Meanwhile, Ebraheem et al. (2018) proposed a simple deep learning method, namely DeepER, to directly compare the similarity between entities by learning and tuning the distributed representations of entities. These studies demonstrated that a simple deep learning method can be utilized to translate the POI entities into POI embeddings through fully involving both the semantic of text attributes and geographic location information. Based on the similarity between embeddings, it can be efficiently carried out in many potential matching pairs in POI entity matching.
In this study, we propose a POI-Transformers framework to generate POI embeddings by completely involving the text attributes and geographical locations of POI entities. Experiments show that after training by the Siamese network architecture, the simple model POI-Transformers can well integrate semantic and geographical features, and the newly generated embeddings can fully represent POI entities. The proposed model achieves good performance in entity matching benchmark and SOTA performance in POI entity matching task, and reduces the effort for comparing many POI pairs.
The main contributions of this study can be summarized as follows:
• We propose a simple model, POI-Transformers, for generating POI entity embeddings, which can fully learn the representation embeddings of POI entities by the transformerbased model and geographic location encoding module. Since POI-Transformers use the transformer-based network to process text attributes, this proposed model can seamlessly switch to different transformer-based networks and support different languages.
• The POI embeddings generated from POI-Transformers can be used for POI entity matching task in real-world data. These fully learned embeddings can largely reduce the effort for finding the most similar pair from all POI entities.
• We compare the proposed POI-transformers with the traditional POI entity matching methods and entity matching methods. The results show that our model achieves comparable performance to the DeepER, DeepMatcher, and Ditto in the entity matching tasks. In the POI entity matching task, the proposed POI-Transformers achieves better performance than traditional POI matching methods (e.g. rule-based, weighted). These results demonstrate that this proposed framework can fully learn the text attributes and geographic location information in POI entity matching. Meanwhile, it further implies that adding a domain knowledge module to the original entity matching model might achieve a better performance in the field of entity matching.
2 RELATED WORK
2.1 ENTITY MATCHING
Here, we summarize the entity matching methods used for the entity matching task, which aims to solve the problem of identifying entities from the real world (Barlaug & Gulla, 2021).
The attributed-aligned comparison strategy is commonly employed in entity methods. This strategy compares attributes in a one-to-one, and further combines the similarity representation on the record level (Barlaug & Gulla, 2021). Specifically, the rule-based method associated with the attributedaligned comparison strategy is the most classic entity matching method since it is easy to understand and develop (Hernández & Stolfo, 1998; Lim et al., 1996; Wang & Madnick, 1989). Nevertheless, owing to much expert experience required for modifying rules in the rule-based methods, methods based on machine learning (especially deep learning) are gradually developed to automatically learn the features of entities. For example, DeepMatcher (Mudgal et al., 2018), Kasai et al. (2019) and Auto-EM (Zhao & He, 2019) utilized the deep learning method to compare attributes one-to-one before comparing the similarity of records.
To capture better language understanding, some studies introduced cross-record attention for entity matching (Barlaug & Gulla, 2021). Seq2SeqMatcher (Nie et al., 2019), Ditto (Li et al., 2020) and Brunner & Stockinger (2020) use attention mechanisms to capture semantic features of all words across the compared records. They treat the entity matching task as a sequence-to-sequence matching task by serializing the entity pairs into sentences and feeding these sentences into transformer networks. At present, by combining cross-record attention with multiple optimization techniques (domain knowledge, etc.), Ditto has achieved SOTA performance on the entity matching benchmark.
Both attribute-aligned and cross-record attention methods need to feed entity pairs into the model simultaneously, which incurs a large amount of computation in entity matching. Therefore, some studies have proposed approaches that alleviate this problem by comparing representations of the entities. Entity representation methods generate a representation of each record and directly obtain the similarity between entity pairs (Barlaug & Gulla, 2021). DeepER (Ebraheem et al., 2018) and AutoBlock (Zhang et al., 2020) apply a bidirectional LSTM and self-attention to obtain record-level embedding representations, which achieve good performance on entity matching tasks with low time complexity.
2.2 POI ENTITY MATCHING
POI entity matching can be regarded as a special case of entity matching on POIs. As far as we know, the majority of current methods depend on attribute-aligned comparison (both rule-based and machine learning-based). McKenzie et al. (2014) proposed a weighted combination model over multiple attributes (e.g., name, type, and geographic location) of POIs and achieved high accuracy on the Foursquare and Yelp data sets. Li et al. (2016) proposed an entropy-weighted method for POI matching that integrates attributes with weights allocated via information entropy. This entropy-weighted method was applied to Baidu and Sina POI matching and achieved good performance. Meanwhile, some studies applied graph-based weighted summation, in which the weights of different attributes can further be obtained by an unsupervised method. For example, Novack et al. (2018) presented a graph-based POI matching method with two matching strategies, in which POIs are regarded as nodes and matching possibilities as edges. Almeida et al. (2018) first proposed a data-driven learning method for automatic POI matching based on an outlier detection algorithm. However, the feature design in these methods depends heavily on expert knowledge.
To improve accuracy, text semantics methods have also been applied to the POI matching task. Dalvi et al. (2014) considered both domain knowledge and geographical knowledge and presented an unsupervised POI matching model based on a language model. They assign weights to different words in POI names, and their method achieves an accuracy of about 90% in POI deduplication. Yu et al. (2018) proposed an approach based on semantic technologies to automate POI matching and conflation, which achieved a conflation accuracy of 98% on shopping center POIs. However, as far as we know, employing POI embeddings for the POI matching task, which this paper aims to explore, has not been covered by existing studies.
3 MODEL ARCHITECTURE
The architecture of the POI-Transformers is shown in Figure 2. It combines a transformer-based model with a geographic location embedding module, extending general entity matching. In this work, we aim to achieve POI entity matching by incorporating semantic and geographic information. First, semantic feature vectors are extracted from the text attributes (name, category, address, etc.) of the POI entity using a transformer-based model (e.g., BERT) and pooled into fixed-size attribute embeddings. In parallel, a geographic location embedding module translates the two-dimensional geographic location (longitude and latitude) into meaningful embeddings. Second, a transformer encoder layer encodes the text embeddings and location embeddings with a multi-head attention mechanism. Finally, a pooling layer and a fully connected layer produce the POI entity embeddings.
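To make the pipeline concrete, the following is a minimal PyTorch sketch of this forward pass. The module names, dimensions, and the placeholder projection standing in for the shared BERT encoder are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of the POI-Transformers forward pass described above (illustrative only).
import torch
import torch.nn as nn


class POIEncoderSketch(nn.Module):
    def __init__(self, text_dim=768, geo_bits=60, n_attrs=3):
        super().__init__()
        # (1) text attributes -> fixed-size embeddings (a shared transformer in the paper;
        #     here a placeholder projection stands in for the BERT + mean-pooling step)
        self.text_proj = nn.Linear(text_dim, text_dim)
        # (2) GeoHash-style binary location code -> embedding of the same dimension
        self.geo_proj = nn.Linear(geo_bits, text_dim)
        # (3) embedding fusion with multi-head self-attention over the attribute embeddings
        self.fusion = nn.TransformerEncoderLayer(d_model=text_dim, nhead=8, batch_first=True)
        # (4) pooling + fully connected layer -> final POI embedding
        self.out = nn.Linear(text_dim, text_dim)

    def forward(self, attr_embs, geo_code):
        # attr_embs: (batch, n_attrs, text_dim) pooled name/category/address embeddings
        # geo_code:  (batch, geo_bits) interleaved longitude/latitude bits as floats
        tokens = torch.cat([self.text_proj(attr_embs),
                            self.geo_proj(geo_code).unsqueeze(1)], dim=1)
        fused = self.fusion(tokens)            # attend across attributes and location
        pooled = fused.mean(dim=1)             # mean pooling over the attribute "tokens"
        return self.out(pooled)                # fixed-size POI embedding


poi = POIEncoderSketch()
emb = poi(torch.randn(2, 3, 768), torch.randint(0, 2, (2, 60)).float())
print(emb.shape)  # torch.Size([2, 768])
```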
Figure 2(B) describes the specific POI-Transformers instance evaluated in this study. In this framework, we feed the text attributes of name, category and address into the transformer-based model, as these attributes are the most important for POI entities. Combined with the geographic information (longitude and latitude), these three text attributes can be used to identify a POI entity in the real world. During training, Siamese networks are adopted to update the weights of the semantic and geographic attributes so that the newly generated POI embeddings are semantically and geographically meaningful and valid under similarity metrics (such as cosine or Euclidean distance).
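A hedged sketch of one Siamese training step with the softmax (classification) objective used here, in the style of Sentence-BERT, is shown below; the classifier shape and batch values are illustrative.

```python
# Illustrative Siamese training step: classify (u, v, |u - v|) as matched / not matched.
import torch
import torch.nn as nn


def siamese_softmax_loss(emb_a, emb_b, labels, classifier):
    # Concatenate (u, v, |u - v|) and apply a softmax classifier, as in Sentence-BERT.
    features = torch.cat([emb_a, emb_b, (emb_a - emb_b).abs()], dim=-1)
    logits = classifier(features)
    return nn.functional.cross_entropy(logits, labels)


dim = 768
classifier = nn.Linear(3 * dim, 2)                # 2 classes: matched / not matched
emb_a, emb_b = torch.randn(4, dim), torch.randn(4, dim)
labels = torch.tensor([1, 0, 1, 0])               # illustrative pair labels
loss = siamese_softmax_loss(emb_a, emb_b, labels, classifier)
loss.backward()
```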
3.1 TEXT EMBEDDING MODULE
The text embedding module in our study translates the multiple text attributes of POI entities into semantic embeddings through a transformer-based network. Transformer-based pre-trained models, such as BERT and RoBERTa, achieve state-of-the-art performance, which has made them widely used. The SOTA entity matching method Ditto has shown that transformer-based networks can fully learn from entity attributes by treating an entity pair as a sequence pair (Li et al., 2020).
Here, we treat each POI text attribute as a sentence and generate a corresponding embedding that represents this attribute. A transformer-based network is employed to extract the semantic embeddings of POI text attributes such as name, category, and address. After the transformer-based network, a pooling layer derives fixed-size semantic vectors for the POI text attributes. In this pooling layer, the output of the special CLS token is not used to represent the text, since there is no evidence that the CLS-token embedding is semantically meaningful (Reimers & Gurevych, 2019). Instead, mean pooling is used in the POI-Transformers framework: the average of the embeddings of all tokens is taken as the embedding of the POI text attribute. In addition, to simplify the model and maintain the consistency of POI embeddings, a single transformer-based network is shared across all text attributes.
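The snippet below sketches this mean-pooling step with the HuggingFace Transformers API; the choice of bert-base-chinese follows the experiment setup described later, and the example POI name is illustrative.

```python
# Minimal sketch of mean pooling over token embeddings of one POI text attribute.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
encoder = AutoModel.from_pretrained("bert-base-chinese")


def attribute_embedding(text: str) -> torch.Tensor:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        token_states = encoder(**enc).last_hidden_state   # (1, seq_len, hidden)
    mask = enc["attention_mask"].unsqueeze(-1)            # ignore padding tokens
    # mean of all token embeddings, rather than the CLS token
    return (token_states * mask).sum(1) / mask.sum(1)


name_emb = attribute_embedding("清华大学")   # an illustrative POI name attribute
print(name_emb.shape)                         # torch.Size([1, 768])
```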
3.2 GEOGRAPHIC LOCATION EMBEDDING MODULE
The geographic location of a POI is two-dimensional spatial information consisting of longitude and latitude. To capture the geographic information of a POI, we design a geographic location embedding module in the POI-Transformers framework that translates the two-dimensional location into geographically meaningful embeddings, which makes it easier to identify differences between input geographic locations.
Here, we generate meaningful geographic vectors for the longitude and latitude of a POI by utilizing a location encoding method, the GeoHash algorithm (Liu et al., 2014), which encodes the numerical longitude and latitude of a specific region on the Earth into strings. In this study, the primary purpose of GeoHash in the POI-Transformers framework is to convert the longitude and latitude into binary vectors. Specifically, for a given geographic location (lon1, lat1), the location encoding layer of the GeoHash algorithm recursively bisects the longitude range and marks the longitude code with 0 if lon1 belongs to the left interval and with 1 if lon1 belongs to the right interval. When the number of divisions reaches the set limit, a code such as 1101001 is obtained. The binary code of the latitude is obtained in the same way as the longitude code. A longer binary array implies a more precise geographic location; when the number of bisections reaches 30, the maximum error is approximately 0.0186 meters. Under this scheme, the code '0' represents the longitude range (-180, 0) and the code '00' represents the longitude range (-180, -90); similarly, the code '0' represents the latitude range (-90, 0) while the code '00' represents the latitude range (-90, -45).
After obtaining the binary arrays of longitude and latitude, we generate a geographic binary array with the longitude bits occupying the even digits and the latitude bits occupying the odd digits. For instance, if the longitude binary code '0' represents the longitude range -180 to 0 and the latitude binary code '1' represents the latitude range 0 to 90, then the geographic binary array '01' represents the region with longitude from -180 to 0 and latitude from 0 to 90. More details can be found in appendix A.2. After the location encoding layer, a fully connected layer is added to obtain location embeddings with the same dimension as the semantic vectors.
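A small sketch of this binary encoding and bit interleaving is given below; the bisection routine and bit length are assumptions consistent with the description above.

```python
# GeoHash-style binary encoding: bisect each axis, 0 = left interval, 1 = right interval,
# then interleave with longitude on even digits and latitude on odd digits.
def binary_code(value: float, lo: float, hi: float, bits: int) -> str:
    code = []
    for _ in range(bits):
        mid = (lo + hi) / 2
        if value < mid:           # left interval -> 0
            code.append("0")
            hi = mid
        else:                     # right interval -> 1
            code.append("1")
            lo = mid
    return "".join(code)


def geo_binary_array(lon: float, lat: float, bits: int = 30) -> str:
    lon_code = binary_code(lon, -180.0, 180.0, bits)
    lat_code = binary_code(lat, -90.0, 90.0, bits)
    # interleave: longitude in even positions, latitude in odd positions
    return "".join(a + b for a, b in zip(lon_code, lat_code))


print(geo_binary_array(-100.0, 60.0, bits=1))   # "01":   lon in (-180, 0),   lat in (0, 90)
print(geo_binary_array(-100.0, 60.0, bits=2))   # "0101": lon in (-180, -90), lat in (45, 90)
```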
3.3 EMBEDDING FUSION MODULE
We then fuse all the embeddings at the POI entity level in the embedding fusion module to ensure that matched POI entities have a large cosine similarity.
The linkages between different attributes of each POI entity can facilitate measuring the cosine similarity between POI entities. Specifically, there are linkages between (i) category and name: each category, such as hospital, university, or shopping mall, may contain many POI names, and conversely one POI name may belong to different categories; and (ii) geographic location and address: the address of a POI can be obtained from its longitude and latitude through the interface of an electronic map, and in turn the longitude and latitude can be looked up from the address. Hence, these linkages between attributes can help discriminate between different POI entities. Moreover, a rule-based method with a weighted average of the similarity scores of all attributes is generally used to measure the similarity of two POI entities. Nevertheless, in traditional POI entity matching the weights of all attributes are set manually based on prior experience.
To learn the linkage knowledge between the attributes of POI entities, we introduce a transformer encoder layer with multi-head self-attention in the embedding fusion module. The self-attention mechanism links the different parts of a single sequence to obtain a representation of that sequence, which means we obtain a representation of the linkages between different attributes when the attributes of a POI entity are fed into the self-attention. Hence, the linkages between the attributes of each POI entity can be fully learned by the multi-head self-attention mechanism. In addition, the attention mechanism automatically adjusts the weights of all attributes of a POI entity. In natural language processing, the core function of the attention mechanism is to weight the inputs by learning the importance of different parts of a sentence (Vaswani et al., 2017). Compared with fixed weights set manually, weights based on the learned importance of attributes are more reasonable. Furthermore, to obtain fixed-size POI embeddings, we introduce a pooling layer after the transformer encoder layer.
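The following sketch illustrates the fusion step with PyTorch's multi-head self-attention over the per-attribute embeddings; the dimensions and number of heads are illustrative, and the returned attention weights play the role of the manually fixed attribute weights discussed above.

```python
# Multi-head self-attention over the attribute embeddings of one POI, followed by pooling.
import torch
import torch.nn as nn

dim, n_heads = 768, 8
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=n_heads, batch_first=True)

# one POI with 4 attribute embeddings: name, category, address, geo location
attrs = torch.randn(1, 4, dim)
fused, weights = attn(attrs, attrs, attrs, need_weights=True)

poi_embedding = fused.mean(dim=1)      # pooling layer -> fixed-size POI embedding
print(poi_embedding.shape)             # torch.Size([1, 768])
print(weights.shape)                   # (1, 4, 4): learned attribute-to-attribute weights
```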
4 EXPERIMENTS
4.1 DATA SETS
We experimented with all 12 publicly available entity matching data sets used to evaluate Ditto (Li et al., 2020) and DeepMatcher (Mudgal et al., 2018), as well as a POI entity matching data set that we generated ourselves.
Each of the entity matching data sets consists of candidate pairs sampled and labeled from two structured entity record tables. In addition, similar to Ditto and DeepMatcher, we also use the dirty versions of the DBLP-ACM, DBLP-Scholar, iTunes-Amazon, and Walmart-Amazon data sets to evaluate the robustness of the proposed model. These dirty data sets are generated by randomly moving each attribute value into the title attribute with a 50% probability. The Abt-Buy data set is dominated by text and is characterized by long text attributes. An overview of all the entity matching data sets can be found in appendix A.1.
In this study, we annotated a POI entity matching data set, QM-GD-POI, generated from a POI data set of the Tencent Map (QM POI, https://map.qq.com/) and a POI data set of the Gaode Map (GD POI, https://www.amap.com/). All POI entities contain five attributes: name, category, address, longitude and latitude. We used the open POI query APIs of Tencent Map and Gaode Map to obtain 7,103 and 6,868 POI entities respectively, and then sampled and labeled 9,606 candidate pairs from these two newly collected POI data sets. We also generated a dirty version of QM-GD-POI. Since the name, longitude and latitude attributes are generally not missing in POI data sets, we remove the category and address attributes with a 50% probability to generate the dirty data set.
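A hedged sketch of how such a dirty variant could be produced is shown below; the field names and the example record are illustrative, not taken from the actual QM-GD-POI data.

```python
# Drop the category and address attributes with 50% probability, keeping name and coordinates.
import random


def make_dirty(poi: dict, p: float = 0.5, seed: int = 0) -> dict:
    rng = random.Random(seed)
    dirty = dict(poi)
    for field in ("category", "address"):
        if rng.random() < p:
            dirty[field] = ""          # simulate a missing attribute
    return dirty


poi = {"name": "Tsinghua University", "category": "university",
       "address": "30 Shuangqing Rd", "lon": 116.33, "lat": 40.00}   # illustrative record
print(make_dirty(poi))
```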
The training, validation, and test sets of the 12 publicly available entity matching data sets are split at a ratio of 3:1:1. For the structured and dirty QM-GD-POI data sets, we use a ratio of 6:1:3 to construct the training, validation and test sets.
4.2 EXPERIMENT SETUP
We implemented POI-Transformers in PyTorch (Paszke et al., 2019) with the Transformers library (Wolf et al., 2020). We currently use the BERT-Base Chinese model as the base model to extract text semantic features; it can be replaced with other transformer-based pre-trained models. We conducted all experiments on a server with an Intel i9-10850K CPU @ 3.6 GHz, 64 GB memory, and an NVIDIA GeForce RTX 3090 GPU.
We compared the proposed POI-Transformers with existing entity matching methods (DeepMatcher, Ditto, Magellan, and DeepER) and POI entity matching methods (Rule-based, Weighted, and iForest) on the POI entity matching data set. We also compared a variant of POI-Transformers without the Geographic Location Embedding Module (POI-Transformers*).
DeepMatcher: DeepMatcher (Mudgal et al., 2018) is one of the SOTA deep learning-based entity matching approaches. It customizes an RNN to build attribute-aligned similarity representations and then aggregates these attribute representations to obtain a similarity representation between entities.
Ditto: Ditto (Li et al., 2020) is the SOTA entity matching system based on pre-trained transformer-based language models. Ditto treats the entity matching task as a sequence classification task by splicing entity pairs into sequences. It also developed three optimization techniques (domain knowledge, TF-IDF summarization, and data augmentation) to improve performance. We use the full version of Ditto with all three optimizations in this study.
Magellan: Magellan (Konda, 2018) is a SOTA traditional non-neural entity matching system. It calculates similarity features between attributes (Levenshtein distance, etc.) and then uses these features to build random forest, logistic regression and other traditional machine learning models for entity matching. After model selection, the random forest in Magellan performed best on our POI entity matching data set, so we report the F1 score of Magellan obtained with the random forest.
DeepER: DeepER (Ebraheem et al., 2018) uses a bidirectional RNN with LSTM hidden units over word embeddings to translate each entity into a representation vector. It achieves good accuracy and high efficiency on entity matching tasks.
POI-Transformers: The full version of our proposed model with the Geographic Location Embedding Module. For POI entity matching, we used cosine similarity and the SentEval toolkit (Conneau & Kiela, 2018) to evaluate the POI embeddings obtained by POI-Transformers. When evaluating with cosine similarity, we set a matching threshold: entity pairs whose embedding cosine similarity is higher than the threshold are considered positive matching pairs (a minimal sketch of this thresholding step is given after the baseline descriptions below). SentEval is an evaluation toolkit for assessing the quality of embeddings; we use its logistic regression classifier to evaluate the POI embeddings for POI entity matching and the entity embeddings for entity matching. To train the POI-Transformers framework, we use the softmax objective function to update the weights of the POI embeddings, as in Sentence-BERT (Reimers & Gurevych, 2019).
POI-Transformers*: In this version, the Geographic Location Embedding Module is removed, and the longitude and latitude are fed directly into the Text Embedding Module to obtain the representation embeddings.
Rule-based: We designed a rule-based method for POI entity matching. We first calculate the similarity of the name, category, address and distance between the POI entity pairs, and then compute a weighted sum of these attribute similarities to obtain the similarity between POI entity pairs. The weights of the name, category, address and distance similarities were set to 0.65, 0.1, 0.1, and 0.15 respectively, according to expert knowledge.
Weighted: Li et al. (2016) proposed an entropy-weighted method for POI entity matching. This method first calculates the similarity of attributes between POI entity pairs and then allocates weights to the similarity of each attribute via information entropy.
iForest: Almeida et al. (2018) proposed an outlier-detection-based approach to POI entity matching. This method first computes the similarity of the name, website, address, category and geographic coordinates between POI entity pairs, and then obtains the similarity between POI entity pairs using the iForest method.
BERT: We fine-tune the pre-trained BERT (Devlin et al., 2018) model on our POI matching data set as a classification task. We construct POI sentences by concatenating name, category, address, longitude, and latitude as input and obtain the similarity of two POI sentences.
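As referenced in the POI-Transformers description above, the following is a minimal sketch of the cosine-similarity thresholding used at matching time; the threshold value is illustrative, not the one tuned in our experiments.

```python
# Decide whether two POI embeddings refer to the same entity via a cosine-similarity threshold.
import torch
import torch.nn.functional as F


def is_match(emb_a: torch.Tensor, emb_b: torch.Tensor, threshold: float = 0.8) -> bool:
    sim = F.cosine_similarity(emb_a, emb_b, dim=-1)
    return bool(sim.item() > threshold)


emb_a, emb_b = torch.randn(768), torch.randn(768)   # illustrative precomputed POI embeddings
print(is_match(emb_a, emb_b))
```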
4.3 RESULTS
All models were trained for 20 epochs, and we report the checkpoint with the highest F1 score on the validation set. Table 1 and Table 2 show the results on the entity matching data sets and the POI entity matching data sets respectively. The F1 scores of DeepMatcher, Ditto, Magellan, and DeepER on the entity matching data sets are taken from Li et al. (2020) and Barlaug & Gulla (2021).
As shown in Table 1, owing to the powerful learning ability of deep learning, the deep learning-based models (Ditto, BS, and POI-Transformers) achieve better performance in entity matching. Meanwhile, we found that the attribute-aligned comparison methods (DeepMatcher) and cross-record interactive methods (Ditto, BS) based on deep learning generally achieve better performance than the methods based on entity representation. In addition, the POI-Transformers proposed in this study achieves better performance than the existing entity representation method (DeepER) and the traditional method (Magellan), and on some data sets it also outperforms DeepMatcher. These results suggest that, in entity matching, existing entity representation methods currently have no accuracy advantage over POI-Transformers. However, the reduction in computation offered by entity representation methods cannot be ignored, especially in the POI entity matching task with large numbers of real-world records.
We also find that Ditto and BS outperform the other models on the textual data set Abt-Buy. This is possibly because attribute-aligned methods and entity representation methods need to transform the attribute text into other forms; when the training set is not large enough, the learned features cannot fully represent the features of an attribute. The cross-record interactive models can directly interact with the original attributes across records and thus obtain more meaningful features. In practice, there is no long text in POI entities, and all attributes contain short text.
Table 2 shows the results on the POI entity matching data sets. We find that Ditto, Magellan, and POI-Transformers achieve better performance than the other models on the POI entity matching task. POI-Transformers* has no advantage over the traditional models, since the Geographic Location Embedding Module is removed. Using the cosine similarity between POI embeddings directly for POI entity matching performs better than the traditional Rule-based, Weighted, and iForest methods, but worse than the other deep learning models. When we use SentEval to evaluate the embeddings generated by POI-Transformers, the performance is slightly better than the SOTA entity matching method Ditto, but slightly worse on the dirty data version. This indicates that the proposed POI-Transformers achieves SOTA performance in POI entity matching once the Geographic Location Embedding Module is added. Meanwhile, these results also suggest that POI-Transformers is stronger on structured data and needs improvement on dirty data. However, as far as we know, there is generally not much dirty data in real-world POI data sets.
To evaluate the computational efficiency of the different models, we selected 100, 500, and 1,000 records from the Tencent Map POI and Gaode Map POI data sets respectively to form 10,000, 250,000, and 1,000,000 matching pairs. Table 3 shows the computation time for different numbers of matching pairs in POI entity matching. The computation cost of the traditional POI entity matching methods is very low, but as shown in Table 2 their accuracy is the worst. When the cosine similarity of POI embeddings is used directly for POI entity matching, the computation is lower than Magellan as the size of the data set increases. The three deep learning models, especially the transformer-based models (Ditto and BERT), require a large amount of computation (approximately 20 hours and 17 hours, respectively). When using SentEval to evaluate the embeddings generated by POI-Transformers, it takes less than 500 seconds to process one million matching pairs. These results demonstrate that POI-Transformers has advantages in both accuracy and computational efficiency for POI entity matching and is practical to deploy.
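The sketch below illustrates why embedding-based matching scales well: each POI is encoded once, and all pairwise cosine similarities for one million candidate pairs are obtained with a single matrix product; the sizes and threshold are illustrative.

```python
# Precompute normalized embeddings once, then score all candidate pairs with one matmul
# instead of running a pair-wise classifier N x M times.
import torch
import torch.nn.functional as F

n_qm, n_gd, dim = 1000, 1000, 768
qm_embs = F.normalize(torch.randn(n_qm, dim), dim=-1)   # precomputed QM POI embeddings
gd_embs = F.normalize(torch.randn(n_gd, dim), dim=-1)   # precomputed GD POI embeddings

sim = qm_embs @ gd_embs.T                 # (1000, 1000) = one million candidate pairs
matches = (sim > 0.8).nonzero()           # pairs above an illustrative threshold
print(sim.shape, matches.shape)
```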
5 CONCLUSIONS
In this paper, we propose a novel model, POI-Transformers, for the POI matching task based on pre-trained transformer-based language models. The model uses a simple architecture to effectively incorporate semantic and geographic features into meaningful POI entity embeddings. POI entities are matched by the similarity of their embeddings instead of by directly comparing the POI entities, which greatly reduces the computational complexity. The experimental results show that the proposed POI-Transformers is comparable to SOTA entity matching models (DeepER, DeepMatcher, and Ditto) on entity matching tasks. Moreover, our model achieves the highest F1 score on real-world data sets in POI entity matching, and reduces the computation for identifying one million pairs from about 20 hours to 228 seconds. The high accuracy and efficiency of POI-Transformers make it practical to deploy and use on real-world data sets. In addition, our results demonstrate that fusing domain knowledge into a deep learning model can achieve better results on specific entity matching tasks.
A APPENDIX
A.1 OVERVIEW OF DATA SETS
Table 4 shows an overview of the publicly available entity matching data sets (from Barlaug & Gulla (2021) and Li et al. (2020)) and the POI entity matching data set (QM-GD-POI) generated by ourselves.
A.2 DETAIL OF GEOGRAPHIC LOCATION EMBEDDING MODULE
As illustrated in Figure 3, the GeoHash algorithm sets the left interval of longitude to (-180, 0) and the right interval to (0, 180). Similarly, the left interval of latitude is (-90, 0) and the right interval is (0, 90). As a result, “01” represents the area where the longitude is from -180 to 0 degrees and the latitude is from 0 to 90 degrees. For the “01” area, the GeoHash algorithm continues to bisect the latitude and longitude of this region, and “0101” denotes the area where the longitude ranges over (-180, -90) and the latitude over (45, 90).
Through continuous bisection in the GeoHash algorithm, any geographic location on the Earth can be encoded into a unique binary array. A longer binary array implies a more precise geographic location; when the number of bisections reaches 30, the maximum error is approximately 0.0186 meters. After obtaining the binary arrays of longitude and latitude, we generate a geographic binary array with the longitude bits occupying the even digits and the latitude bits occupying the odd digits. For instance, in Figure 3(A) the longitude binary code of the top-left region is '0' and the latitude binary code is '1', so the geographic binary array '01' represents the top-left region. A further worked example is given below. After the location encoding layer, a fully connected layer is added to obtain location embeddings with the same dimension as the semantic vectors.
As an example from Wikipedia (https://en.wikipedia.org/wiki/Geohash), the encoded longitude “0111 1100 0000” represents an area with longitude from -5.625 to -5.449 degrees, with a maximum error of 0.044 degrees (about 4,400 meters) after 12 binary divisions (Table 6), and the encoded latitude “1011 1100 1001” represents an area with latitude from 42.539 to 42.627 (Table 5). We then generate a geographic binary array with the longitude bits occupying the even digits and the latitude bits occupying the odd digits. With this criterion, the geographic binary array of the above example is “0110 1111 1111 0000 0100 0001”.
1. What is the focus and contribution of the paper on POI matching?
2. What are the strengths of the proposed approach, particularly in terms of encoding geographic information?
3. What are the weaknesses of the paper, especially regarding the usage of existing methods and domain-specific embeddings?
4. Do you have any concerns about the performance of the POI transformer model in entity matching datasets?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper introduces a transformer-based POI matching model, and the model achieves high scores on several datasets. The model encodes both the text information and the geographic information of the POI.
Review
Strengths:
The model encodes geographic information for POI matching. Geographic information is an important component of POIs, should be considered, and is proven to be very helpful in this task.
The results on POI entity matching datasets look promising.
Weaknesses:
There are other existing methods for encoding geographic information (coordinates) into vector spaces (e.g., https://openreview.net/forum?id=wAiAsCNMJea); what about using them in the model and comparing?
POI names, categories, and addresses usually contain a lot of proper nouns and are very domain specific, so it seems that it would be better to have a pretrained domain-specific embedding for them. Instead of using the BERT-Base Chinese model, might it be better to use a domain-specific model? Or is there any comparison among them when used to extract semantic features?
It looks like the POI transformer model does not perform very well on the entity matching datasets. Do these datasets have geographic information? How is geographic information handled if they don't have it? What about POI-Transformers*? Is POI-Transformers* doing well on the entity matching datasets?
It would be great if there could be more analysis and understanding (e.g., visualizations, proofs, etc) to explain why the model is working, especially the geographic information encoding part, for example, how is the encoding preserving geospatial semantics. |
The pre-trained transformer network, such as BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), and XLNet (Yang et al., 2019), can achieve state-of-the-art performances in various NLP tasks. A sentence embedding model was proposed by Reimers & Gurevych (2019) for solving the huge computational overhead in semantic similarity search. Meanwhile, Ebraheem et al. (2018) proposed a simple deep learning method, namely DeepER, to directly compare the similarity between entities by learning and tuning the distributed representations of entities. These studies demonstrated that a simple deep learning method can be utilized to translate the POI entities into POI embeddings through fully involving both the semantic of text attributes and geographic location information. Based on the similarity between embeddings, it can be efficiently carried out in many potential matching pairs in POI entity matching.
In this study, we propose a POI-Transformers framework to generate POI embeddings by completely involving the text attributes and geographical locations of POI entities. Experiments show that after training by the Siamese network architecture, the simple model POI-Transformers can well integrate semantic and geographical features, and the newly generated embeddings can fully represent POI entities. The proposed model achieves good performance in entity matching benchmark and SOTA performance in POI entity matching task, and reduces the effort for comparing many POI pairs.
The main contributions of this study can be summarized as follows:
• We propose a simple model, POI-Transformers, for generating POI entity embeddings, which can fully learn the representation embeddings of POI entities by the transformerbased model and geographic location encoding module. Since POI-Transformers use the transformer-based network to process text attributes, this proposed model can seamlessly switch to different transformer-based networks and support different languages.
• The POI embeddings generated from POI-Transformers can be used for POI entity matching task in real-world data. These fully learned embeddings can largely reduce the effort for finding the most similar pair from all POI entities.
• We compare the proposed POI-transformers with the traditional POI entity matching methods and entity matching methods. The results show that our model achieves comparable performance to the DeepER, DeepMatcher, and Ditto in the entity matching tasks. In the POI entity matching task, the proposed POI-Transformers achieves better performance than traditional POI matching methods (e.g. rule-based, weighted). These results demonstrate that this proposed framework can fully learn the text attributes and geographic location information in POI entity matching. Meanwhile, it further implies that adding a domain knowledge module to the original entity matching model might achieve a better performance in the field of entity matching.
2 RELATED WORK
2.1 ENTITY MATCHING
Here, we summarize the entity matching methods used for the entity matching task, which aims to solve the problem of identifying entities from the real world (Barlaug & Gulla, 2021).
The attributed-aligned comparison strategy is commonly employed in entity methods. This strategy compares attributes in a one-to-one, and further combines the similarity representation on the record level (Barlaug & Gulla, 2021). Specifically, the rule-based method associated with the attributedaligned comparison strategy is the most classic entity matching method since it is easy to understand and develop (Hernández & Stolfo, 1998; Lim et al., 1996; Wang & Madnick, 1989). Nevertheless, owing to much expert experience required for modifying rules in the rule-based methods, methods based on machine learning (especially deep learning) are gradually developed to automatically learn the features of entities. For example, DeepMatcher (Mudgal et al., 2018), Kasai et al. (2019) and Auto-EM (Zhao & He, 2019) utilized the deep learning method to compare attributes one-to-one before comparing the similarity of records.
To capture better language understanding, some studies introduced the cross-record attention for entity matching (Barlaug & Gulla, 2021). Seq2SeqMatcher (Nie et al., 2019), Ditto (Li et al., 2020) and Brunner & Stockinger (2020) used attention mechanisms to capture semantic features of all words across the compared records. They treat the entity matching task to a sequence-to-sequence matching task by processing the entity pairs into sentences and inputting these sentences into transformer networks. At present, by combine cross-record and multiple optimization techniques (domain knowledge, etc.), Ditto has achieved SOTA performance in the entity matching benchmark.
Both attribute-aligned and cross-record attention methods need to input entity pairs into the model simultaneously, which comes out a large amount of computation effort in the entity matching. Therefore, some studies have proposed approaches to alleviating this problem by comparing the representation of entities. For entity representation methods, it is possible to generate a representation of each record and directly obtain similarity between entity pairs (Barlaug & Gulla, 2021). DeepER (Ebraheem et al., 2018) and AutoBlock (Zhang et al., 2020) applied bidirectional LSTM and selfattention to get the record-level embedding representations, which can achieve good performance in entity matching tasks with low time complexity.
2.2 POI ENTITY MATCHING
POI entity matching can be regarded as a special case of entity matching on POI. As far as we know, the majority of the current methods are dependent on the attribute-aligned comparison (including rule-based and machine learning-based). McKenzie et al. (2014) proposed a weighted combination model on multiple attributes (e.g., name, type, and geographic location) of POI, and achieved high accuracy in the Foursquare and Yelp dataset. Li et al. (2016) proposed an entropy-weighted method to POI matching by integrating attributes with allocation weights via information entropy. This entropy-weighted method is applied to Baidu and Sina POI matching and achieved good performance. Meanwhile, some studies applied the weighted summation based on the graph method, and the weights of different attributes can further be obtained by an unsupervised method. For example, Novack et al. (2018) presented a graph-based POI matching method with two matching strategies. In which, POIs are regard as nodes and matching possibilities regarded as edges. Almeida et al. (2018) first proposed a data-driven learning method for automatic POI matching based on an outlier detection algorithm. However, these methods for feature design are heavily dependent on the experts’ knowledge.
To improve the accuracy, the methods of text semantic are also applied to the POI matching task. Dalvi et al. (2014) considered both domain knowledge and geographical knowledge and presented an unsupervised POI matching model based on a language model. They assign weights to different words in POI names and their method can achieve an accuracy of about 90% in POI deduplication. Yu et al. (2018) proposed an approach based on semantic technologies to automate the POI matching and conflation, which achieved a conflation accuracy of 98% in shopping center POIs. However, as far as we know, employing POI embedding for POI matching task, which this paper aims to explore, has not been covered by existing studies.
3 MODEL ARCHITECTURE
The architecture of the POI-Transformers is shown in Figure 2. It is a combination of the transformer-based model and geographic location embedding module, which is an extension of the general entity matching. In this work, we aim to achieve the POI entity matching by incorporating semantic and geographic information. Firstly, the semantic feature vectors are extracted from the text attributes (name, category, address, etc.) of the POI entity by using the Transformer-based model (BERT, etc.) and further trained to be fixed-sized attribute embeddings by pooling strategies. Meanwhile, we design a geographic location embedding module to translate the two-dimensional geographic location (longitude and latitude) to meaningful embeddings. Secondly, a transformer encoder layer is employed to encode the text embeddings and location embeddings by a multi-head attention mechanism. Finally, a pooling layer and a fully connected layer are adopted to obtain POI entity embeddings.
Figure 2(B) describes a specific POI-Transformers for evaluation in this study. In this framework, we consider the text attributes of name, category and address in the Transformer-based model as these attributes are most important to POI entities. Combining with the geographic information (longitude and latitude), the three text attributes, in turn, can be used for identifying a POI entity in the nature world. In the training process, the Siamese networks are adopted to update the weights of semantic and geographic attributes to ensure the newly generated POI embedding meaningful semantically and geographically and valid in similarity metrics (such as cosine, Euclidean).
3.1 TEXT EMBEDDING MODULE
The text embedding module designed in our study attempts to translate the multiple text attributes of POI entities into semantic embeddings through the transformer-based network. Transformer-based pre-trained models, such as BERT and RoBERTa, can achieve the state-of-the-art performance, which in turn makes the transformer-based models widely used. The SOTA entity matching method Ditto has proved that transformer-based networks can fully learn the knowledge from entity attributes by treating entity-pair as sequence-pair (Li et al., 2020).
Here, we consider each POI text attribute as a sentence and generate a corresponding embedding that can represent this text attribute. In this study, a transformer-based network is employed to extract the semantic text embeddings of POI text attributes, such as name, category, address. After the transformer-based network, we further utilize a pooling layer to derive fix-sized semantic vectors of POI text attributes. In the pooling layer, the output of a special CLS token is not used to represent the text since there is no evidence showing the embedding of the CLS token is semantically meaningful (Reimers & Gurevych, 2019). Instead, mean-strategy polling is utilized in the pooling layer of the POI-Transformers framework. This means an average value of embeddings of all tokens is set as the embedding of the POI text attributes. In addition, to simplify the model and maintain the consistency of POI embeddings, only one transformer-based network is utilized for extracting the text attributes.
3.2 GEOGRAPHIC LOCATION EMBEDDING MODULE
The geographic location of POI is two-dimensional spatial information, involving longitude and latitude. To obtain the geographic information on POI, we design a geographic location embedding module in the POI-Transformers framework to translate the two-dimensional location into geographically meaningful embeddings, which can easier to identify the difference between the input geographic locations.
Here, we generate meaningful geographic vectors for the longitude and latitude of POI by utilized a location encoding method, the GeoHash (Liu et al., 2014) algorithm, which can encode the numerical longitude and latitude of a specific region on the Earth into strings. In this study, the primary purpose of the GeoHash in the POI-Transformers framework is to convert the longitude and latitude into binary vectors. To be specific, for a given geographic location (lon1, lat1), the location encoding layer in the GeoHash algorithm recursively can divide the longitude into intervals and mark the longitude code with 0 if the lon1 belongs to the left interval. If the lon1 belongs to the right interval, the longitude code is marked by 1. When the number of divisions reaches the set conditions, a code similar to 1101001 is obtained. The binary code of latitude can be also obtained as the way of longitude code. A longer binary array implies a more precise geographic location. When the times of dichotomies reach 30, the maximum error is approximately at 0.0186 meters. Therefore, the code ’0’ can represent the longitude range (-180, 0), code ’00’ can represent the latitude range (-180, -90). Similarly, the code ’0’ represent the latitude range (-90, 0) while code ’00’ represent the latitude range (-90, -45).
After we get the binary array of longitude and latitude, we can generate a geographic binary array with longitude bits occupied in even digits and latitude bits occupied in odd digits. For instance, the longitude binary code ’0’ represent the longitude range -180 to 0, and the latitude binary code ’1’ represent the latitude range 0 to 90, then the geographic binary array ’01’ can represent a region that longitude range from -180 to 0, and latitude range from 0 to 90. More details can be found in the appendix A.2. After the location encoding layer, a fully connected layer is added for obtaining location embeddings with the same dimension as the semantic vectors.
3.3 EMBEDDING FUSION MODULE
We then incorporate all the embeddings at the POI entity level in the embedding fusion module to ensure the matched POI entities have a large cosine similarity.
The linkages between different attributes of each POI entity possibly facilitate measuring the cosine similarity between POI entities. Specifically, the linkages between (i) category and name. For one thing, each category, such as hospital, university and shopping mass, may contain various names of POI entities. For another, one name of POI entity is likely to belong to different categories. (ii) geographic location and address. The address of POI entities can be obtained by the longitude and
latitude from the interface of the electronic map. In turn, the longitude and latitude can also be searched by the address of POI entities through the interface of the electronic map. Hence, these linkages between the attributes can help discriminate against various POI entities. Moreover, a rulebased method with a weighted average of the similarity scores of all the attributes is generally used for measuring the similarity of two POI entities. Nevertheless, the weights of all the attributes of POI entities, in the traditional POI entity matching, are manually set with prior experience.
To learn the linkages knowledge between the attributes of POI entities, we introduce a transformer encode layer with multi-head self-attention in the embedding fusion module. The self-attention mechanism can link the different parts of a single sequence to obtain the representation of the sequence. This means that we can get the representation of linkages between different attributes when inputting the attributes of POI entities into the self-attention. Hence, the linkages between the attributes of each POI entity can be fully learned by using the multi-head self-attention mechanism. In addition, the attention mechanism can adjust automatically the weights of all attributes in POI entities. In natural language processing, the core function of the attention mechanism is to weigh the input attributes by learning the importance of different parts of a sentence (Vaswani et al., 2017). Compared with the fixing weights set by manual, the weights based on the importance of attributes are much reasonable. Furthermore, in order to obtain fix-sized POI embeddings, we introduce a pooling layer after the transformer encoder layer.
4 EXPERIMENTS
4.1 DATA SETS
We experimented with all the 12 publicly available entity matching data sets used for evaluating Ditto (Li et al., 2020) and DeepMatcher (Mudgal et al., 2018) and a POI entity matching data set generated by ourselves.
For entity matching data sets, each of them consists of the candidate pairs sampled and labeled from two structured entity record tables. In addition, similar to the Ditto and DeepMatcher, we also use the dirty version of the DBLP-ACM, DBLP-Scholar, iTunes-Amazon, and Walmart-Amazon data sets to evaluate the robustness of the proposed model. These dirty data sets are generated by randomly moving each attribute value to the attribute title with a 50% probability. The Abt-Buy data set is dominated by texts and is characterized by the long text attribute. The overview of all the entity matching data sets can be found in appendix A.1.
In this study, we annotated a POI entity matching data set QM-GD-POI generated by a POI dataset of the Tencent Map (QM POI, https://map.qq.com/) and a POI data set of the Gaode Map (GD POI, https://www.amap.com/). All POI entities contain five attributes: name, category, address, longitude and latitude. We use the open POI query API of Tencent Map and Gaode Map to obtain 7,103 and 6,868 POI entities respectively. Then, we sampled and labeled 9,606 candidate pairs from these two newly generated POI data sets. We also generated the dirty version of QMGD-POI. Since the attributes of name, longitude and latitude in the POI data set are generally not missing, we remove the type and addresses with a 50% probability to generate a dirty data set.
The training, validation, and test sets of 12 publicly entity matching data sets are set at the ratio of 3:1:1. In the structured and dirty QM-GD-POI data sets, we used the ratio 6:1:3 to construct training, validation and test sets.
4.2 EXPERIMENT SETUP
We implemented POI-Transformers in PyTorch (Paszke et al., 2019) and the Transformers (Wolf et al., 2020) library. We currently use the BERT-Base Chinese model as the base model to extract text semantic features. Further, the BERT-Base Chinese model can replace with other transformerbased pre-training models. We conducted all experiments on a server with Intel i9-10850K CPU @ 3.6GHZ, 64GB memory, NVIDIA GeForce RTX 3090 GPU.
We compared the proposed POI-Transformers with the existing entity matching methods, such as DeepMatcher, Ditto, Magellan, and DeepER and POI entity matching methods Rule-based,
Weighted, iForest on POI entity matching dataset. We also compared variants of POI-Transformers without the Geographic Location Embedding Module (POI-Transformers*).
DeepMatcher: DeepMatcher (Mudgal et al., 2018) is one of the SOTA deep learning-based entity matching approaches. DeepMatcher customizes the RNN to conduct attribute-aligned similarity representation of attributes, and then aggregates the representations of attributes to obtain entity similarity representation between entities.
Ditto: Ditto (Li et al., 2020) is the SOTA entity matching system based on pre-trained Transformerbased language models. Ditto considers the entity matching task as a sequence classification task by splicing entity pairs into sequences. Meanwhile, Ditto developed three optimization techniques (domain knowledge, TF-IDF summarization, and data augmentation) to improve the performance. We use the full version of Ditto with all 3 optimizations in this study.
Magellan: Magellan (Konda, 2018) is a SOTA traditional non-neural entity matching system. This system calculates the similarity features between attributes (Levenshtein distance, etc.), and then uses these features to build a random forest, logistic regression and other traditional machine learning models for entity matching identifying. After model selection, the random forest in Magellan performs best in our POI entity matching dataset, so we report the F1 score in POI entity matching of Magellan obtained by random forest.
DeepER: DeepER (Ebraheem et al., 2018) uses bidirectional RNN with LSTM hidden units on word embeddings to translate each entity to a representation vector. It achieves good accuracy and high efficiency in entity matching tasks.
POI-Transformers: The full version of our proposed model with Geographic Location Embedding Module. In POI entity matching, we used the cosine similarity and SentEval toolkit (Conneau & Kiela, 2018) to evaluate the POI embeddings obtain by the POI-Transformers. When evaluated by the cosine similarity, we set a matching threshold. The entity pairs with embedding cosine similarity higher than the threshold are considered the positive matching pair. SentEval is an evaluation toolkit for evaluating the quality of POI embeddings. We utilized the logistic regression classifier in the SentEval to evaluate the POI embeddings for POI entity matching and entity embeddings for entity matching. To train the POI-Transformers framework, we utilize the softmax objective function to update the weights of POI embeddings as that in Sentence-BERT (Reimers & Gurevych, 2019).
POI-Transformers*: In this version, the Geographic Location Embedding Module is deleted, and the longitude and latitude are directly input into the Text Embedding Module to obtained the representation embeddings.
Rule-based: We designed a rule-based method for POI entity matching. In this method, we first calculated the similarity of the name, category, address and distance between the POI entity pairs. Then, we performed a weighted summation of the similarity between each attribute to obtain the similarity between POI entity pairs. The weights of name, category, address and distance similarity were set to 0.65, 0.1, 0.1, 0.15, respectively according to the experts’ knowledge.
Weighted: Li et al. (2016) proposed a Entropy-Weighted method for POI entity matching. This method first calculates the similarity of attributes between POI entity pairs, and then allocates weights of similarity of each attribute by information entropy.
iForest: Almeida et al. (2018) proposed an outlier detection based approach to POI entity matching. This method first computes the similarity of the name, website, address, category and geographic coordinates between POI entity pairs. Further, it obtains similarity between POI entity pairs by using the iForest method.
BERT: Fine-tuning the pre-trained BERT (Devlin et al., 2018) model by our POI matching dataset to do a classification task. We construct POI sentences by concatenating name, category, address, longitude, and latitude for input and get the similarity of two POI sentences.
4.3 RESULTS
All the models run with 20 epochs in the training process and returned the checkpoint with the highest F1 score on the validation set. Table 3 and Table 4 show the results of the entity matching data sets and POI entity matching data sets respectively. We reported the F1 scores of DeepMatcher,
Ditto, Magellan, and DeepER in entity matching data sets from Li et al. (2020) and Barlaug & Gulla (2021).
As shown in Table 1, due to the powerful learning ability of deep learning, the models based on deep learning (Ditto, BS, and POI-Transformers) can achieve better performance in entity matching. Meanwhile, we found that the attributed-aligned comparison methods (DeepMatcher) and cross-record interactive methods (Ditto, BS) based on deep learning are generally achieved better performance than the methods based on entity representation. In addition, The POI-Transformers proposed by this study achieved a better performance than the existing entity representation method (DeepER) and the traditional method (Magellan). In some data sets, the POI-Transformers can achieve better performance than the DeepMatcher. These results suggest that in entity matching, the entity representation methods currently have no advantage in accuracy over the POI-Transformers. However, the reduction in computation effort by entity representation methods cannot be ignored, especially in the POI entity matching task with a large number of real-world data set.
We can also find that Ditto and BS outperform other models in the textual data set Abt-Buy. This is possible because attributed-aligned methods and entity representation methods require to transform attribute text to other forms. When the training set is not enough, the learned features cannot fully represent the features of an attribute. The cross-record interactive model can directly interact with the original attributes across records, so as to obtain more meaningful features. In actual, there is no long text in the POI entity, and all attributes contain short text.
Table 2 shows the results on the POI entity matching data sets. Ditto, Magellan, and the POI-Transformers achieve better performance than the other models on the POI entity matching task. The POI-Transformers* variant has no advantage over the traditional models, since the Geographic Location Embedding Module is removed. Using the cosine similarity between POI entity embeddings directly for POI entity matching performs better than the traditional Rule-based, Weighted, and iForest methods, but worse than the other deep learning models. When we use SentEval to evaluate the embeddings generated by the POI-Transformers, its performance is slightly better than the SOTA entity matching method Ditto, but slightly worse on the dirty-data version. This indicates that the POI-Transformers proposed in this study achieves SOTA performance in POI entity matching once the Geographic Location Embedding Module is added. These results also suggest that the POI-Transformers is better suited to structured data and still needs improvement on dirty data; however, as far as we know, real-world POI data sets generally contain little dirty data.
To evaluate the computational efficiency of the different models, we selected 100, 500, and 1,000 records from Tencent Map POI and Gaode Map POI respectively to form 10,000, 250,000, and 1,000,000 matching pairs. Table 3 shows the computation time for different numbers of matching pairs in POI entity matching. The computational cost of traditional POI entity matching methods is very low, but as Table 2 shows, their accuracy is the worst. When the cosine similarity of POI embeddings is used directly for POI entity matching, the computational cost becomes lower than that of Magellan as the data set grows. The three deep learning models, especially the transformer-based ones (Ditto and BERT), require a large amount of computation (approximately 20 hours and 17 hours, respectively). In contrast, when SentEval is used to evaluate the embeddings generated by the POI-Transformers, it takes less than 500 seconds to score one million matching pairs. These results demonstrate that the POI-Transformers offers advantages in both accuracy and computational efficiency for POI entity matching, and is practical to deploy.
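The efficiency gap reported above stems from the fact that entity representation methods embed each record once and then score candidate pairs with a cheap vector operation, whereas cross-record models such as Ditto and BERT must run the full network on every pair. A minimal sketch of the pair-scoring step, assuming precomputed L2-normalised POI embeddings and a hypothetical decision threshold, is shown below.

```python
import numpy as np

def score_pairs(emb_a: np.ndarray, emb_b: np.ndarray, pairs, threshold: float = 0.8):
    """Cosine similarity for candidate pairs given precomputed embeddings.

    emb_a, emb_b: (N, d) and (M, d) arrays of L2-normalised POI embeddings,
    one row per record from each source.
    pairs: iterable of (i, j) index pairs to score.
    threshold: assumed decision threshold; in practice it would be tuned
    on a validation set.
    """
    idx_a, idx_b = zip(*pairs)
    # Row-wise dot product of normalised vectors = cosine similarity.
    sims = np.einsum("nd,nd->n", emb_a[list(idx_a)], emb_b[list(idx_b)])
    return sims, sims >= threshold
```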
5 CONCLUSIONS
In this paper, we propose a novel model, the POI-Transformers, for the POI matching task based on pre-trained Transformer-based language models. The model uses a simple architecture to effectively incorporate semantic and geographic features into meaningful POI entity embeddings. POI entities are matched by the similarity of their embeddings instead of by directly comparing the entities, which greatly reduces the computational complexity. The experimental results show that the proposed POI-Transformers is comparable to SOTA entity matching models (DeepER, DeepMatcher, and Ditto) on entity matching tasks. Moreover, our model achieves the highest F1 score on natural-scene data sets in POI entity matching, and reduces the computation required to identify one million pairs from about 20 hours to 228 seconds. The high accuracy and efficiency of the POI-Transformers make it practical to deploy on real-world data sets. In addition, our results demonstrate that fusing domain knowledge into a deep learning model can yield better results on specific entity matching tasks.
A APPENDIX
A.1 OVERVIEW OF DATA SETS
Table 4 shows an overview of the publicly available entity matching data sets (from Barlaug & Gulla (2021) and Li et al. (2020)) and the POI entity matching data set (QM-GD-POI) that we generated ourselves.
A.2 DETAIL OF GEOGRAPHIC LOCATION EMBEDDING MODULE
As illustrated in Figure 3, the GeoHash algorithm first splits longitude into a left interval (-180, 0) and a right interval (0, 180). Similarly, latitude is split into a left interval (-90, 0) and a right interval (0, 90). As a result, "01" represents the area where longitude ranges from -180 to 0 degrees and latitude ranges from 0 to 90 degrees. Within the "01" area, the GeoHash algorithm continues to bisect the latitude and longitude, so that "0101" denotes the area where longitude ranges over (-180, -90) and latitude ranges over (45, 90).
Through this repeated bisection, any geographic location on the Earth can be encoded into a unique binary array; a longer binary array implies a more precise geographic location. When the number of bisections reaches 30, the maximum error is approximately 0.0186 meters. After obtaining the binary arrays for longitude and latitude, we generate a geographic binary array with the longitude bits occupying the even digits and the latitude bits occupying the odd digits. For instance, in Figure 3(A) the longitude binary code of the top-left region is '0' and the latitude binary code is '1', so the geographic binary array '01' represents the top-left region. A further worked example is given below. After the location encoding layer, a fully connected layer is added to obtain location embeddings with the same dimension as the semantic vectors.
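The encoding procedure described above can be sketched in a few lines: each coordinate is binarised by repeated interval bisection, and the two bit strings are interleaved with longitude on the even digits and latitude on the odd digits. This is an illustrative sketch (the number of bits per coordinate is a free parameter; 12 matches the worked example that follows), not the module's actual implementation.

```python
def bisect_encode(value: float, lo: float, hi: float, n_bits: int) -> str:
    """Binarise a coordinate by repeatedly halving its interval."""
    bits = []
    for _ in range(n_bits):
        mid = (lo + hi) / 2
        if value >= mid:
            bits.append("1")
            lo = mid
        else:
            bits.append("0")
            hi = mid
    return "".join(bits)

def geohash_bits(lon: float, lat: float, n_bits: int = 12) -> str:
    """Interleave longitude (even digits) and latitude (odd digits)."""
    lon_bits = bisect_encode(lon, -180.0, 180.0, n_bits)
    lat_bits = bisect_encode(lat, -90.0, 90.0, n_bits)
    interleaved = []
    for b_lon, b_lat in zip(lon_bits, lat_bits):
        interleaved.append(b_lon)
        interleaved.append(b_lat)
    return "".join(interleaved)

# For lon -5.6, lat 42.6 this reproduces the interleaved array of the
# Wikipedia example below: "011011111111000001000001".
print(geohash_bits(-5.6, 42.6))
```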
As an example from Wikipedia (https://en.wikipedia.org/wiki/Geohash), the encoded longitude "0111 1100 0000" represents an area with longitude from -5.625 to -5.449 degrees, with a maximum error of 0.044 degrees (about 4,400 meters), after 12 binary divisions (Table 6), and the encoded latitude "1011 1100 1001" represents an area with latitude from 42.539 to 42.627 (Table 5). We then generate a geographic binary array with the longitude bits occupying the even digits and the latitude bits occupying the odd digits. With this criterion, the geographic binary array for the above example is "0110 1111 1111 0000 0100 0001". | 1. How does the reviewer assess the contribution and novelty of the proposed PoI entity matching method?
2. What are the strengths and weaknesses of the method regarding its performance and efficiency compared to other approaches?
3. What are the suggestions for improving the method, particularly in considering the context of location embeddings?
4. How does the reviewer evaluate the writing quality of the paper, and what are their suggestions for improvement? | Summary Of The Paper
Review | Summary Of The Paper
The authors propose a geographic Point-of-Interest (PoI) entity matching method based on embedding both the text attributes and the geographic location of the PoI (i.e., the PoI transformer). The proposed approach is shown to perform near the SotA while requiring significantly less computation time.
Review
[Methodology]
Currently, the text (semantic) embedding and the location embedding are performed individually. However, when considering PoI, the range of the location should be considered depending on the context (e.g. Considering the location of "USA" registered at the center of the country does not make much sense, while the location of "Times square" makes sense; the range of the location changes according to the context.) Accordingly, if both text and location are embedded together considering the context, the proposed method would become very interesting, but in the current form, the contribution is limited.
[Evaluation]
Although the proposed method performs better than simple entity matching methods like iForest, its computation time is significantly longer (more than 100 times slower). Just as the proposed method is more than 100 times faster than deep-learning-based embedding methods, there will be cases where simple but fast methods are needed. It is difficult to simply claim that the proposed approach is the best, since the simple methods do not perform badly. It is therefore necessary to explain the merits of each type of method based on the application and use cases, and to clarify what the proposed approach is targeting.
[Writing]
There are grammatical errors throughout the paper, so the authors should proofread the paper again.
Section 4.3: What does "BS" indicate? Does it indicate "DeepMatcher"? Please define or correct. |
ICLR | Title
Learning transferable motor skills with hierarchical latent mixture policies
Abstract
For robots operating in the real world, it is desirable to learn reusable behaviours that can effectively be transferred and adapted to numerous tasks and scenarios. We propose an approach to learn abstract motor skills from data using a hierarchical mixture latent variable model. In contrast to existing work, our method exploits a three-level hierarchy of both discrete and continuous latent variables, to capture a set of high-level behaviours while allowing for variance in how they are executed. We demonstrate in manipulation domains that the method can effectively cluster offline data into distinct, executable behaviours, while retaining the flexibility of a continuous latent variable model. The resulting skills can be transferred and fine-tuned on new tasks, unseen objects, and from state to vision-based policies, yielding better sample efficiency and asymptotic performance compared to existing skill- and imitation-based methods. We further analyse how and when the skills are most beneficial: they encourage directed exploration to cover large regions of the state space relevant to the task, making them most effective in challenging sparse-reward settings.
1 INTRODUCTION
Reinforcement learning is a powerful and flexible paradigm to train embodied agents, but relies on large amounts of agent experience, computation, and time, on each individual task. Learning each task from scratch is inefficient: it is desirable to learn a set of skills that can efficiently be reused and adapted to related downstream tasks. This is particularly pertinent for real-world robots, where interaction is expensive and data-efficiency is crucial. There are numerous existing approaches to learn transferable embodied skills, usually formulated as a two-level hierarchy with a high-level controller and low-level skills. These methods predominantly represent skills as being either continuous, such as goal-conditioned (Lynch et al., 2019; Pertsch et al., 2020b) or latent space policies (Haarnoja et al., 2018; Merel et al., 2019; Singh et al., 2021); or discrete, such as mixture or option-based methods (Sutton et al., 1999; Daniel et al., 2012; Florensa et al., 2017; Wulfmeier et al., 2021). Our goal is to combine these perspectives to leverage their complementary advantages.
We propose an approach to learn a three-level skill hierarchy from an offline dataset, capturing both discrete and continuous variations at multiple levels of behavioural abstraction. The model comprises a low-level latent-conditioned controller that can learn motor primitives, a set of continuous latent mid-level skills, and a discrete high-level controller that can compose and select among these abstract mid-level behaviours. Since the mid- and high-level form a mixture, we call our method Hierarchical Latent Mixtures of Skills (HeLMS). We demonstrate on challenging object manipulation tasks that our method can decompose a dataset into distinct, intuitive, and reusable behaviours. We show that these skills lead to improved sample efficiency and performance in numerous transfer scenarios: reusing skills for new tasks, generalising across unseen objects, and transferring from state to vision-based policies. Further analysis and ablations reveal that both continuous and discrete components are beneficial, and that the learned hierarchical skills are most useful in sparse-reward settings, as they encourage directed exploration of task-relevant parts of the state space.
∗Corresponding author. Email: [email protected]
†Work done while at DeepMind
Our main contributions are as follows:
• We propose a novel approach to learn skills at different levels of abstraction from an offline dataset. The method captures both discrete behavioural modes and continuous variation using a hierarchical mixture latent variable model.
• We present two techniques to reuse and adapt the learned skill hierarchy via reinforcement learning in downstream tasks, and perform extensive evaluation and benchmarking in different transfer settings: to new tasks and objects, and from state to vision-based policies.
• We present a detailed analysis to interpret the learned skills, understand when they are most beneficial, and evaluate the utility of both continuous and discrete skill representations.
2 RELATED WORK
A long-standing challenge in reinforcement learning is the ability to learn reusable motor skills that can be transferred efficiently to related settings. One way to learn such skills is via multi-task reinforcement learning (Heess et al., 2016; James et al., 2018; Hausman et al., 2018; Riedmiller et al., 2018), with the intuition that behaviors useful for a given task should aid the learning of related tasks. However, this often requires careful curation of the task set, where each skill represents a separate task. Some approaches avoid this by learning skills in an unsupervised manner using intrinsic objectives that often maximize the entropy of visited states while keeping skills distinguishable (Gregor et al., 2017; Eysenbach et al., 2019; Sharma et al., 2019; Zhang et al., 2020).
A large body of work explores skills from the perspective of unsupervised segmentation of repeatable behaviours in temporal data (Niekum & Barto, 2011; Ranchod et al., 2015; Krüger et al., 2016; Lioutikov et al., 2017; Shiarlis et al., 2018; Kipf et al., 2019; Tanneberg et al., 2021). Other works investigate movement or motor primitives that can be selected or sequenced together to solve complex manipulation or locomotion tasks (Mülling et al., 2013; Rueckert et al., 2015; Lioutikov et al., 2015; Paraschos et al., 2018; Merel et al., 2020; Tosatto et al., 2021; Dalal et al., 2021). Some of these methods also employ mixture models to jointly model low-level motion primitives and a high-level primitive controller (Muelling et al., 2010; Colomé & Torras, 2018; Pervez & Lee, 2018); the high-level controller can also be implicit and decentralised over the low-level primitives (Goyal et al., 2019).
Several existing approaches employ architectures in which the policy is comprised of two (or more) levels of hierarchy. Typically, a low-level controller represents the learned set of skills, and a high-level policy instructs the low-level controller via a latent variable or goal. Such latent variables can be discrete (Florensa et al., 2017; Wulfmeier et al., 2020) or continuous (Nachum et al., 2018; Haarnoja et al., 2018) and regularization of the latent space is often crucial (Tirumala et al., 2019). The latent variable can represent the behaviour for one timestep, for a fixed number of timesteps (Ajay et al., 2021), or options with different durations (Sutton et al., 1999; Bacon et al., 2017; Wulfmeier et al., 2021). One such approach that is particularly relevant (Florensa et al., 2017) learns a diverse set of skills, via a discrete latent variable that interacts multiplicatively with the state to enable continuous variation in a Stochastic Neural Network policy; this skill space is then transferred to locomotion tasks by learning a new categorical controller. Our method differs in a few key aspects: our proposed three-level hierarchical architecture explicitly models abstract discrete skills while allowing for temporal dependence and lower-level latent variation in their execution, enabling diverse object-centric behaviours in challenging manipulation tasks.
Our work is related to methods that learn robot policies from demonstrations (LfD, e.g. (Rajeswaran et al., 2018; Shiarlis et al., 2018; Strudel et al., 2020)) or more broadly from logged data (offline RL, e.g. (Wu et al., 2019; Kumar et al., 2020; Wang et al., 2020)). While many of these focus on learning single-task policies, several approaches learn skills offline that can be transferred online to new tasks (Merel et al., 2019; Lynch et al., 2019; Pertsch et al., 2020a; Ajay et al., 2021; Singh et al., 2021). These all train a two-level hierarchical model, with a high-level encoder that maps to a continuous latent space, and a low-level latent-conditioned controller. The high-level encoder can encode a whole trajectory (Pertsch et al., 2020a; 2021; Ajay et al., 2021); a short look-ahead state sequence (Merel et al., 2019); the current and final goal state (Lynch et al., 2019); or can even be simple isotropic Gaussian noise (Singh et al., 2021) that can be flexibly transformed by a flow-based low-level controller. At transfer time, a new high-level policy is learned from scratch: this can be more efficient with skill priors (Pertsch et al., 2020a) or temporal abstraction (Ajay et al., 2021).
HeLMS builds on this large body of work by explicitly modelling both discrete and continuous behavioural structure via a three-level skill hierarchy. We use similar information asymmetry to Neural Probabilistic Motor Primitives (NPMP) (Merel et al., 2019; 2020), conditioning the high-level encoder on a short look-ahead trajectory. However, HeLMS explicitly captures discrete modes of behaviour via the high-level controller, and learns an additional mid-level which is able to transfer abstract skills to downstream tasks, rather than learning a continuous latent policy from scratch.
3 METHOD
This paper examines a two-stage problem setup: an offline stage where a hierarchical skill space is learned from a dataset, and an online stage where these skills are transferred to a reinforcement learning setting. The dataset $\mathcal{D}$ comprises a set of trajectories, each a sequence of state-action pairs $\{x_t, a_t\}_{t=0}^{T}$. The model incorporates a discrete latent variable $y_t \in \{1, \dots, K\}$ as a high-level skill selector (for a fixed number of skills $K$), and a mid-level continuous variable $z_t \in \mathbb{R}^{n_z}$ conditioned on $y_t$ which parameterises each skill. Marginally, $z_t$ is then a latent mixture distribution representing both a discrete set of skills and the variation in their execution. A sample of $z_t$ represents an abstract behaviour, which is then executed by a low-level controller $p(a_t \mid z_t, x_t)$. The learned skill space can then be transferred to a reinforcement learning agent $\pi$ in a Markov Decision Process defined by the tuple $\{\mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \gamma\}$: these represent the state, action, and transition distributions, reward function, and discount factor respectively. When transferring, we train a new high-level controller that acts either at the level of discrete skills $y_t$ or continuous $z_t$, and freeze the lower levels of the policy.
We explain our method in detail in the following sections.
3.1 LATENT MIXTURE SKILL SPACES FROM OFFLINE DATA
Our method employs the generative model in Figure 1a. As shown, the state inputs can be different for each level of the hierarchy, but to keep notation uncluttered, we refer to all state inputs as xt and the specific input can be inferred from context. The joint distribution of actions and latents over a trajectory is decomposed into a latent prior p(y0:T , z1:T ) and a low-level controller p(at | zt,xt):
$$p(a_{1:T}, y_{0:T}, z_{1:T} \mid x_{1:T}) = p(y_{0:T}, z_{1:T}) \prod_{t=1}^{T} p(a_t \mid z_t, x_t)$$
$$p(y_{0:T}, z_{1:T}) = p(y_0) \prod_{t=1}^{T} p(y_t \mid y_{t-1})\, p(z_t \mid y_t). \qquad (1)$$
Intuitively, the categorical variable yt can capture discrete modes of behaviour, and the continuous latent zt is conditioned on this to vary the execution of each behaviour. Thus, zt follows a mixture
distribution, encoding all the relevant information on desired abstract behaviour for the low-level controller p(at | zt,xt). Since each categorical latent yt is dependent on yt−1, and zt is only dependent on yt, this prior can be thought of as a Hidden Markov model over the sequence of z1:T .
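To make the structure of this prior concrete, the following is a minimal sketch of ancestral sampling from the three-level generative model; the transition matrix, per-skill Gaussian parameters, and low-level controller are placeholders standing in for the learned networks, not the authors' implementation.

```python
import numpy as np

def sample_trajectory(T, K, nz, p_y0, trans, skill_means, skill_stds, low_level, states):
    """Ancestral sampling from p(y_0) * prod_t p(y_t|y_{t-1}) p(z_t|y_t) p(a_t|z_t,x_t).

    p_y0: (K,) initial skill prior; trans: (K, K) rows give p(y_t | y_{t-1}).
    skill_means / skill_stds: (K, nz) parameters of each skill's latent Gaussian.
    low_level: callable (z_t, x_t) -> action; states: length-T list of inputs x_t.
    All of these are placeholders standing in for the learned networks.
    """
    y = np.random.choice(K, p=p_y0)                  # initial skill y_0
    actions = []
    for t in range(T):
        y = np.random.choice(K, p=trans[y])          # discrete skill transition
        z = skill_means[y] + skill_stds[y] * np.random.randn(nz)  # skill execution
        actions.append(low_level(z, states[t]))      # low-level motor command
    return actions
```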
To perform inference over the latent variables, we introduce the variational approximation:
$$q(y_{0:T}, z_{1:T} \mid x_{1:T}) = p(y_0) \prod_{t=1}^{T} q(y_t \mid y_{t-1}, x_t)\, q(z_t \mid y_t, x_t) \qquad (2)$$
Here, the selection of a skill yt ∼ q(yt |yt−1,xt) is dependent on that of the previous timestep (allowing for temporal consistency), as well as the input. The mid-level skill is then parameterised by zt ∼ q(zt |yt,xt) based on the chosen skill and current input. p(y0) and p(yt |yt−1) model a skill prior and skill transition prior respectively, while p(zt |yt) represents a skill parameterisation prior to regularise each mid-level skill. While all of these priors can be learned in practice, we only found it necessary to learn the transition prior, with a uniform categorical for the initial skill prior and a simple fixed N (0, I) prior for p(zt |yt).
Training via the Evidence Lower Bound The proposed model contains a number of components with trainable parameters: the prior parameters $\psi = \{\psi_a, \psi_y\}$ for the low-level controller and categorical transition prior respectively, and the posterior parameters $\phi = \{\phi_y, \phi_z\}$ for the high-level controller and mid-level skills. For a trajectory $\{x_{1:T}, a_{1:T}\} \sim \mathcal{D}$, we can compute the Evidence Lower Bound for the state-conditional action distribution, $\text{ELBO} \le \log p(a_{1:T} \mid x_{1:T})$, as follows:
$$\text{ELBO} = \mathbb{E}_{q_\phi(y_{0:T}, z_{1:T} \mid x_{1:T})}\big[\log p_\psi(a_{1:T}, y_{0:T}, z_{1:T} \mid x_{1:T}) - \log q_\phi(y_{0:T}, z_{1:T} \mid x_{1:T})\big]$$
$$\approx \sum_{t=1}^{T}\Big[\sum_{y_t} q(y_t \mid x_{1:t})\big(\underbrace{\log p_{\psi_a}(a_t \mid \tilde{z}_t^{\{y_t\}}, x_t)}_{\text{per-component action recon}} - \beta_z \underbrace{\mathrm{KL}\big(q_{\phi_z}(z_t \mid y_t, x_t)\,\|\,p(z_t \mid y_t)\big)}_{\text{per-component KL regulariser}}\big)\Big]$$
$$-\;\beta_y \sum_{t=1}^{T}\Big[\sum_{y_{t-1}} q(y_{t-1} \mid x_{1:t-1})\, \underbrace{\mathrm{KL}\big(q_{\phi_y}(y_t \mid y_{t-1}, x_t)\,\|\,p_{\psi_y}(y_t \mid y_{t-1})\big)}_{\text{categorical regulariser}}\Big] \qquad (3)$$
where $\tilde{z}_t^{\{y_t\}} \sim q(z_t \mid y_t, x_t)$. The coefficients $\beta_y$ and $\beta_z$ can be used to weight the KL terms, and the cumulative component probability $q(y_t \mid x_{1:t})$ can be computed iteratively as $q(y_t \mid x_{1:t}) = \sum_{y_{t-1}} q_{\phi_y}(y_t \mid y_{t-1}, x_t)\, q(y_{t-1} \mid x_{1:t-1})$. In other words, for each timestep $t$ and each mixture component, we compute the latent sample and the corresponding action log-probability, and the KL-divergence between the component posterior and prior. This is then marginalised over all $y_t$, with an additional KL over the categorical transitions. For more details, see Appendix C.
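A minimal sketch of a single timestep of this computation is given below: the per-component reconstruction and KL terms are weighted by the recursively updated posterior over components, so no sampling of the categorical variable is needed. The tensor shapes and the precomputed per-component quantities are assumptions for illustration, not the authors' code.

```python
import torch

def elbo_step(q_y_prev, q_trans, p_trans, recon_logp, kl_z, beta_y, beta_z):
    """One timestep of the marginalised ELBO in Equation 3.

    q_y_prev:   (K,) posterior q(y_{t-1} | x_{1:t-1}) over previous skills.
    q_trans:    (K, K) posterior transitions q(y_t | y_{t-1}, x_t).
    p_trans:    (K, K) prior transitions p(y_t | y_{t-1}).
    recon_logp: (K,) log p(a_t | z~_t^{(y)}, x_t) for a sample from each component.
    kl_z:       (K,) KL(q(z_t | y, x_t) || p(z_t | y)) per component.
    """
    # Cumulative component posterior q(y_t | x_{1:t}) via the recursion.
    q_y = q_y_prev @ q_trans
    # Per-component reconstruction and latent KL, marginalised over y_t.
    recon_term = (q_y * (recon_logp - beta_z * kl_z)).sum()
    # Categorical KL per previous skill, weighted by q(y_{t-1} | x_{1:t-1}).
    kl_cat = (q_trans * (q_trans.log() - p_trans.log())).sum(dim=-1)  # (K,)
    cat_term = beta_y * (q_y_prev * kl_cat).sum()
    return recon_term - cat_term, q_y
```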
Information-asymmetry As noted in previous work (Tirumala et al., 2019; Galashov et al., 2019), hierarchical approaches often benefit from information-asymmetry, with higher levels seeing additional context or task-specific information. This ensures that the high-level remains responsible for abstract, task-related behaviours, while the low-level executes simpler motor primitives. We employ similar techniques in HeLMS: the low-level inputs xLL comprise the proprioceptive state of the embodied agent; the mid-level inputs xML also include the poses of objects in the environment; and the high-level xHL concatenates both object and proprioceptive state for a short number of lookahead timesteps. The high- and low-level are similar to (Merel et al., 2019), with the low-level controller enabling motor primitives based on proprioceptive information, and the high-level using the lookahead information to provide additional context regarding behavioural intent when specifying which skill to use. The key difference is the categorical high-level and the additional mid-level, with which HeLMS can learn more object-centric skills and transfer these to downstream tasks.
Network architectures The architecture and information flow in HeLMS are shown in Figure 1b. The high-level network contains a gated head, which uses the previous skill yt−1 to index into one of K categorical heads, each of which specify a distribution over yt. For a given yt, the corresponding mid-level skill network is selected and used to sample a latent action zt, which is then used as input for the latent-conditioned low-level controller, which parameterises the action distribution. The skill transition prior p(yt |yt−1) is also learned, and is parameterised as a linear softmax layer which takes in a one-hot representation of yt−1 and outputs the distribution over yt. All components are trained end-to-end via the objective in Equation 3.
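The information flow of Figure 1b can be summarised in a short forward pass: the previous skill gates one of K categorical heads, the selected skill indexes a mid-level Gaussian, and its sample conditions the shared low-level controller. The module sizes below are hypothetical and the code is only a structural sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HierarchicalPolicy(nn.Module):
    def __init__(self, K=5, nz=8, x_hl=64, x_ml=32, x_ll=16, act_dim=5, hidden=128):
        super().__init__()
        # High-level: one categorical head per previous skill (gated head).
        self.hl_heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(x_hl, hidden), nn.ReLU(), nn.Linear(hidden, K))
             for _ in range(K)])
        # Mid-level: one Gaussian skill network per component (mean and log-std).
        self.ml_skills = nn.ModuleList(
            [nn.Sequential(nn.Linear(x_ml, hidden), nn.ReLU(), nn.Linear(hidden, 2 * nz))
             for _ in range(K)])
        # Low-level: latent-conditioned controller shared by all skills.
        self.low_level = nn.Sequential(
            nn.Linear(x_ll + nz, hidden), nn.ReLU(), nn.Linear(hidden, act_dim))

    def forward(self, y_prev: int, x_high, x_mid, x_low):
        logits = self.hl_heads[y_prev](x_high)            # q(y_t | y_{t-1}, x_t)
        y = int(torch.distributions.Categorical(logits=logits).sample())
        mu, log_std = self.ml_skills[y](x_mid).chunk(2, dim=-1)
        z = mu + log_std.exp() * torch.randn_like(mu)     # q(z_t | y_t, x_t)
        action = self.low_level(torch.cat([x_low, z], dim=-1))
        return action, y
```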
3.2 REINFORCEMENT LEARNING WITH RELOADED SKILLS
Once learned, we propose two methods to transfer the hierarchical skill space to downstream tasks. Following previous work (e.g. (Merel et al., 2019; Singh et al., 2021)), we freeze the low-level controller p(at | zt,xt), and learn a policy for either the continuous (zt) or discrete (yt) latent.
Categorical agent One simple and effective technique is to additionally freeze the mid-level components q(zt |yt,xt), and learn a categorical high-level controller π(yt |xt) for the downstream task. The learning objective is given by:
$$J = \mathbb{E}_\pi\Big[\sum_t \gamma^t \big(r_t - \eta_y\, \mathrm{KL}(\pi(y_t \mid x_t)\,\|\,\pi_0(y_t \mid x_t))\big)\Big], \qquad (4)$$
where the standard discounted return objective in RL is augmented by an additional term performing KL-regularisation to some prior π0 scaled by coefficient ηy . This could be any categorical distribution such as the previously learned transition prior p(yt |yt−1), but in this paper we regularise to the uniform categorical prior to encourage diversity. While any RL algorithm could be used to optimize π(yt |xt), in this paper we use MPO (Abdolmaleki et al., 2018) with a categorical action distribution (see Appendix B for details). We hypothesise that this method improves sample efficiency by converting a continuous control problem into a discrete abstract action space, which may also aid in credit assignment. However, since both the mid-level components and low-level are frozen, it can limit flexibility and plasticity, and also requires that all of the mid- and low-level input states are available in the downstream task. We call this method HeLMS-cat.
Mixture agent A more flexible method of transfer is to train a latent mixture policy, $\pi(z_t \mid x_t) = \sum_{y_t} \pi(y_t \mid x_t)\, \pi(z_t \mid y_t, x_t)$. In this case, the learning objective is given by:
$$J = \mathbb{E}_\pi\Big[\sum_t \gamma^t \big(r_t - \eta_y\, \mathrm{KL}(\pi(y_t \mid x_t)\,\|\,\pi_0(y_t \mid x_t)) - \eta_z \sum_{y_t} \mathrm{KL}(\pi(z_t \mid y_t, x_t)\,\|\,\pi_0(z_t \mid y_t, x_t))\big)\Big], \qquad (5)$$
where in addition to the categorical prior, we also regularise each mid-level skill to a corresponding prior π0(zt |yt,xt). While the priors could be any policies, we set them to be the skill posteriors q(zt |yt,xt) learned offline, to ensure the mixture components remain close to the pre-learned skills. This is related to (Tirumala et al., 2019), which also applies KL-regularisation at multiple levels of a hierarchy. While the high-level controller π(yt |xt) is learned from scratch, the mixture components can also be initialised to q(zt |yt,xt), to allow for initial exploration over the space of skills. Alternatively, the mixture components can use different inputs, such as vision: this setup allows vision-based skills to be learned efficiently by regularising to state-based skills learned offline. We optimise this using RHPO (Wulfmeier et al., 2020), which employs a similar underlying optimisation to MPO for mixture policies (see Appendix B for details). We call this HeLMS-mix.
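The additional terms in Equations 4 and 5 can be implemented as penalties added to the policy loss (or subtracted from the reward). The sketch below shows the per-step regulariser of Equation 5 under the assumptions that the mid-level components are diagonal Gaussians, the high-level prior is the uniform categorical, and the component priors are the frozen offline skills; it is an illustration, not the authors' code.

```python
import torch

def mixture_kl_penalty(cat_logits, means, log_stds,
                       prior_means, prior_log_stds, eta_y, eta_z):
    """Per-step KL penalty of Equation 5 (Gaussian components, uniform categorical prior).

    cat_logits: (K,) logits of the high-level controller pi(y | x).
    means, log_stds: (K, nz) parameters of each mid-level component pi(z | y, x).
    prior_means, prior_log_stds: (K, nz) parameters of the frozen offline skills.
    """
    K = cat_logits.shape[-1]
    log_pi = torch.log_softmax(cat_logits, dim=-1)
    # KL(pi(y|x) || Uniform(K)) = sum_y pi log pi + log K
    kl_cat = (log_pi.exp() * log_pi).sum(-1) + torch.log(torch.tensor(float(K)))
    # Diagonal-Gaussian KL for each mid-level component against its offline prior.
    var, pvar = (2 * log_stds).exp(), (2 * prior_log_stds).exp()
    kl_z = 0.5 * (pvar.log() - var.log() + (var + (means - prior_means) ** 2) / pvar - 1.0)
    kl_z = kl_z.sum(-1).sum(-1)  # sum over latent dims, then over components
    return eta_y * kl_cat + eta_z * kl_z
```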
4 EXPERIMENTS
Our experiments focus on the following questions: (1) Can we learn a hierarchical latent mixture skill space of distinct, interpretable behaviours? (2) How do we best reuse this skill space to improve sample efficiency and performance on downstream tasks? (3) Can the learned skills transfer effectively to multiple downstream scenarios: (i) different objects; (ii) different tasks; and (iii) different modalities such as vision-based policies? (4) How exactly do these skills aid learning of downstream manipulation tasks? Do they aid exploration? Are they useful in sparse or dense reward scenarios?
4.1 EXPERIMENTAL SETUP
Environment and Tasks We focus on manipulation tasks, using a MuJoCo-based environment with a single Sawyer arm, and three objects coloured red, green, and blue. We follow the challenging object stacking benchmark of Lee et al. (2021), which specifies five object sets (Figure 2), carefully designed to have diverse geometries and present different challenges for a stacking agent. These range from simple rectangular objects (object set 4), to geometries such as slanted faces (sets 1 and 2) that make grasping or stacking the objects more challenging. This environment allows us
to systematically evaluate generalisation of manipulation behaviours for different tasks interacting with geometrically different objects. For further information, we refer the reader to Appendix D.1 or to (Lee et al., 2021). Details of the rewards for the different tasks are also provided in Appendix F.
Datasets To evaluate our approach and baselines in the manipulation settings, we use two datasets:
• red_on_blue_stacking: this data is collected by an agent trained to stack the red object on the blue object and ignore the green one, for the simplest object set, set 4.
• all_pairs_stacking: similar to the previous case, but with all six pairwise stacking combinations of {red, green, blue}, and covering all five object sets.
Baselines For evaluation in transfer scenarios, we compare HeLMS with a number of baselines:
• From scratch: We learn the task from scratch with MPO, without an offline learning phase.
• NPMP+KL: We compare against NPMP (Merel et al., 2019), which is the most similar skill-based approach in terms of information-asymmetry and policy conditioning. We make some small changes to the originally proposed method, and also apply additional KL-regularisation to the latent prior: we found this to improve performance significantly in our experiments. For more details and an ablation, see Appendix A.2.
• Behaviour Cloning (BC): We apply behaviour cloning to the dataset, and fine-tune this policy via MPO on the downstream task. While the actor is initialised to the solution obtained via BC, the critic still needs to be learned from scratch.
• Hierarchical BC: We evaluate a hierarchical variant of BC with a similar latent space z to NPMP using a latent Gaussian high-level controller. However, rather than freezing the low-level and learning just a high-level policy, Hierarchical BC fine-tunes the entire model.
• Asymmetric actor-critic: For state-to-vision transfer, HeLMS uses prior skills that depend on object states to learn a purely vision-based policy. Thus, we also compare against a variant of MPO with an asymmetric actor-critic (Pinto et al., 2017) setup, which uses object states differently: to speed up learning of the critic, while still learning a vision-based actor.
4.2 LEARNING SKILLS FROM OFFLINE DATA
We first aim to understand whether we can learn a set of distinct and interpretable skills from data (question (1)). For this, we train HeLMS on the red_on_blue_stacking dataset with 5 skills.
[Figure panels: (a) Set 1, (b) Set 2, (c) Set 3, (d) Set 5]
Figure 5: (a) Performance on pyramid task; and (b) image sequence showing episode rollout from a learned solution on this task (left-to-right, top-to-bottom).
Figure 6: Performance for vision-based stacking.
Figure 3a shows some example episode rollouts when the learned hierarchical agent is executed in the environment, holding the high-level categorical skill constant for an episode. Each row represents a different skill component, and the resulting behaviours are both distinct and diverse: for example, a lifting skill (row 1) where the gripper closes and rises up, a reaching skill (row 2) where the gripper moves to the red object, or a grasping skill (row 3) where the gripper lowers and closes its fingers. Furthermore, without explicitly encouraging this, the emergent skills capture temporal consistency: Figure 3b shows that the learned prior p(yt | yt−1) (visualised as a transition matrix) assigns high probability along the diagonal (remaining in the same skill). Finally, Figure 3c demonstrates that all skills are used, without degeneracy.
4.3 TRANSFER TO DOWNSTREAM TASKS
Generalising to different objects We next evaluate whether the previously learned skills (i.e. trained on the simple objects in set 4) can effectively transfer to more challenging object interaction scenarios: the other four object sets proposed by (Lee et al., 2021). The task uses a sparse staged reward, with reward incrementally given after completing each sub-goal of the stacking task. As shown in Figure 4, both variants of HeLMS learn significantly faster than baselines on the different object sets. Compared to the strongest baseline (NPMP), HeLMS reaches better average asymptotic performance (and much lower variance) on two object sets (1 and 3), performs similarly on set 5, and performs worse on object set 2. The performance on object set 2 potentially highlights a trade-off between incorporating higher-level abstract behaviours and maintaining low-level flexibility: this object set often requires a reorientation of the bottom object due to its slanted faces, a behaviour that is not common in the offline dataset, which might require greater adaptation of mid- and low-level skills. This is an interesting investigation we leave for future work.
Compositional reuse of skills To evaluate whether the learned skills are composable for new tasks, we train HeLMS on the all_pairs_stacking dataset with 10 skills, and transfer to a pyramid task. In this setting, the agent has to place the red object adjacent to the green object, and stack the blue object on top to construct a pyramid. The task is specified via a sparse staged reward
for each stage or sub-task: reaching, grasping, lifting, and placing the red object, and subsequently the blue object. In Figure 5(a), we plot the performance of both variants of our approach, as well as NPMP and MPO; we omit the BC baselines as this involves transferring to a fundamentally different task. Both HeLMS-mix and HeLMS-cat reach a higher asymptotic performance than both NPMP and MPO, indicating that the learned skills can be better transferred to a different task. We show an episode rollout in Figure 5(b) in which the learned agent can successfully solve the task.
From state to vision-based policies While our method learns skills from proprioception and object state, we evaluate whether these skills can be used to more efficiently learn a vision-based policy. This is invaluable for practical real-world scenarios, since the agent acts from pure visual observation at test time without requiring privileged and often difficult-to-obtain object state information.
We use the HeLMS-mix variant to transfer skills to a vision-based policy, by reusing the low-level controller, initialising a new high-level controller and mid-level latent skills (with vision and proprioception as input), and KL-regularising these to the previously learned state-based skills. While the learned policy is vision-based, this KL-regularisation still assumes access to object states during training. For a fair comparison, we additionally compare our approach with a version of MPO using an asymmetric critic (Pinto et al., 2017), which exploits object state information instead of vision in the critic, and also use this for HeLMS. As shown in Figure 6, learning a vision-based policy with MPO from scratch is very slow and computationally intensive, but an asymmetric critic significantly speeds up learning, supporting the empirical findings of Pinto et al. (2017). However, HeLMS once again demonstrates better sample efficiency, and reaches slightly better asymptotic performance. We note that this uses the same offline model as for the object generalisation experiments, showing that the same state-based skill space can be reused in numerous settings, even for vision-based tasks.
4.4 WHERE AND HOW CAN HIERARCHICAL SKILL REUSE BE EFFECTIVE?
Sparse reward tasks We first investigate how HeLMS performs for different rewards: a dense shaped reward, the sparse staged reward from the object generalisation experiments, and a fully sparse reward that is only provided after the agent stacks the object. For this experiment, we use the skill space trained on red_on_blue_stacking and transfer it to the same RL task of stacking on object set 4. The results are shown in Figure 7. With a dense reward (and no object transfer required), all of the approaches can successfully learn the task. With the sparse staged reward, the baselines all plateau at a lower performance, with the exception of NPMP, as previously discussed. However, for the challenging fully-sparse scenario, HeLMS is the only method that achieves nonzero reward. This neatly illustrates the benefit of the proposed hierarchy of skills: it allows for directed exploration which ensures that even sparse rewards can be encountered. This is consistent with observations from prior work in hierarchical reinforcement learning (Florensa et al., 2017; Nachum et al., 2019), and we next investigate this claim in more depth for our manipulation setting.
Exploration To measure whether the proposed approach leads to more directed exploration, we record the average coverage in state space at the start of RL (i.e. zero-shot transfer). This is computed as the variance (over an episode) of the state xt, separated into three interpretable groups:
Method   | Reward: Dense | Reward: Staged | Coverage (×10⁻²): Joints | Grasp | Objects
MPO      | 3.16          | 0.0            | 8.72                     | 0.004 | 1.21
NPMP     | 3.67          | 0.0            | 3.43                     | 0.0   | 1.45
BC       | 31.68         | 0.004          | 4.22                     | 0.05  | 1.52
Hier. BC | 16.42         | 0.004          | 2.61                     | 0.04  | 1.31
HeLMS    | 20.46         | 0.05           | 2.98                     | 1.10  | 1.61
Table 1: Analysis of zero-shot exploration at the start of RL, in terms of reward and state coverage (variance over an episode of different subsets of the agent's state). Results are averaged over 1000 episodes.
Figure 8: Ablation for continuous and discrete components during offline learning, when transferring to the (a) easy case (object set 4) and (b) hard case (object set 3).
joints (angles and velocity), grasp (a simulated grasp sensor), and object poses. We also record the total reward (dense and sparse staged). The results are reported in Table 1. While all approaches achieve some zero-shot dense reward (with BC the most effective), HeLMS1 receives a sparse staged reward an order of magnitude greater. Further, in this experiment we found it was able to achieve the fully sparse reward (stacked) in one episode. Analysing the state coverage results, while other methods are able to cover the joint space more (e.g. by randomly moving the joints), HeLMS is nearly two orders of magnitude higher for grasp states. This indicates the utility of hierarchical skills: by acting over the space of abstract skills rather than low-level actions, HeLMS performs directed exploration and targets particular states of interest, such as grasping an object.
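The coverage numbers in Table 1 can be computed, up to the exact state grouping, by taking the per-dimension variance of each state group over an episode and averaging; the grouping indices in the sketch below are assumptions for illustration.

```python
import numpy as np

def state_coverage(states: np.ndarray, groups: dict) -> dict:
    """Variance-based coverage per state group over one episode.

    states: (T, D) array of observed states for an episode.
    groups: dict mapping group name -> list of state indices,
            e.g. {"joints": [...], "grasp": [...], "objects": [...]}.
    """
    return {name: float(np.var(states[:, idx], axis=0).mean())
            for name, idx in groups.items()}
```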
4.5 ABLATION STUDIES
Capturing continuous and discrete structure To evaluate the benefit of both continuous and discrete components, we train our method with a fixed variance of zero for each latent component (i.e. ‘discrete-only’) and transfer to the stacking task with sparse staged reward in an easy case (object set 4) and hard case (object set 3), as shown in Figure 8(a) and (b). We also evaluate the ‘continuous-only’ case with just a single Gaussian to represent the high- and mid-level skills: this is equivalent to the NPMP+KL baseline. We observe that the discrete component alone leads to improved sample efficiency in both cases, but modelling both discrete and continuous latent behaviours makes a significant difference in the hard case. In other words, when adapting to challenging objects, it is important to capture discrete skills, but allow for latent variation in how they are executed.
KL-regularisation We also perform an ablation for KL-regularisation during the offline phase (via βz) and online RL (via ηz), to gauge the impact on transfer; see Appendix A.1 for details.
5 CONCLUSION
We present HeLMS, an approach to learn transferable and reusable skills from offline data using a hierarchical mixture latent variable model. We analyse the learned skills to show that they effectively cluster data into distinct, interpretable behaviours. We demonstrate that the learned skills can be flexibly transferred to different tasks, unseen objects, and to different modalities (such as from state to vision). Ablation studies indicate that it is beneficial to model both discrete modes and continuous variation in behaviour, and highlight the importance of KL-regularisation when transferring to RL and fine-tuning the entire mixture of skills. We also perform extensive analysis to understand where and how the proposed skill hierarchy can be most useful: we find that it is particularly invaluable in sparse reward settings due to its ability to perform directed exploration.
There are a number of interesting avenues for future work. While our model demonstrated temporal consistency, it would be useful to more actively encourage and exploit this for sample-efficient transfer. It would also be useful to extend this work to better fine-tune lower level behaviours, to allow for flexibility while exploiting high-level behavioural abstractions.
1Note that HeLMS-cat and HeLMS-mix are identical for this analysis: at the start of reinforcement learning, both variants transfer the mid-level skills while initialising a new high-level controller.
ACKNOWLEDGMENTS
The authors would like to thank Coline Devin for detailed comments on the paper and for generating the all_pairs_stacking dataset. We would also like to thank Alex X. Lee and Konstantinos Bousmalis for help with setting up manipulation experiments. We are also grateful to reviewers for their feedback.
A ADDITIONAL EXPERIMENTS
A.1 ABLATIONS FOR KL-REGULARISATION
In these experiments, we investigate the effect of KL-regularisation on the mid-level components, both for the offline learning phase (regularising each component to p(zt | yt) = N(0, I) via coefficient βz) and for the online reinforcement learning stage via HeLMS-mix (regularising each component to the mid-level skills learned offline, via coefficient ηz). The results are reported in Figure 9, where each plot represents a different setting for offline KL-regularisation (either regularisation to N(0, I) with βz = 0.01, or no regularisation with βz = 0) and a different transfer case (the easy case of transferring to object set 4, or the hard case of transferring to object set 3). Each plot shows the downstream performance when varying the strength of KL-regularisation during RL via coefficient ηz. The HeLMS-cat approach represents the extreme case where the skills are entirely frozen (i.e. full regularisation).
The results suggest some interesting properties of the latent skill space based on regularisation. When regularising the mid-level components to the N (0, I) prior, it is important to regularise during online RL; this is especially true for the hard transfer case, where HeLMS-cat performs much better, and the performance degrades significantly with lower regularisation values. However, when removing mid-level regularisation during offline learning, the method is insensitive to regularisation during RL over the entire range evaluated, from 0.01 to 100.0. We conjecture that with mid-level skills regularised to N (0, I), the different mid-level skills are drawn closer together and occupy a more compact region in latent space, such that KL-regularisation is necessary during RL for a skill to avoid drifting and overlapping with the latent distribution of other skills (i.e. skill degeneracy). In contrast, without offline KL-regularisation, the skills are free to expand and occupy more distant regions of the latent space, rendering further regularisation unnecessary during RL. Such latent space properties could be further analysed to improve learning and transfer of skills; we leave this as an interesting direction for future work.
A.2 NPMP ABLATION
The Neural Probabilistic Motor Primitives (NPMP) work (Merel et al., 2019) presents a strong baseline approach to learning transferable motor behaviours, and we run ablations to ensure a fair comparison to the strongest possible result. As discussed in the main text, NPMP employs a Gaussian high-level latent encoder with an AR(1) prior in the latent space. We also try a fixed N(0, I) prior (this is equivalent to an AR(1) prior with a coefficient of 0, so can be considered a hyperparameter choice). Since our method benefits from KL-regularisation during RL, we apply this to NPMP as well.
As shown in Figure 10, we find that both changes lead to substantial improvements in the manipulation domain, on all five object sets. Consequently, in our main experiments, we report results with the best variant, using a N(0, I) prior with KL-regularisation during RL.
B REINFORCEMENT LEARNING WITH MPO AND RHPO
As discussed in Section 3.2, the hierarchy of skills is transferred to RL in two ways: HeLMS-cat, which learns a new high-level categorical policy $\pi(y_t \mid x_t)$ via MPO (Abdolmaleki et al., 2018); or HeLMS-mix, which learns a mixture policy $\pi(z_t \mid x_t) = \sum_{y_t} \pi(y_t \mid x_t)\, \pi(z_t \mid y_t, x_t)$ via RHPO (Wulfmeier et al., 2020). We describe the optimisation for both of these cases in the following subsections. For clarity of notation, we omit the additional KL-regularisation terms introduced in Section 3.2 and describe just the base methods of MPO and RHPO when applied to the RL setting in this paper. These KL terms are incorporated as additional loss terms in the policy improvement stage.
B.1 HELMS-CAT VIA MPO
Maximum a posteriori Policy Optimisation (MPO) is an Expectation-Maximisation-based algorithm that performs off-policy updates in three steps: (1) updating the critic; (2) creating a non-parametric intermediate policy by weighting sampled actions using the critic; and (3) updating the parametric policy to fit the critic-reweighted non-parametric policy, with trust region constraints to improve stability. We detail each of these steps below. Note that while the original MPO operates in the environment’s action space, we use it here for the high-level controller, to set the categorical variable yt.
Policy evaluation First, the critic is updated via a TD(0) objective as:
$$\min_\theta L(\theta) = \mathbb{E}_{x_t, y_t \sim \mathcal{B}}\Big[\big(Q^{T} - Q_\phi(x_t, y_t)\big)^2\Big], \qquad (6)$$
Here, $Q^{T} = r_t + \gamma\, \mathbb{E}_{x_{t+1}, y_{t+1}}\big[Q'(x_{t+1}, y_{t+1})\big]$ is the 1-step target, with the state transition $(x_t, y_t, x_{t+1})$ returned from the replay buffer $\mathcal{B}$ and the next action sampled from $y_{t+1} \sim \pi'(\cdot \mid x_{t+1})$. $\pi'$ and $Q'$ are target networks for the policy and the critic, used to stabilise learning.
Policy improvement Next, we proceed with the first step of policy improvement by constructing an intermediate non-parametric policy q(yt|xt), and optimising the following constrained objective:
$$\max_q J(q) = \mathbb{E}_{y_t \sim q,\, x_t \sim \mathcal{B}}\big[Q_\phi(x_t, y_t)\big], \quad \text{s.t.} \quad \mathbb{E}_{x_t \sim \mathcal{B}}\big[\mathrm{KL}\big(q(\cdot \mid x_t)\,\|\,\pi_{\theta_k}(\cdot \mid x_t)\big)\big] \le \epsilon_E, \qquad (7)$$
where $\epsilon_E$ defines a bound on the KL divergence between the non-parametric and parametric policies at the current learning step $k$. This constrained optimisation problem has the following closed-form solution:
$$q(y_t \mid x_t) \propto \pi_{\theta_k}(y_t \mid x_t)\, \exp\big(Q_\phi(x_t, y_t)/\eta\big). \qquad (8)$$
In other words, this step constructs an intermediate policy which reweights samples from the previous policy using exponentiated, temperature-scaled critic values. The temperature parameter η is derived from the dual of the Lagrangian; for further details please refer to (Abdolmaleki et al., 2018).
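For a categorical high-level controller, the closed-form solution in Equation 8 amounts to reweighting the current policy's probabilities by exponentiated, temperature-scaled Q-values and renormalising. A minimal sketch, treating the temperature η as a given scalar rather than solving the dual for it, is:

```python
import torch

def nonparametric_policy(pi_logits, q_values, eta):
    """q(y | x) proportional to pi(y | x) * exp(Q(x, y) / eta)  (Equation 8).

    pi_logits: (K,) logits of the current parametric policy.
    q_values:  (K,) critic values Q(x, y) for each discrete skill.
    eta:       temperature scalar (assumed given here).
    """
    log_pi = torch.log_softmax(pi_logits, dim=-1)
    log_q = log_pi + q_values / eta
    return torch.softmax(log_q, dim=-1)
```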
Finally, we can fit a parametric policy to the non-parametric distribution q(yt |xt) by minimising their KL-divergence, subject to a trust-region constraint on the parametric policy:
$$\theta_{k+1} = \arg\min_\theta \; \mathbb{E}_{x_t \sim \mathcal{B}}\big[\mathrm{KL}\big(q(y_t \mid x_t)\,\|\,\pi_\theta(y_t \mid x_t)\big)\big],$$
$$\text{s.t.} \quad \mathbb{E}_{x_t \sim \mathcal{B}}\big[\mathrm{KL}\big(\pi_{\theta_{k+1}}(y_t \mid x_t)\,\|\,\pi_{\theta_k}(y_t \mid x_t)\big)\big] \le \epsilon_M. \qquad (9)$$
This optimisation problem can be solved via Lagrangian relaxation, with $\epsilon_M$ modulating the strength of the trust-region constraint. For further details and full derivations, please refer to (Abdolmaleki et al., 2018).
B.2 HELMS-MIX VIA RHPO
RHPO (Wulfmeier et al., 2020) follows a similar optimisation procedure to MPO, but extends it to mixture policies and multi-task settings. We do not exploit the multi-task capability in this work, but utilise RHPO to optimise the mixture policy in latent space, $\pi(z_t \mid x_t) = \sum_{y_t} \pi(y_t \mid x_t)\, \pi(z_t \mid y_t, x_t)$. The Q-function $Q_\phi(x_t, z_t)$ and parametric policy $\pi_{\theta_k}(z_t \mid x_t)$ use the continuous latents $z_t$ as actions instead of the categorical $y_t$. This is also in contrast to the original formulation of RHPO, which uses the environment's action space. Compared to MPO, the policy improvement stage of the non-parametric policy is minimally adapted to take into account the new mixture policy. The key difference is in the parametric policy update step, which optimises the following:
$$\theta_{k+1} = \arg\min_\theta \; \mathbb{E}_{x_t \sim \mathcal{B}}\big[\mathrm{KL}\big(q(z_t \mid x_t)\,\|\,\pi_\theta(z_t \mid x_t)\big)\big],$$
$$\text{s.t.} \quad \mathbb{E}_{x_t \sim \mathcal{B}}\Big[\mathrm{KL}\big(\pi_{\theta_{k+1}}(y_t \mid x_t)\,\|\,\pi_{\theta_k}(y_t \mid x_t)\big) + \sum_{y_t} \mathrm{KL}\big(\pi_{\theta_{k+1}}(z_t \mid y_t, x_t)\,\|\,\pi_{\theta_k}(z_t \mid y_t, x_t)\big)\Big] \le \epsilon_M. \qquad (10)$$
In other words, trust-region constraints are applied to a sum of KL-divergences: for the high-level categorical and for each of the mixture components. Following the original RHPO, we separate the single constraint into decoupled constraints that set a different $\epsilon$ for the means, covariances, and categorical distribution ($\epsilon_\mu$, $\epsilon_\sigma$, and $\epsilon_{\mathrm{cat}}$, respectively). This allows the optimiser to independently modulate how much the categorical distribution, component means, and component variances can change. For further details and full derivations, please refer to (Wulfmeier et al., 2020).
C ELBO DERIVATION AND INTUITIONS
We can compute the Evidence Lower Bound for the state-conditional action distribution, $\log p(a_{1:T} \mid x_{1:T}) \ge \text{ELBO}$, as follows:
$$\text{ELBO} = \log p(a_{1:T} \mid x_{1:T}) - \mathrm{KL}\big(q(y_{0:T}, z_{1:T} \mid x_{1:T})\,\|\,p(y_{0:T}, z_{1:T} \mid x_{1:T})\big)$$
$$= \mathbb{E}_{q(y_{0:T}, z_{1:T} \mid x_{1:T})}\big[\log p(a_{1:T}, y_{0:T}, z_{1:T} \mid x_{1:T}) - \log q(y_{0:T}, z_{1:T} \mid x_{1:T})\big]$$
$$= \mathbb{E}_{q_{1:T}}\Big[\sum_{t=1}^{T} \log p(a_t \mid z_t, x_t) + \log p(z_t \mid y_t) + \log p(y_t \mid y_{t-1}) - \log q(z_t \mid y_t, x_t) - \log q(y_t \mid y_{t-1}, x_t)\Big]$$
$$= \sum_{t=1}^{T} \mathbb{E}_{q_{1:T}}\Big[\log p(a_t \mid z_t, x_t) - \mathrm{KL}\big(q(z_t \mid y_t, x_t)\,\|\,p(z_t \mid y_t)\big) - \mathrm{KL}\big(q(y_t \mid y_{t-1}, x_t)\,\|\,p(y_t \mid y_{t-1})\big)\Big] \qquad (11)$$
We note that the first two terms in the expectation depend only on timestep t, so we can simplify and marginalise exactly over all discrete {y1:T }\yt. For the final term, we note that the KL at timestep t is constant with respect to yt (as it already marginalises over the whole distribution), and only depends on yt−1. Lastly, we will use sampling to approximate the expectation over zt. This yields the following:
$$\text{ELBO} = \sum_{t=1}^{T} \mathbb{E}_{q(z_t \mid y_t, x_t)}\Big[\sum_{y_{0:T}} q(y_{0:T} \mid x_{1:T})\big(\log p(a_t \mid z_t, x_t) - \mathrm{KL}(q(z_t \mid y_t, x_t)\,\|\,p(z_t \mid y_t)) - \mathrm{KL}(q(y_t \mid y_{t-1}, x_t)\,\|\,p(y_t \mid y_{t-1}))\big)\Big]$$
$$\text{ELBO} \approx \sum_{t=1}^{T}\Big[\sum_{y_t} q(y_t \mid x_{1:t})\big(\underbrace{\log p(a_t \mid \tilde{z}_t^{\{y_t\}}, x_t)}_{\text{per-component recon loss}} - \beta_z \underbrace{\mathrm{KL}(q(z_t \mid y_t, x_t)\,\|\,p(z_t \mid y_t))}_{\text{per-component KL regulariser}}\big)\Big] - \beta_y \sum_{t=1}^{T}\Big[\sum_{y_{t-1}} q(y_{t-1} \mid x_{1:t-1})\, \underbrace{\mathrm{KL}(q(y_t \mid y_{t-1}, x_t)\,\|\,p(y_t \mid y_{t-1}))}_{\text{discrete regulariser}}\Big] \qquad (12)$$
where $\tilde{z}_t^{\{y_t\}} \sim q(z_t \mid y_t, x_t)$, the coefficients $\beta_y$ and $\beta_z$ can be used to weight the KL terms, and the cumulative component probability $q(y_t \mid x_{1:t})$ can be computed iteratively as:
$$q(y_t \mid x_{1:t}) = \sum_{y_{t-1}} q(y_t \mid y_{t-1}, x_t)\, q(y_{t-1} \mid x_{1:t-1}) \qquad (13)$$
In other words, for each timestep t and each mixture component, we compute the latent sample and the corresponding action log-probability, and the KL-divergence between the component posterior and prior. This is then marginalised over all yt, with an additional KL over the categorical transitions.
Structuring the graphical model and ELBO in this form has a number of useful properties. First, the ELBO terms include an action reconstruction loss and KL term for each mixture component, scaled by the posterior probability of each component given the history. For a given state, this pressures the model to assign higher posterior probability to components that have low reconstruction cost or KL, which allows different components to specialise for different parts of the state space. Second, the categorical KL between posterior and prior categorical transition distributions is scaled by the
posterior probability of the previous component given history q(yt−1 |x1:t−1): this allows the relative probabilities of past skill transitions along a trajectory to be considered when regularising the current skill distribution. Finally, this formulation does not require any sampling or backpropagation through the categorical variable: starting from t = 0, the terms for each timestep can be efficiently computed by recursively updating the posterior over components given history (q(yt |x1:t)), and summing over all possible categorical values at each timestep.
D ENVIRONMENT PARAMETERS
As discussed earlier in the paper, all experiments take place in a MuJoCo-based object manipulation environment using a Sawyer robot manipulator and three objects: red, green, and blue. The state variables in the Sawyer environment are shown in Table 3. All state variables are stacked for 3 frames for all agents. The object states are only provided to the mid-level and high-level for HeLMS runs, and the camera images are only used by the high- and mid-level controller in the vision transfer experiments (without object states).
The action space is also shown in Table 4. Since the action dimensions vary significantly in range, they are normalised to be between [−1, 1] for all methods during learning. When learning via RL, we apply domain randomisation to physics (but not visual randomisation), and a randomly sampled action delay of 0-2 timesteps. This is applied for all approaches, and ensures that we can learn a policy that is robust to small changes in the environment.
D.1 OBJECT SETS
As discussed in the main paper, we use the object sets defined by Lee et al. (2021), which are carefully designed to cover different object geometries and affordances, presenting different challenges for object interaction tasks. The object sets are shown in Figure 11 (the image has been taken directly from (Lee et al., 2021) for clarity), and feature both simulated and real-world versions; in this paper we focus on the simulated versions. As discussed in detail by (Lee et al., 2021), each object set has a different degree of difficulty and presents a different challenge to the task of stacking red-on-blue:
• In object set 1, the red object has slanted surfaces that make it difficult to grasp, while the blue object is an octagonal prism that can roll.
• In object set 2, the blue object has slanted surfaces, such that the red object will likely slide off unless the blue object is first reoriented.
• In object set 3, the red object is long and narrow, requiring a precise grasp and careful placement.
• Object set 4 is the easiest case with rectangular prisms for both red and blue.
• Object set 5 is also relatively easy, but the blue object has ten faces, meaning limited surface area for stacking.
For more details about the object sets and the rationale behind their design, we refer the reader to (Lee et al., 2021).
E NETWORK ARCHITECTURES AND HYPERPARAMETERS
The network architecture details and hyperparameters for HeLMS are shown in Table 5. Parameter sweeps were performed for the β coefficients during offline learning and the η coefficients during RL. Small sweeps were also performed for the RHPO parameters (refer to (Wulfmeier et al., 2020) for details), but these were found to be fairly insensitive. All other parameters were kept fixed, and used for all methods except where highlighted in the following subsections. All RL experiments were run with 3 seeds to capture variation in each method.
For network architectures, all experiments except for vision used simple 2-layer MLPs for the high- and low-level controllers, and for each mid-level mixture component. An input representation network was used to encode the inputs before passing them to the networks that were learned from scratch: i.e. the high-level for state-based experiments, and both high- and mid-level for vision (recall that while the state-based experiments can reuse the mid-level components conditioned on object state, the vision-based policy learned them from scratch and KL-regularised them to the offline mid-level skills). The critic network was a 3-layer MLP, applied to the output of another input representation network (separate to the actor, but with the same architecture) with the action concatenated.
F REWARDS
Throughout the experiments, we employ different reward functions for different tasks and to study the efficacy of our method in sparse versus dense reward scenarios.
Reward stages and primitive functions The reward functions for the stacking and pyramid tasks use various reward primitives and staged rewards for completing sub-tasks. Each of these rewards lies within the range [0, 1].
These include:
• reach(obj): a shaped distance reward to bring the TCP to within a certain tolerance of obj.
• grasp(): a binary reward for triggering the gripper's grasp sensor.
• close_fingers(): a shaped distance reward to bring the fingers inwards.
• lift(obj): shaped reward for lifting the gripper sufficiently high above obj.
• hover(obj1,obj2): shaped reward for holding obj1 above obj2.
• stack(obj1,obj2): a sparse reward, only provided if obj1 is on top of obj2 to within both a horizontal and vertical tolerance.
• above(obj,dist): shaped reward for being dist above obj, but anywhere horizontally.
• pyramid(obj1,obj2,obj3): a sparse reward, only provided if obj3 is on top of the point midway between obj1 and obj2, to within both a horizontal and vertical tolerance.
• place_near(obj1,obj2): sparse reward provided if obj1 is sufficiently near obj2.
Dense stacking reward The dense stacking reward contains a number of stages, where each stage represents a sub-task and has a maximum reward of 1. The stages are:
• reach(red) AND grasp(): Reach and grasp the red object.
• lift(red) AND grasp(): Lift the red object.
• hover(red,blue): Hover with the red object above the blue object.
• stack(red,blue): Place the red object on top of the blue one.
• stack(red,blue) AND above(red): Move the gripper above after a completed stack.
At each timestep, the latest stage to receive non-zero reward is considered to be the current stage, and all previous stages are assigned a reward of 1. The reward for this timestep is then obtained by summing rewards for all stages, and scaling by the number of stages, to ensure the highest possible reward on any timestep is 1.
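To make the staging logic concrete, the sketch below shows one way the described aggregation could be implemented. It is an illustrative Python snippet based on our reading of the text, not the authors' code; the stage ordering and the sparsify helper (which mirrors the sparse staged variant described next) are hypothetical.

```python
def staged_reward(stage_rewards):
    """Aggregate per-stage rewards (each in [0, 1], ordered by sub-task) into one timestep reward."""
    num_stages = len(stage_rewards)
    # The latest stage with non-zero reward is treated as the current stage.
    current = 0
    for i, r in enumerate(stage_rewards):
        if r > 0.0:
            current = i
    # All stages before the current one are credited with a full reward of 1.
    adjusted = [1.0] * current + list(stage_rewards[current:])
    # Sum over stages and scale so the highest possible per-timestep reward is 1.
    return sum(adjusted) / num_stages


def sparsify(stage_reward, threshold=0.95):
    """Sparse staged variant: a stage only pays out once its shaped reward exceeds the threshold."""
    return 1.0 if stage_reward > threshold else 0.0
```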
Sparse staged stacking reward The sparse staged stacking reward is similar to the dense reward variant, but each stage is sparsified by only providing the reward for the stage once it exceeds a value of 0.95.
This scenario emulates an important real-world problem: that it may be difficult in certain cases to specify carefully shaped meaningful rewards, and it can often be easier to specify (sparsely) whether a condition (such as stacking) has been met.
Sparse stacking reward This fully sparse reward uses the stack(red,blue) function to provide reward only when conditions for stacking red on blue have been met.
Pyramid reward The pyramid-building reward uses a staged sparse reward, where each stage represents a sub-task and has a maximum reward of 1. If a stage has dense reward, it is sparsified by only providing the reward once it exceeds a value of 0.95. The stages are:
• reach(red) AND grasp(): Reach and grasp the red object.
• lift(red) AND grasp(): Lift the red object.
• hover(red,green): Hover with the red object above the green object (with a larger horizontal tolerance, as it does not need to be directly above).
• place_near(red,green): Place the red object sufficiently close to the green object.
• reach(blue) AND grasp(): Reach and grasp the blue object.
• lift(blue) AND grasp(): Lift the blue object.
• hover(blue,green) AND hover(blue,red): Hover with the blue object above the central position between red and green objects.
• pyramid(blue,red,green): Place the blue object on top to make a pyramid.
• pyramid(blue,red,green) AND above(blue): Move the gripper above after a completed stack.
At each timestep, the latest stage to receive non-zero reward is considered to be the current stage, and all previous stages are assigned a reward of 1. The reward for this timestep is then obtained by summing rewards for all stages, and scaling by the number of stages, to ensure the highest possible reward on any timestep is 1. | 1. What is the focus and contribution of the paper regarding reinforcement learning?
2. What are the strengths and weaknesses of the proposed three-leveled hierarchy of skills?
3. Do you have any concerns about the experimental evaluation and comparisons with other methods?
4. Could you provide more explanation and examples of the benefits and limitations of the two levels of representation?
5. What are your suggestions for improving the methodology and providing more insightful results? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a method to learn a three-leveled hierarchy of skills offline, from a dataset of demonstrations, that can then be applied to accelerate reinforcement learning. The three-level architecture is novel. It encodes a discrete selection, a continuous contextual variable (dependent on the discrete selection) and a low-level policy dependent on the continuous contextual variable. The results show some improvements over baselines, especially in a sparse reward context where the presented method capitalizes on the offline learned strategies for exploration.
Review
Strengths:
The paper is well-written and easy to follow. It is nice to read (except for some parts of the method that are rushed)
The method is novel, extending prior work
The results show some improvements
Weaknesses:
Structurally, I like papers that push the related work section to the back because they first present some relevant common theoretical components to understand both the presented method and the related work. This is not the case here. I would recommend placing the related work after the introduction to fully understand and compare the method section to the previous methods.
The experimental evaluation is scarce. The method, although explained as very general, is only applied to one domain. Some of the results are not completely clear. The paper would gain on clarity and support for conclusions if the method would be applied to other RL domains, even simple ones. Trained policies could be used to generate expert demonstrations. In its current form, the experiments fall short.
The comparison to other methods, especially to NPMP, is not completely clear. The results are somewhat mixed. This is probably a consequence of the limited experimental evaluation with only one domain. Right now, apart from the sparse reward setup, it is not very clear the pros and cons of the presented method compared to previous ones.
Could you provide an intuition of what is the different information represented by the categorical and continuous high-level latent codes? Both represent the context. What is exactly the benefit of the two levels? Temporal consistency and commitment? Semantic information encoding? Additional experiments in other domains and with other mixtures of information at different levels would help
What happens if there are other observations passed to the different levels? How sensitive are the results to that?
What happens if during training one or more of the categorical values are not allowed? Can the system recover (find the necessary skills via exploration)?
I’d recommend including a small figure with the objects used in the experimental evaluation to avoid having to go back and forth to know what are “set1”, “set2”...
The method section is a bit rushed. I’d dedicate more time to go step by step over the derivation (not completely, that is in the appendix), explaining more of why things are done instead of just describing the mathematical equations. |
ICLR | Title
Learning transferable motor skills with hierarchical latent mixture policies
Abstract
For robots operating in the real world, it is desirable to learn reusable behaviours that can effectively be transferred and adapted to numerous tasks and scenarios. We propose an approach to learn abstract motor skills from data using a hierarchical mixture latent variable model. In contrast to existing work, our method exploits a three-level hierarchy of both discrete and continuous latent variables, to capture a set of high-level behaviours while allowing for variance in how they are executed. We demonstrate in manipulation domains that the method can effectively cluster offline data into distinct, executable behaviours, while retaining the flexibility of a continuous latent variable model. The resulting skills can be transferred and fine-tuned on new tasks, unseen objects, and from state to vision-based policies, yielding better sample efficiency and asymptotic performance compared to existing skilland imitation-based methods. We further analyse how and when the skills are most beneficial: they encourage directed exploration to cover large regions of the state space relevant to the task, making them most effective in challenging sparse-reward settings.
1 INTRODUCTION
Reinforcement learning is a powerful and flexible paradigm to train embodied agents, but relies on large amounts of agent experience, computation, and time, on each individual task. Learning each task from scratch is inefficient: it is desirable to learn a set of skills that can efficiently be reused and adapted to related downstream tasks. This is particularly pertinent for real-world robots, where interaction is expensive and data-efficiency is crucial. There are numerous existing approaches to learn transferable embodied skills, usually formulated as a two-level hierarchy with a high-level controller and low-level skills. These methods predominantly represent skills as being either continuous, such as goal-conditioned (Lynch et al., 2019; Pertsch et al., 2020b) or latent space policies (Haarnoja et al., 2018; Merel et al., 2019; Singh et al., 2021); or discrete, such as mixture or option-based methods (Sutton et al., 1999; Daniel et al., 2012; Florensa et al., 2017; Wulfmeier et al., 2021). Our goal is to combine these perspectives to leverage their complementary advantages.
We propose an approach to learn a three-level skill hierarchy from an offline dataset, capturing both discrete and continuous variations at multiple levels of behavioural abstraction. The model comprises a low-level latent-conditioned controller that can learn motor primitives, a set of continuous latent mid-level skills, and a discrete high-level controller that can compose and select among these abstract mid-level behaviours. Since the mid- and high-level form a mixture, we call our method Hierarchical Latent Mixtures of Skills (HeLMS). We demonstrate on challenging object manipulation tasks that our method can decompose a dataset into distinct, intuitive, and reusable behaviours. We show that these skills lead to improved sample efficiency and performance in numerous transfer scenarios: reusing skills for new tasks, generalising across unseen objects, and transferring from state to vision-based policies. Further analysis and ablations reveal that both continuous and discrete components are beneficial, and that the learned hierarchical skills are most useful in sparse-reward settings, as they encourage directed exploration of task-relevant parts of the state space. ∗Corresponding author. Email: [email protected] †Work done while at DeepMind
Our main contributions are as follows:
• We propose a novel approach to learn skills at different levels of abstraction from an offline dataset. The method captures both discrete behavioural modes and continuous variation using a hierarchical mixture latent variable model.
• We present two techniques to reuse and adapt the learned skill hierarchy via reinforcement learning in downstream tasks, and perform extensive evaluation and benchmarking in different transfer settings: to new tasks and objects, and from state to vision-based policies.
• We present a detailed analysis to interpret the learned skills, understand when they are most beneficial, and evaluate the utility of both continuous and discrete skill representations.
2 RELATED WORK
A long-standing challenge in reinforcement learning is the ability to learn reusable motor skills that can be transferred efficiently to related settings. One way to learn such skills is via multi-task reinforcement learning (Heess et al., 2016; James et al., 2018; Hausman et al., 2018; Riedmiller et al., 2018), with the intuition that behaviors useful for a given task should aid the learning of related tasks. However, this often requires careful curation of the task set, where each skill represents a separate task. Some approaches avoid this by learning skills in an unsupervised manner using intrinsic objectives that often maximize the entropy of visited states while keeping skills distinguishable (Gregor et al., 2017; Eysenbach et al., 2019; Sharma et al., 2019; Zhang et al., 2020).
A large body of work explores skills from the perspective of unsupervised segmentation of repeatable behaviours in temporal data (Niekum & Barto, 2011; Ranchod et al., 2015; Krüger et al., 2016; Lioutikov et al., 2017; Shiarlis et al., 2018; Kipf et al., 2019; Tanneberg et al., 2021). Other works investigate movement or motor primitives that can be selected or sequenced together to solve complex manipulation or locomotion tasks (Mülling et al., 2013; Rueckert et al., 2015; Lioutikov et al., 2015; Paraschos et al., 2018; Merel et al., 2020; Tosatto et al., 2021; Dalal et al., 2021). Some of these methods also employ mixture models to jointly model low-level motion primitives and a high-level primitive controller (Muelling et al., 2010; Colomé & Torras, 2018; Pervez & Lee, 2018); the high-level controller can also be implicit and decentralised over the low-level primitives (Goyal et al., 2019).
Several existing approaches employ architectures in which the policy is comprised of two (or more) levels of hierarchy. Typically, a low-level controller represents the learned set of skills, and a high-level policy instructs the low-level controller via a latent variable or goal. Such latent variables can be discrete (Florensa et al., 2017; Wulfmeier et al., 2020) or continuous (Nachum et al., 2018; Haarnoja et al., 2018) and regularization of the latent space is often crucial (Tirumala et al., 2019). The latent variable can represent the behaviour for one timestep, for a fixed number of timesteps (Ajay et al., 2021), or options with different durations (Sutton et al., 1999; Bacon et al., 2017; Wulfmeier et al., 2021). One such approach that is particularly relevant (Florensa et al., 2017) learns a diverse set of skills, via a discrete latent variable that interacts multiplicatively with the state to enable continuous variation in a Stochastic Neural Network policy; this skill space is then transferred to locomotion tasks by learning a new categorical controller. Our method differs in a few key aspects: our proposed three-level hierarchical architecture explicitly models abstract discrete skills while allowing for temporal dependence and lower-level latent variation in their execution, enabling diverse object-centric behaviours in challenging manipulation tasks.
Our work is related to methods that learn robot policies from demonstrations (LfD, e.g. (Rajeswaran et al., 2018; Shiarlis et al., 2018; Strudel et al., 2020)) or more broadly from logged data (offline RL, e.g. (Wu et al., 2019; Kumar et al., 2020; Wang et al., 2020)). While many of these focus on learning single-task policies, several approaches learn skills offline that can be transferred online to new tasks (Merel et al., 2019; Lynch et al., 2019; Pertsch et al., 2020a; Ajay et al., 2021; Singh et al., 2021). These all train a two-level hierarchical model, with a high-level encoder that maps to a continuous latent space, and a low-level latent-conditioned controller. The high-level encoder can encode a whole trajectory (Pertsch et al., 2020a; 2021; Ajay et al., 2021); a short look-ahead state sequence (Merel et al., 2019); the current and final goal state (Lynch et al., 2019); or can even be simple isotropic Gaussian noise (Singh et al., 2021) that can be flexibly transformed by a flow-based low-level controller. At transfer time, a new high-level policy is learned from scratch: this can be more efficient with skill priors (Pertsch et al., 2020a) or temporal abstraction (Ajay et al., 2021).
HeLMS builds on this large body of work by explicitly modelling both discrete and continuous behavioural structure via a three-level skill hierarchy. We use similar information asymmetry to Neural Probabilistic Motor Primitives (NPMP) (Merel et al., 2019; 2020), conditioning the highlevel encoder on a short look-ahead trajectory. However HeLMS explicitly captures discrete modes of behaviour via the high-level controller, and learns an additional mid-level which is able to transfer abstract skills to downstream tasks, rather than learning a continuous latent policy from scratch.
3 METHOD
This paper examines a two-stage problem setup: an offline stage where a hierarchical skill space is learned from a dataset, and an online stage where these skills are transferred to a reinforcement learning setting. The dataset D comprises a set of trajectories, each a sequence of state-action pairs {xt,at}Tt=0. The model incorporates a discrete latent variable yt ∈ {1, . . . ,K} as a high-level skill selector (for a fixed number of skills K), and a mid-level continuous variable zt ∈ Rnz conditioned on yt which parameterises each skill. Marginally, zt is then a latent mixture distribution representing both a discrete set of skills and the variation in their execution. A sample of zt represents an abstract behaviour, which is then executed by a low-level controller p(at | zt,xt). The learned skill space can then be transferred to a reinforcement learning agent π in a Markov Decision Process defined by tuple {S,A, T ,R, γ}: these represent the state, action, and transition distributions, reward function, and discount factor respectively. When transferring, we train a new high-level controller that acts either at the level of discrete skills yt or continuous zt, and freeze lower levels of the policy.
We explain our method in detail in the following sections.
3.1 LATENT MIXTURE SKILL SPACES FROM OFFLINE DATA
Our method employs the generative model in Figure 1a. As shown, the state inputs can be different for each level of the hierarchy, but to keep notation uncluttered, we refer to all state inputs as xt and the specific input can be inferred from context. The joint distribution of actions and latents over a trajectory is decomposed into a latent prior p(y0:T , z1:T ) and a low-level controller p(at | zt,xt):
p(a_{1:T}, y_{0:T}, z_{1:T} | x_{1:T}) = p(y_{0:T}, z_{1:T}) \prod_{t=1}^{T} p(a_t | z_t, x_t),
p(y_{0:T}, z_{1:T}) = p(y_0) \prod_{t=1}^{T} p(y_t | y_{t-1}) p(z_t | y_t).   (1)
Intuitively, the categorical variable yt can capture discrete modes of behaviour, and the continuous latent zt is conditioned on this to vary the execution of each behaviour. Thus, zt follows a mixture
distribution, encoding all the relevant information on desired abstract behaviour for the low-level controller p(at | zt,xt). Since each categorical latent yt is dependent on yt−1, and zt is only dependent on yt, this prior can be thought of as a Hidden Markov model over the sequence of z1:T .
To perform inference over the latent variables, we introduce the variational approximation:
q(y_{0:T}, z_{1:T} | x_{1:T}) = p(y_0) \prod_{t=1}^{T} q(y_t | y_{t-1}, x_t) q(z_t | y_t, x_t)   (2)
Here, the selection of a skill yt ∼ q(yt |yt−1,xt) is dependent on that of the previous timestep (allowing for temporal consistency), as well as the input. The mid-level skill is then parameterised by zt ∼ q(zt |yt,xt) based on the chosen skill and current input. p(y0) and p(yt |yt−1) model a skill prior and skill transition prior respectively, while p(zt |yt) represents a skill parameterisation prior to regularise each mid-level skill. While all of these priors can be learned in practice, we only found it necessary to learn the transition prior, with a uniform categorical for the initial skill prior and a simple fixed N (0, I) prior for p(zt |yt).
Training via the Evidence Lower Bound The proposed model contains a number of components with trainable parameters: the prior parameters \psi = \{\psi_a, \psi_y\} for the low-level controller and categorical transition prior respectively; and posterior parameters \phi = \{\phi_y, \phi_z\} for the high-level controller and mid-level skills. For a trajectory \{x_{1:T}, a_{1:T}\} \sim D, we can compute the Evidence Lower Bound for the state-conditional action distribution, ELBO \leq \log p(a_{1:T} | x_{1:T}), as follows:
ELBO = E_{q_\phi(y_{0:T}, z_{1:T} | x_{1:T})} [ \log p_\psi(a_{1:T}, y_{0:T}, z_{1:T} | x_{1:T}) - \log q_\phi(y_{0:T}, z_{1:T} | x_{1:T}) ]
     \approx \sum_{t=1}^{T} [ \sum_{y_t} q(y_t | x_{1:t}) ( \underbrace{\log p_{\psi_a}(a_t | \tilde{z}_t^{\{y_t\}}, x_t)}_{\text{per-component action recon}} - \beta_z \underbrace{KL(q_{\phi_z}(z_t | y_t, x_t) \,\|\, p(z_t | y_t))}_{\text{per-component KL regulariser}} ) ]
     - \beta_y \sum_{t=1}^{T} [ \sum_{y_{t-1}} q(y_{t-1} | x_{1:t-1}) \underbrace{KL( q_{\phi_y}(y_t | y_{t-1}, x_t) \,\|\, p_{\psi_y}(y_t | y_{t-1}) )}_{\text{categorical regulariser}} ]   (3)
where \tilde{z}_t^{\{y_t\}} \sim q(z_t | y_t, x_t). The coefficients \beta_y and \beta_z can be used to weight the KL terms, and the cumulative component probability q(y_t | x_{1:t}) can be computed iteratively as q(y_t | x_{1:t}) = \sum_{y_{t-1}} q_{\phi_y}(y_t | y_{t-1}, x_t) q(y_{t-1} | x_{1:t-1}). In other words, for each timestep t and each mixture component, we compute the latent sample and the corresponding action log-probability, and the KL-divergence between the component posterior and prior. This is then marginalised over all y_t, with an additional KL over the categorical transitions. For more details, see Appendix C.
Information-asymmetry As noted in previous work (Tirumala et al., 2019; Galashov et al., 2019), hierarchical approaches often benefit from information-asymmetry, with higher levels seeing additional context or task-specific information. This ensures that the high-level remains responsible for abstract, task-related behaviours, while the low-level executes simpler motor primitives. We employ similar techniques in HeLMS: the low-level inputs xLL comprise the proprioceptive state of the embodied agent; the mid-level inputs xML also include the poses of objects in the environment; and the high-level xHL concatenates both object and proprioceptive state for a short number of lookahead timesteps. The high- and low-level are similar to (Merel et al., 2019), with the low-level controller enabling motor primitives based on proprioceptive information, and the high-level using the lookahead information to provide additional context regarding behavioural intent when specifying which skill to use. The key difference is the categorical high-level and the additional mid-level, with which HeLMS can learn more object-centric skills and transfer these to downstream tasks.
Network architectures The architecture and information flow in HeLMS are shown in Figure 1b. The high-level network contains a gated head, which uses the previous skill yt−1 to index into one of K categorical heads, each of which specify a distribution over yt. For a given yt, the corresponding mid-level skill network is selected and used to sample a latent action zt, which is then used as input for the latent-conditioned low-level controller, which parameterises the action distribution. The skill transition prior p(yt |yt−1) is also learned, and is parameterised as a linear softmax layer which takes in a one-hot representation of yt−1 and outputs the distribution over yt. All components are trained end-to-end via the objective in Equation 3.
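As a rough illustration of this information flow, the following PyTorch-style sketch mirrors the gated categorical head, per-skill Gaussian mid-level networks, latent-conditioned low-level controller, and linear-softmax transition prior described above. It is a simplified, unbatched approximation written for this summary; the layer sizes, names, and single-sample forward pass are our own assumptions rather than the released architecture.

```python
import torch
import torch.nn as nn


class HierarchicalLatentMixturePolicy(nn.Module):
    def __init__(self, x_hl_dim, x_ml_dim, x_ll_dim, num_skills, z_dim, act_dim, hidden=256):
        super().__init__()
        # Gated high level: one categorical head per previous skill y_{t-1}.
        self.high_level = nn.ModuleList([
            nn.Sequential(nn.Linear(x_hl_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_skills))
            for _ in range(num_skills)])
        # Mid level: one Gaussian skill network per discrete skill y_t.
        self.mid_level = nn.ModuleList([
            nn.Sequential(nn.Linear(x_ml_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * z_dim))
            for _ in range(num_skills)])
        # Low level: latent-conditioned controller over proprioceptive input.
        self.low_level = nn.Sequential(
            nn.Linear(x_ll_dim + z_dim, hidden), nn.ReLU(), nn.Linear(hidden, act_dim))
        # Learned skill transition prior p(y_t | y_{t-1}): linear softmax over one-hot y_{t-1}.
        self.transition_prior = nn.Linear(num_skills, num_skills)
        self.z_dim = z_dim

    def forward(self, x_hl, x_ml, x_ll, prev_skill):
        # Select the categorical head indexed by the previous skill and sample y_t.
        logits = self.high_level[prev_skill](x_hl)
        skill = int(torch.distributions.Categorical(logits=logits).sample())
        # Parameterise the chosen mid-level skill and sample the latent z_t.
        mean, log_std = self.mid_level[skill](x_ml).split(self.z_dim, dim=-1)
        z = torch.distributions.Normal(mean, log_std.exp()).rsample()
        # Decode the abstract behaviour into an action with the low-level controller.
        action = self.low_level(torch.cat([x_ll, z], dim=-1))
        return action, skill
```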
3.2 REINFORCEMENT LEARNING WITH RELOADED SKILLS
Once learned, we propose two methods to transfer the hierarchical skill space to downstream tasks. Following previous work (e.g. (Merel et al., 2019; Singh et al., 2021)), we freeze the low-level controller p(at | zt,xt), and learn a policy for either the continuous (zt) or discrete (yt) latent.
Categorical agent One simple and effective technique is to additionally freeze the mid-level components q(zt |yt,xt), and learn a categorical high-level controller π(yt |xt) for the downstream task. The learning objective is given by:
J = E_\pi [ \sum_t \gamma^t ( r_t - \eta_y KL(\pi(y_t | x_t) \,\|\, \pi_0(y_t | x_t)) ) ],   (4)
where the standard discounted return objective in RL is augmented by an additional term performing KL-regularisation to some prior π0 scaled by coefficient ηy . This could be any categorical distribution such as the previously learned transition prior p(yt |yt−1), but in this paper we regularise to the uniform categorical prior to encourage diversity. While any RL algorithm could be used to optimize π(yt |xt), in this paper we use MPO (Abdolmaleki et al., 2018) with a categorical action distribution (see Appendix B for details). We hypothesise that this method improves sample efficiency by converting a continuous control problem into a discrete abstract action space, which may also aid in credit assignment. However, since both the mid-level components and low-level are frozen, it can limit flexibility and plasticity, and also requires that all of the mid- and low-level input states are available in the downstream task. We call this method HeLMS-cat.
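For concreteness, the per-step regularisation term of Equation 4 with a uniform prior could be computed along the lines of the sketch below; the function and argument names are illustrative, and we assume an unbatched logits vector.

```python
import torch
from torch.distributions import Categorical, kl_divergence


def categorical_kl_penalty(policy_logits, eta_y):
    """eta_y * KL(pi(y_t | x_t) || uniform prior), subtracted from the reward as in Eq. 4."""
    num_skills = policy_logits.shape[-1]
    uniform_prior = Categorical(probs=torch.full((num_skills,), 1.0 / num_skills))
    return eta_y * kl_divergence(Categorical(logits=policy_logits), uniform_prior)
```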
Mixture agent A more flexible method of transfer is to train a latent mixture policy, \pi(z_t | x_t) = \sum_{y_t} \pi(y_t | x_t) \pi(z_t | y_t, x_t). In this case, the learning objective is given by:
J = E_\pi [ \sum_t \gamma^t ( r_t - \eta_y KL(\pi(y_t | x_t) \,\|\, \pi_0(y_t | x_t)) - \eta_z \sum_{y_t} KL(\pi(z_t | y_t, x_t) \,\|\, \pi_0(z_t | y_t, x_t)) ) ],   (5)
where in addition to the categorical prior, we also regularise each mid-level skill to a corresponding prior π0(zt |yt,xt). While the priors could be any policies, we set them to be the skill posteriors q(zt |yt,xt) learned offline, to ensure the mixture components remain close to the pre-learned skills. This is related to (Tirumala et al., 2019), which also applies KL-regularisation at multiple levels of a hierarchy. While the high-level controller π(yt |xt) is learned from scratch, the mixture components can also be initialised to q(zt |yt,xt), to allow for initial exploration over the space of skills. Alternatively, the mixture components can use different inputs, such as vision: this setup allows vision-based skills to be learned efficiently by regularising to state-based skills learned offline. We optimise this using RHPO (Wulfmeier et al., 2020), which employs a similar underlying optimisation to MPO for mixture policies (see Appendix B for details). We call this HeLMS-mix.
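The analogous penalty for the mixture agent in Equation 5 adds a per-component term that keeps each mid-level skill close to its offline counterpart. A minimal sketch, assuming diagonal-Gaussian components and illustrative argument names:

```python
import torch
from torch.distributions import Categorical, Normal, kl_divergence


def mixture_kl_penalty(policy_logits, prior_logits, policy_components, prior_components,
                       eta_y, eta_z):
    """Per-step regulariser of Eq. 5: categorical KL plus a KL for every mid-level component.

    `policy_components` / `prior_components` are lists of (mean, std) tensors, one pair per
    skill, for the learned mixture and for the offline skills used as priors."""
    penalty = eta_y * kl_divergence(Categorical(logits=policy_logits),
                                    Categorical(logits=prior_logits))
    for (mean, std), (prior_mean, prior_std) in zip(policy_components, prior_components):
        penalty = penalty + eta_z * kl_divergence(Normal(mean, std),
                                                  Normal(prior_mean, prior_std)).sum()
    return penalty
```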
4 EXPERIMENTS
Our experiments focus on the following questions: (1) Can we learn a hierarchical latent mixture skill space of distinct, interpretable behaviours? (2) How do we best reuse this skill space to improve sample efficiency and performance on downstream tasks? (3) Can the learned skills transfer effectively to multiple downstream scenarios: (i) different objects; (ii) different tasks; and (iii) different modalities such as vision-based policies? (4) How exactly do these skills aid learning of downstream manipulation tasks? Do they aid exploration? Are they useful in sparse or dense reward scenarios?
4.1 EXPERIMENTAL SETUP
Environment and Tasks We focus on manipulation tasks, using a MuJoCo-based environment with a single Sawyer arm, and three objects coloured red, green, and blue. We follow the challenging object stacking benchmark of Lee et al. (2021), which specifies five object sets (Figure 2), carefully designed to have diverse geometries and present different challenges for a stacking agent. These range from simple rectangular objects (object set 4), to geometries such as slanted faces (sets 1 and 2) that make grasping or stacking the objects more challenging. This environment allows us
to systematically evaluate generalisation of manipulation behaviours for different tasks interacting with geometrically different objects. For further information, we refer the reader to Appendix D.1 or to (Lee et al., 2021). Details of the rewards for the different tasks are also provided in Appendix F.
Datasets To evaluate our approach and baselines in the manipulation settings, we use two datasets:
• red_on_blue_stacking: this data is collected by an agent trained to stack the red object on the blue object and ignore the green one, for the simplest object set, set4.
• all_pairs_stacking: similar to the previous case, but with all six pairwise stacking combinations of {red, green, blue}, and covering all of the five object sets.
Baselines For evaluation in transfer scenarios, we compare HeLMS with a number of baselines:
• From scratch: We learn the task from scratch with MPO, without an offline learning phase.
• NPMP+KL: We compare against NPMP (Merel et al., 2019), which is the most similar skill-based approach in terms of information-asymmetry and policy conditioning. We make some small changes to the originally proposed method, and also apply additional KL-regularisation to the latent prior: we found this to improve performance significantly in our experiments. For more details and an ablation, see Appendix A.2.
• Behaviour Cloning (BC): We apply behaviour cloning to the dataset, and fine-tune this policy via MPO on the downstream task. While the actor is initialised to the solution obtained via BC, the critic still needs to be learned from scratch.
• Hierarchical BC: We evaluate a hierarchical variant of BC with a similar latent space z to NPMP using a latent Gaussian high-level controller. However, rather than freezing the low-level and learning just a high-level policy, Hierarchical BC fine-tunes the entire model.
• Asymmetric actor-critic: For state-to-vision transfer, HeLMS uses prior skills that depend on object states to learn a purely vision-based policy. Thus, we also compare against a variant of MPO with an asymmetric actor-critic (Pinto et al., 2017) setup, which uses object states differently: to speed up learning of the critic, while still learning a vision-based actor.
4.2 LEARNING SKILLS FROM OFFLINE DATA
We first aim to understand whether we can learn a set of distinct and interpretable skills from data (question (1)). For this, we train HeLMS on the red_on_blue_stacking dataset with 5 skills.
(Figure panel labels: (a) Set 1, (b) Set 2, (c) Set 3, (d) Set 5.)
Figure 5: (a) Performance on pyramid task; and (b) image sequence showing episode rollout from a learned solution on this task (left-to-right, top-to-bottom).
Figure 6: Performance for vision-based stacking.
Figure 3a shows some example episode rollouts when the learned hierarchical agent is executed in the environment, holding the high-level categorical skill constant for an episode. Each row represents a different skill component, and the resulting behaviours are both distinct and diverse: for example, a lifting skill (row 1) where the gripper closes and rises up, a reaching skill (row 2) where the gripper moves to the red object, or a grasping skill (row 3) where the gripper lowers and closes its fingers. Furthermore, without explicitly encouraging this, the emergent skills capture temporal consistency: Figure 3b shows the learned prior p(yt |yt−1) (visualised as a transition matrix) assigns high probability along the diagonal (remaining in the same skill). Finally, Figure 3c demonstrates that all skills are used, without degeneracy.
4.3 TRANSFER TO DOWNSTREAM TASKS
Generalising to different objects We next evaluate whether the previously learned skills (i.e. trained on the simple objects in set 4) can effectively transfer to more challenging object interaction scenarios: the other four object sets proposed by (Lee et al., 2021). The task uses a sparse staged reward, with reward incrementally given after completing each sub-goal of the stacking task. As shown in Figure 4, both variants of HeLMS learn significantly faster than baselines on the different object sets. Compared to the strongest baseline (NPMP), HeLMS reaches better average asymptotic performance (and much lower variance) on two object sets (1 and 3), performs similarly on set 5, and does poorer on object set 2. The performance on object set 2 potentially highlights a trade-off between incorporating higher-level abstract behaviours and maintaining low-level flexibility: this object set often requires a reorientation of the bottom object due to its slanted faces, a behaviour that is not common in the offline dataset, which might require greater adaptation of mid- and low-level skills. This is an interesting investigation we leave for future work.
Compositional reuse of skills To evaluate whether the learned skills are composable for new tasks, we train HeLMS on the all_pairs_stacking dataset with 10 skills, and transfer to a pyramid task. In this setting, the agent has to place the red object adjacent to the green object, and stack the blue object on top to construct a pyramid. The task is specified via a sparse staged reward
for each stage or sub-task: reaching, grasping, lifting, and placing the red object, and subsequently the blue object. In Figure 5(a), we plot the performance of both variants of our approach, as well as NPMP and MPO; we omit the BC baselines as this involves transferring to a fundamentally different task. Both HeLMS-mix and HeLMS-cat reach a higher asymptotic performance than both NPMP and MPO, indicating that the learned skills can be better transferred to a different task. We show an episode rollout in Figure 5(b) in which the learned agent can successfully solve the task.
From state to vision-based policies While our method learns skills from proprioception and object state, we evaluate whether these skills can be used to more efficiently learn a vision-based policy. This is invaluable for practical real-world scenarios, since the agent acts from pure visual observation at test time without requiring privileged and often difficult-to-obtain object state information.
We use the HeLMS-mix variant to transfer skills to a vision-based policy, by reusing the low-level controller, initialising a new high-level controller and mid-level latent skills (with vision and proprioception as input), and KL-regularising these to the previously learned state-based skills. While the learned policy is vision-based, this KL-regularisation still assumes access to object states during training. For a fair comparison, we additionally compare our approach with a version of MPO using an asymmetric critic (Pinto et al., 2017), which exploits object state information instead of vision in the critic, and also use this for HeLMS. As shown in Figure 6, learning a vision-based policy with MPO from scratch is very slow and computationally intensive, but an asymmetric critic significantly speeds up learning, supporting the empirical findings of Pinto et al. (2017). However, HeLMS once again demonstrates better sample efficiency, and reaches slightly better asymptotic performance. We note that this uses the same offline model as for the object generalisation experiments, showing that the same state-based skill space can be reused in numerous settings, even for vision-based tasks.
4.4 WHERE AND HOW CAN HIERARCHICAL SKILL REUSE BE EFFECTIVE?
Sparse reward tasks We first investigate how HeLMS performs for different rewards: a dense shaped reward, the sparse staged reward from the object generalisation experiments, and a fully sparse reward that is only provided after the agent stacks the object. For this experiment, we use the skill space trained on red_on_blue_stacking and transfer it to the same RL task of stacking on object set 4. The results are shown in Figure 7. With a dense reward (and no object transfer required), all of the approaches can successfully learn the task. With the sparse staged reward, the baselines all plateau at a lower performance, with the exception of NPMP, as previously discussed. However, for the challenging fully-sparse scenario, HeLMS is the only method that achieves nonzero reward. This neatly illustrates the benefit of the proposed hierarchy of skills: it allows for directed exploration which ensures that even sparse rewards can be encountered. This is consistent with observations from prior work in hierarchical reinforcement learning (Florensa et al., 2017; Nachum et al., 2019), and we next investigate this claim in more depth for our manipulation setting.
Exploration To measure whether the proposed approach leads to more directed exploration, we record the average coverage in state space at the start of RL (i.e. zero-shot transfer). This is computed as the variance (over an episode) of the state xt, separated into three interpretable groups:
Method   | Dense reward | Staged reward | Joints (×10⁻²) | Grasp (×10⁻²) | Objects (×10⁻²)
MPO      | 3.16         | 0.0           | 8.72           | 0.004         | 1.21
NPMP     | 3.67         | 0.0           | 3.43           | 0.0           | 1.45
BC       | 31.68        | 0.004         | 4.22           | 0.05          | 1.52
Hier. BC | 16.42        | 0.004         | 2.61           | 0.04          | 1.31
HeLMS    | 20.46        | 0.05          | 2.98           | 1.10          | 1.61
Table 1: Analysis of zero-shot exploration at the start of RL, in terms of reward and state coverage (variance over an episode of different subsets of the agent’s state). Results are averaged over 1000 episodes.
Figure 8: Ablation for continuous and discrete components during offline learning, when transferring to the (a) easy case (object set 4) and (b) hard case (object set 3).
joints (angles and velocity), grasp (a simulated grasp sensor), and object poses. We also record the total reward (dense and sparse staged). The results are reported in Table 1. While all approaches achieve some zero-shot dense reward (with BC the most effective), HeLMS1 receives a sparse staged reward an order of magnitude greater. Further, in this experiment we found it was able to achieve the fully sparse reward (stacked) in one episode. Analysing the state coverage results, while other methods are able to cover the joint space more (e.g. by randomly moving the joints), HeLMS is nearly two orders of magnitude higher for grasp states. This indicates the utility of hierarchical skills: by acting over the space of abstract skills rather than low-level actions, HeLMS performs directed exploration and targets particular states of interest, such as grasping an object.
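The coverage metric itself is simple to compute. A possible implementation is sketched below; the grouping of state dimensions and the averaging over dimensions within a group are our assumptions.

```python
import numpy as np


def state_coverage(episode_states, groups):
    """Variance of the state over an episode, reported per group of dimensions.

    `episode_states` is a [T, D] array of states for one episode; `groups` maps a group name
    (e.g. 'joints', 'grasp', 'objects') to the list of state dimensions it covers."""
    return {name: float(np.var(episode_states[:, dims], axis=0).mean())
            for name, dims in groups.items()}
```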
4.5 ABLATION STUDIES
Capturing continuous and discrete structure To evaluate the benefit of both continuous and discrete components, we train our method with a fixed variance of zero for each latent component (i.e. ‘discrete-only’) and transfer to the stacking task with sparse staged reward in an easy case (object set 4) and hard case (object set 3), as shown in Figure 8(a) and (b). We also evaluate the ‘continuous-only’ case with just a single Gaussian to represent the high- and mid-level skills: this is equivalent to the NPMP+KL baseline. We observe that the discrete component alone leads to improved sample efficiency in both cases, but modelling both discrete and continuous latent behaviours makes a significant difference in the hard case. In other words, when adapting to challenging objects, it is important to capture discrete skills, but allow for latent variation in how they are executed.
KL-regularisation We also perform an ablation for KL-regularisation during the offline phase (via βz) and online RL (via ηz), to gauge the impact on transfer; see Appendix A.1 for details.
5 CONCLUSION
We present HeLMS, an approach to learn transferable and reusable skills from offline data using a hierarchical mixture latent variable model. We analyse the learned skills to show that they effectively cluster data into distinct, interpretable behaviours. We demonstrate that the learned skills can be flexibly transferred to different tasks, unseen objects, and to different modalities (such as from state to vision). Ablation studies indicate that it is beneficial to model both discrete modes and continuous variation in behaviour, and highlight the importance of KL-regularisation when transferring to RL and fine-tuning the entire mixture of skills. We also perform extensive analysis to understand where and how the proposed skill hierarchy can be most useful: we find that it is particularly invaluable in sparse reward settings due to its ability to perform directed exploration.
There are a number of interesting avenues for future work. While our model demonstrated temporal consistency, it would be useful to more actively encourage and exploit this for sample-efficient transfer. It would also be useful to extend this work to better fine-tune lower level behaviours, to allow for flexibility while exploiting high-level behavioural abstractions.
1Note that HeLMS-cat and HeLMS-mix are identical for this analysis: at the start of reinforcement learning, both variants transfer the mid-level skills while initialising a new high-level controller.
ACKNOWLEDGMENTS
The authors would like to thank Coline Devin for detailed comments on the paper and for generating the all_pairs_stacking dataset. We would also like to thank Alex X. Lee and Konstantinos Bousmalis for help with setting up manipulation experiments. We are also grateful to reviewers for their feedback.
A ADDITIONAL EXPERIMENTS
A.1 ABLATIONS FOR KL-REGULARISATION
In these experiments, we investigate the effect of KL-regularisation on the mid-level components, both for the offline learning phase (regularising each component to p(z_t | y_t) = N(0, I) via coefficient \beta_z), and the online reinforcement learning stage via HeLMS-mix (regularising each component to the mid-level skills learned offline, via coefficient \eta_z). The results are reported in Figure 9, where each plot represents a different setting for offline KL-regularisation (either regularisation to N(0, I) with \beta_z = 0.01, or no regularisation with \beta_z = 0) and a different transfer case (the easy case of transferring to object set 4, or the hard case of transferring to object set 3). Each plot shows the downstream performance when varying the strength of KL-regularisation during RL via coefficient \eta_z. The HeLMS-cat approach represents the extreme case where the skills are entirely frozen (i.e. full regularisation).
The results suggest some interesting properties of the latent skill space based on regularisation. When regularising the mid-level components to the N (0, I) prior, it is important to regularise during online RL; this is especially true for the hard transfer case, where HeLMS-cat performs much better, and the performance degrades significantly with lower regularisation values. However, when removing mid-level regularisation during offline learning, the method is insensitive to regularisation during RL over the entire range evaluated, from 0.01 to 100.0. We conjecture that with mid-level skills regularised to N (0, I), the different mid-level skills are drawn closer together and occupy a more compact region in latent space, such that KL-regularisation is necessary during RL for a skill to avoid drifting and overlapping with the latent distribution of other skills (i.e. skill degeneracy). In contrast, without offline KL-regularisation, the skills are free to expand and occupy more distant regions of the latent space, rendering further regularisation unnecessary during RL. Such latent space properties could be further analysed to improve learning and transfer of skills; we leave this as an interesting direction for future work.
A.2 NPMP ABLATION
The Neural Probabilistic Motor Primitives (NPMP) work (Merel et al., 2019) presents a strong baseline approach to learning transferable motor behaviours, and we run ablations to ensure a fair comparison to the strongest possible result. As discussed in the main text, NPMP employs a Gaussian high-level latent encoder with a AR(1) prior in the latent space. We also try a fixed N(0, I) prior (this is equivalent to an AR(1) prior with a coefficient of 0, so can be considered a hyperparameter choice). Since our method benefits from KL-regularisation during RL, we apply this to NPMP as well.
As shown in Figure 10, we find that both changes lead to substantial improvements in the manipulation domain, on all five object sets. Consequently, in our main experiments, we report results with the best variant, using a N(0, I) prior with KL-regularisation during RL.
B REINFORCEMENT LEARNING WITH MPO AND RHPO
As discussed in Section 3.2, the hierarchy of skills are transferred to RL in two ways: HeLMScat, which learns a new high-level categorical policy π(yt |xt) via MPO (Abdolmaleki et al., 2018); or HeLMS-mix, which learns a mixture policy π(zt |xt) = ∑ yt π(yt |xt)π(zt |yt,xt) via RHPO (Wulfmeier et al., 2020). We describe the optimisation for both of these cases in the following subsections. For clarity of notation, we omit the additional KL-regularisation terms introduced in Section 3.2 and describe just the base methods of MPO and RHPO when applied to the RL setting in this paper. These KL-terms are incorporated as additional loss terms in the policy improvement stage.
B.1 HELMS-CAT VIA MPO
Maximum a posteriori Policy Optimisation (MPO) is an Expectation-Maximisation-based algorithm that performs off-policy updates in three steps: (1) updating the critic; (2) creating a non-parametric intermediate policy by weighting sampled actions using the critic; and (3) updating the parametric policy to fit the critic-reweighted non-parametric policy, with trust region constraints to improve stability. We detail each of these steps below. Note that while the original MPO operates in the environment’s action space, we use it here for the high-level controller, to set the categorical variable yt.
Policy evaluation First, the critic is updated via a TD(0) objective as:
\min_\theta L(\theta) = E_{x_t, y_t \sim B} [ ( Q^T - Q_\phi(x_t, y_t) )^2 ],   (6)
Here, QT = rt + γExt+1,yt+1 [Q′(st+1,yt+1)] is the 1-step target with the state transition (xt,yt,xt+1) returned from the replay buffer B, and next action sampled from yt+1 ∼ π′(·|xt+1). π′ and Q′ are target networks for the policy and the critic, used to stabilise learning.
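In the categorical case the expectation over the next skill can be taken exactly, so the one-step target reduces to a probability-weighted sum over the target critic's values. A small sketch with illustrative names:

```python
import numpy as np


def td0_target(reward, discount, next_skill_probs, next_q_values):
    """r_t + gamma * E_{y ~ pi'(.|x_{t+1})}[ Q'(x_{t+1}, y) ] for a K-way categorical policy."""
    expected_next_q = float(np.sum(next_skill_probs * next_q_values))
    return reward + discount * expected_next_q
```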
Policy improvement Next, we proceed with the first step of policy improvement by constructing an intermediate non-parametric policy q(yt|xt), and optimising the following constrained objective:
\max_q J(q) = E_{y_t \sim q,\, x_t \sim B} [ Q_\phi(x_t, y_t) ],   s.t.   E_{x_t \sim B} [ KL( q(\cdot | x_t) \,\|\, \pi_{\theta_k}(\cdot | x_t) ) ] \leq \epsilon_E,   (7)
where \epsilon_E defines a bound on the KL divergence between the non-parametric and parametric policies at the current learning step k. This constrained optimisation problem has the following closed-form solution:
q(yt |xt) ∝ πθk(yt |xt) exp (Qφ(xt,yt)/η) . (8)
In other words, this step constructs an intermediate policy which reweights samples from the previous policy using exponentiated temperature-scaled critic values. The temperature parameter η is derived based on the dual of the Lagrangian; for further details please refer to (Abdolmaleki et al., 2018).
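Concretely, for a categorical action space Equation 8 amounts to reweighting the current policy's probabilities by exponentiated, temperature-scaled Q-values and renormalising. A sketch (numerically stabilised, with illustrative names):

```python
import numpy as np


def nonparametric_policy(policy_probs, q_values, temperature):
    """q(y_t | x_t) proportional to pi_{theta_k}(y_t | x_t) * exp(Q_phi(x_t, y_t) / eta), as in Eq. 8."""
    logits = np.log(policy_probs + 1e-8) + q_values / temperature
    logits -= logits.max()          # subtract the max for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()
```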
Finally, we can fit a parametric policy to the non-parametric distribution q(yt |xt) by minimising their KL-divergence, subject to a trust-region constraint on the parametric policy:
\theta_{k+1} = \arg\min_\theta E_{x_t \sim B} [ KL(q(y_t | x_t) \,\|\, \pi_\theta(y_t | x_t)) ],
   s.t.   E_{x_t \sim B} [ KL( \pi_{\theta_{k+1}}(y_t | x_t) \,\|\, \pi_{\theta_k}(y_t | x_t) ) ] \leq \epsilon_M.   (9)
This optimisation problem can be solved via Lagrangian relaxation, with the Lagrangian multiplier M modulating the strength of the trust-region constraint. For further details and full derivations, please refer to (Abdolmaleki et al., 2018).
B.2 HELMS-MIX VIA RHPO
RHPO (Wulfmeier et al., 2020) follows a similar optimisation procedure to MPO, but extends it to mixture policies and multi-task settings. We do not exploit the multi-task capability in this work, but utilise RHPO to optimise the mixture policy in latent space, \pi(z_t | x_t) = \sum_{y_t} \pi(y_t | x_t) \pi(z_t | y_t, x_t). The Q-function Q_\phi(x_t, z_t) and parametric policy \pi_{\theta_k}(z_t | x_t) use the continuous latents z_t as actions instead of the categorical y_t. This is also in contrast to the original formulation of RHPO, which uses the environment's action space. Compared to MPO, the policy improvement stage of the non-parametric policy is minimally adapted to take into account the new mixture policy. The key difference is in the parametric policy update step, which optimises the following:
\theta_{k+1} = \arg\min_\theta E_{x_t \sim B} [ KL(q(z_t | x_t) \,\|\, \pi_\theta(z_t | x_t)) ],
   s.t.   E_{x_t \sim B} [ KL( \pi_{\theta_{k+1}}(y_t | x_t) \,\|\, \pi_{\theta_k}(y_t | x_t) ) + \sum_{y_t} KL( \pi_{\theta_{k+1}}(z_t | y_t, x_t) \,\|\, \pi_{\theta_k}(z_t | y_t, x_t) ) ] \leq \epsilon_M.   (10)
In other words, separate trust-region constraints are applied to a sum of KL-divergences: for the high-level categorical and for each of the mixture components. Following the original RHPO, we separate the single constraint into decoupled constraints that set a different bound \epsilon for the means, covariances, and categorical distribution (\epsilon_\mu, \epsilon_\sigma, and \epsilon_{cat}, respectively). This allows the optimiser to independently modulate how much the categorical distribution, component means, and component variances can change. For further details and full derivations, please refer to (Wulfmeier et al., 2020).
C ELBO DERIVATION AND INTUITIONS
We can compute the Evidence Lower Bound for the state-conditional action distribution, \log p(a_{1:T} | x_{1:T}) \geq ELBO, as follows:
ELBO = \log p(a_{1:T} | x_{1:T}) - KL(q(y_{0:T}, z_{1:T} | x_{1:T}) \,\|\, p(y_{0:T}, z_{1:T} | a_{1:T}, x_{1:T}))
     = E_{q(y_{0:T}, z_{1:T} | x_{1:T})} [ \log p(a_{1:T}, y_{0:T}, z_{1:T} | x_{1:T}) - \log q(y_{0:T}, z_{1:T} | x_{1:T}) ]
     = E_{q_{1:T}} [ \sum_{t=1}^{T} \log p(a_t | z_t, x_t) + \log p(z_t | y_t) + \log p(y_t | y_{t-1}) - \log q(z_t | y_t, x_t) - \log q(y_t | y_{t-1}, x_t) ]
     = \sum_{t=1}^{T} E_{q_{1:T}} [ \log p(a_t | z_t, x_t) - KL(q(z_t | y_t, x_t) \,\|\, p(z_t | y_t)) - KL(q(y_t | y_{t-1}, x_t) \,\|\, p(y_t | y_{t-1})) ]   (11)
We note that the first two terms in the expectation depend only on timestep t, so we can simplify and marginalise exactly over all discrete {y1:T }\yt. For the final term, we note that the KL at timestep t is constant with respect to yt (as it already marginalises over the whole distribution), and only depends on yt−1. Lastly, we will use sampling to approximate the expectation over zt. This yields the following:
ELBO = \sum_{t=1}^{T} E_{q(z_t | y_t, x_t)} [ \sum_{y_{0:T}} q(y_{0:T} | x_{1:T}) ( \log p(a_t | z_t, x_t) - KL(q(z_t | y_t, x_t) \,\|\, p(z_t | y_t)) - KL(q(y_t | y_{t-1}, x_t) \,\|\, p(y_t | y_{t-1})) ) ]
ELBO \approx \sum_{t=1}^{T} [ \sum_{y_t} q(y_t | x_{1:t}) ( \underbrace{\log p(a_t | \tilde{z}_t^{\{y_t\}}, x_t)}_{\text{per-component recon loss}} - \beta_z \underbrace{KL(q(z_t | y_t, x_t) \,\|\, p(z_t | y_t))}_{\text{per-component KL regulariser}} ) ]
     - \beta_y \sum_{t=1}^{T} [ \sum_{y_{t-1}} q(y_{t-1} | x_{1:t-1}) \underbrace{KL(q(y_t | y_{t-1}, x_t) \,\|\, p(y_t | y_{t-1}))}_{\text{discrete regulariser}} ]   (12)
where \tilde{z}_t^{\{y_t\}} \sim q(z_t | y_t, x_t), the coefficients \beta_y and \beta_z can be used to weight the KL terms, and the cumulative component probability q(y_t | x_{1:t}) can be computed iteratively as:
q(y_t | x_{1:t}) = \sum_{y_{t-1}} q(y_t | y_{t-1}, x_t) q(y_{t-1} | x_{1:t-1})   (13)
In other words, for each timestep t and each mixture component, we compute the latent sample and the corresponding action log-probability, and the KL-divergence between the component posterior and prior. This is then marginalised over all yt, with an additional KL over the categorical transitions.
Structuring the graphical model and ELBO in this form has a number of useful properties. First, the ELBO terms include an action reconstruction loss and KL term for each mixture component, scaled by the posterior probability of each component given the history. For a given state, this pressures the model to assign higher posterior probability to components that have low reconstruction cost or KL, which allows different components to specialise for different parts of the state space. Second, the categorical KL between posterior and prior categorical transition distributions is scaled by the
posterior probability of the previous component given history q(yt−1 |x1:t−1): this allows the relative probabilities of past skill transitions along a trajectory to be considered when regularising the current skill distribution. Finally, this formulation does not require any sampling or backpropagation through the categorical variable: starting from t = 0, the terms for each timestep can be efficiently computed by recursively updating the posterior over components given history (q(yt |x1:t)), and summing over all possible categorical values at each timestep.
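To illustrate the recursion, the sketch below marginalises the discrete skills exactly while carrying the cumulative posterior q(y_t | x_{1:t}) forward through time, as in Equations 12–13. Tensor shapes and names are our own assumptions; this is not the authors' implementation.

```python
import torch


def mixture_elbo(action_logp, kl_z, trans_posterior, kl_y, beta_z, beta_y):
    """Marginalised ELBO over one trajectory (T timesteps, K skills).

    action_logp:     [T, K]    log p(a_t | z_t^{y}, x_t) with z_t^{y} sampled per component
    kl_z:            [T, K]    KL(q(z_t | y, x_t) || p(z_t | y)) per component
    trans_posterior: [T, K, K] q(y_t = j | y_{t-1} = i, x_t)
    kl_y:            [T, K]    KL(q(y_t | y_{t-1} = i, x_t) || p(y_t | y_{t-1} = i)) per previous skill i
    """
    T, K = action_logp.shape
    q_y_hist = torch.full((K,), 1.0 / K)        # q(y_0): uniform initial skill prior
    elbo = torch.zeros(())
    for t in range(T):
        # Eq. 13: q(y_t | x_{1:t}) = sum_i q(y_t | y_{t-1}=i, x_t) q(y_{t-1}=i | x_{1:t-1})
        q_y_t = torch.einsum('i,ij->j', q_y_hist, trans_posterior[t])
        # Per-component reconstruction and KL terms, weighted by q(y_t | x_{1:t}).
        elbo = elbo + (q_y_t * (action_logp[t] - beta_z * kl_z[t])).sum()
        # Categorical regulariser, weighted by q(y_{t-1} | x_{1:t-1}).
        elbo = elbo - beta_y * (q_y_hist * kl_y[t]).sum()
        q_y_hist = q_y_t
    return elbo
```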
D ENVIRONMENT PARAMETERS
As discussed earlier in the paper, all experiments take place in a MuJoCo-based object manipulation environment using a Sawyer robot manipulator and three objects: red, green, and blue. The state variables in the Sawyer environment are shown in Table 3. All state variables are stacked for 3 frames for all agents. The object states are only provided to the mid-level and high-level for HeLMS runs, and the camera images are only used by the high- and mid-level controller in the vision transfer experiments (without object states).
The action space is also shown in Table 4. Since the action dimensions vary significantly in range, they are normalised to be between [−1, 1] for all methods during learning. When learning via RL, we apply domain randomisation to physics (but not visual randomisation), and a randomly sampled action delay of 0-2 timesteps. This is applied for all approaches, and ensures that we can learn a policy that is robust to small changes in the environment.
D.1 OBJECT SETS
As discussed in the main paper, we use the object sets defined by Lee et al. (2021), which are carefully designed to cover different object geometries and affordances, presenting different challenges for object interaction tasks. The object sets are shown in Figure 11 (the image has been taken directly from (Lee et al., 2021) for clarity), and feature both simulated and real-world versions; in this paper we focus on the simulated versions. As discussed in detail by (Lee et al., 2021), each object set has a different degree of difficulty and presents a different challenge to the task of stacking red-on-blue:
• In object set 1, the red object has slanted surfaces that make it difficult to grasp, while the blue object is an octagonal prism that can roll.
• In object set 2, the blue object has slanted surfaces, such that the red object will likely slide off unless the blue object is first reoriented.
• In object set 3, the red object is long and narrow, requiring a precise grasp and careful placement.
• Object set 4 is the easiest case with rectangular prisms for both red and blue.
• Object set 5 is also relatively easy, but the blue object has ten faces, meaning limited surface area for stacking.
For more details about the object sets and the rationale behind their design, we refer the reader to (Lee et al., 2021).
E NETWORK ARCHITECTURES AND HYPERPARAMETERS
The network architecture details and hyperparameters for HeLMS are shown in Table 5. Parameter sweeps were performed for the β coefficients during offline learning and the η coefficients during RL. Small sweeps were also performed for the RHPO parameters (refer to (Wulfmeier et al., 2020) for details), but these were found to be fairly insensitive. All other parameters were kept fixed, and used for all methods except where highlighted in the following subsections. All RL experiments were run with 3 seeds to capture variation in each method.
For network architectures, all experiments except for vision used simple 2-layer MLPs for the high- and low-level controllers, and for each mid-level mixture component. An input representation network was used to encode the inputs before passing them to the networks that were learned from scratch: i.e. the high-level for state-based experiments, and both high- and mid-level for vision (recall that while the state-based experiments can reuse the mid-level components conditioned on object state, the vision-based policy learned them from scratch and KL-regularised to the offline mid-level skills). The critic network was a 3-layer MLP, applied to the output of another input representation network (separate to the actor, but with the same architecture) with concatenated action.
F REWARDS
Throughout the experiments, we employ different reward functions for different tasks and to study the efficacy of our method in sparse versus dense reward scenarios.
Reward stages and primitive functions The reward functions for stacking and pyramid tasks use various reward primitives and staged rewards for completing sub-tasks. Each of these rewards lies within the range [0, 1].
These include:
• reach(obj): a shaped distance reward to bring the TCP to within a certain tolerance of obj.
• grasp(): a binary reward for triggering the gripper’s grasp sensor.
• close_fingers(): a shaped distance reward to bring the fingers inwards.
• lift(obj): shaped reward for lifting the gripper sufficiently high above obj.
• hover(obj1,obj2): shaped reward for holding obj1 above obj2.
• stack(obj1,obj2): a sparse reward, only provided if obj1 is on top of obj2 to within both a horizontal and vertical tolerance.
• above(obj,dist): shaped reward for being dist above obj, but anywhere horizontally.
• pyramid(obj1,obj2,obj3): a sparse reward, only provided if obj3 is on top of the point midway between obj1 and obj2, to within both a horizontal and vertical tolerance.
• place_near(obj1,obj2): sparse reward provided if obj1 is sufficiently near obj2.
Dense stacking reward The dense stacking reward contains a number of stages, where each stage represents a sub-task and has a maximum reward of 1. The stages are:
• reach(red) AND grasp(): Reach and grasp the red object.
• lift(red) AND grasp(): Lift the red object.
• hover(red,blue): Hover with the red object above the blue object.
• stack(red,blue): Place the red object on top of the blue one.
• stack(red,blue) AND above(red): Move the gripper above after a completed stack.
At each timestep, the latest stage to receive non-zero reward is considered to be the current stage, and all previous stages are assigned a reward of 1. The reward for this timestep is then obtained by summing rewards for all stages, and scaling by the number of stages, to ensure the highest possible reward on any timestep is 1.
Sparse staged stacking reward The sparse staged stacking reward is similar to the dense reward variant, but each stage is sparsified by only providing the reward for the stage once it exceeds a value of 0.95.
This scenario emulates an important real-world problem: that it may be difficult in certain cases to specify carefully shaped meaningful rewards, and it can often be easier to specify (sparsely) whether a condition (such as stacking) has been met.
Sparse stacking reward This fully sparse reward uses the stack(red,blue) function to provide reward only when conditions for stacking red on blue have been met.
Pyramid reward The pyramid-building reward uses a staged sparse reward, where each stage represents a sub-task and has a maximum reward of 1. If a stage has dense reward, it is sparsified by only providing the reward once it exceeds a value of 0.95. The stages are:
• reach(red) AND grasp(): Reach and grasp the red object.
• lift(red) AND grasp(): Lift the red object.
• hover(red,green): Hover with the red object above the green object (with a larger horizontal tolerance, as it does not need to be directly above).
• place_near(red,green): Place the red object sufficiently close to the green object.
• reach(blue) AND grasp(): Reach and grasp the blue object.
• lift(blue) AND grasp(): Lift the blue object.
• hover(blue,green) AND hover(blue,red): Hover with the blue object above the central position between red and green objects.
• pyramid(blue,red,green): Place the blue object on top to make a pyramid.
• pyramid(blue,red,green) AND above(blue): Move the gripper above after a completed stack.
At each timestep, the latest stage to receive non-zero reward is considered to be the current stage, and all previous stages are assigned a reward of 1. The reward for this timestep is then obtained by summing rewards for all stages, and scaling by the number of stages, to ensure the highest possible reward on any timestep is 1. | 1. What is the main contribution of the paper regarding latent variable controller models?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison with other works in the field?
3. How does the reviewer assess the clarity and organization of the paper's content, especially in problem formulation and experiment presentation?
4. Which specific questions or research directions does the reviewer suggest the paper should focus on more clearly?
5. What is the significance of the paper's findings regarding skill re-use and adaptation in reinforcement learning settings? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces a latent variable controller model that allows for reusable skill learning in behaviour cloning and reinforcement learning settings. The architecture comprises three stages or levels. At the highest level, input state information (e.g. proprioception, visual input, object state information) is passed through an MLP to produce a discrete latent skill-selection variable, incorporating a discrete latent transition model. This skill selection prior is then used in a mid-level network operating on the same input information to produce a continuous latent state, conditioned on this discrete latent variable. Finally, this latent state, together with the input state, is used by an actor network to produce actions.
The model is trained using a heuristic process depending on the deployment setting, with different elements frozen at different times, or through the use of a mixture agent that can be initialised from scratch or using previous skills (a similar approach is explored by Florensa et al., ICLR 2017). A KL regularisation on the discrete latent variable skill selector is used (also in prior work - Burke et al., CoRL 2019). Results show that this approach is effective on a range of tasks, and that skill re-use (as expected) outperforms learning from scratch. Probably the most interesting result is the comparison with a hierarchical behaviour cloning network and the experiments around skill transfer. The idea seems sound, although the presentation is in need of some improvement, and at times the paper suffers from a lack of clarity (in problem formulation and presentation of results). Moreover, some important work in the admittedly vast array of work on hierarchical models for RL and behaviour cloning is missed.
Review
Strengths:
Hierarchical policies are a great idea, and the proposed model seems like a sensible approach to integrate both discrete and continuous latent variables into a policy, in contrast to many existing approaches which use discrete latent indicator variables to trigger fixed policies.
Results show the proposed architecture seems to work, and I was particularly interested in the comparison with a hierarchical behaviour cloning model.
However, the most interesting part of this work lies around the RL phase of the work, where different levels of mixing are explored (completely freezing skills vs. allowing greater skill adaptation). This area is of particular interest as we move beyond "look we discovered some skills and re-used them" to more realistic online learning settings.
Weaknesses:
The paper is in some need of smoothing, and I found this a difficult read, despite being familiar with the field. In particular, I believe the paper would benefit from a clearer problem formulation (i.e. is the problem setting a two-stage learning-from-demonstration one followed by an RL phase that makes use of the initially learned skills, or is it a tabula rasa setting with skills gradually incorporated and re-used?). Initial sections suggest the former, but some experimental settings and baselines seem to indicate both. I understand that the architecture is very general, and the aim was to show it can be used in both settings with tweaks, but I think the paper would benefit greatly from making the different settings more clear in an expanded problem formulation.
I would also greatly appreciate a more structured experiments section, more clearly delineating the tasks and with a much more focused aim. As an example, the baselines are all designed to solve slightly different problems, and the broad set of experiments is not necessarily presented to make a cohesive argument; rather, they seem to aim to answer many separate questions. A potential option would be to restructure this section around the question of how much prior we need and how quickly we can allow skill adaptation, which is a very interesting research question.
Questions/ Comments:
There is a wealth of literature in hierarchical and reusable skill discovery for RL / continuous control that is missing from this work, in particular around switching nonlinear dynamical systems. See below for a small selection of work deserving mention.
This paper is particularly important, as it discusses a range of architectures to embed discrete latent variables for skill discovery and later re-use: Carlos Florensa, Yan Duan, Pieter Abbeel, "Stochastic Neural Networks for Hierarchical Reinforcement Learning" ICLR 2017
Other work on re-usable skill discovery using discrete latent variable models: Scott Niekum, Andrew Barto, "Clustering via Dirichlet Process Mixture Models for Portable Skill Discovery", Neurips 2011
P. Ranchod, B. Rosman and G. Konidaris, "Nonparametric Bayesian reward segmentation for skill discovery using inverse reinforcement learning," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, pp. 471-477, doi: 10.1109/IROS.2015.7353414.
Kipf, Thomas, et al. "Compile: Compositional imitation learning and execution." International Conference on Machine Learning. PMLR, 2019.
Hany Abdulsamad, Jan Peters, "Hierarchical Decomposition of Nonlinear Dynamics and Control for System Identification and Policy Distillation", Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:904-914, 2020.
D. Tanneberg, K. Ploeger, E. Rueckert and J. Peters, "SKID RAW: Skill Discovery From Raw Trajectories," in IEEE Robotics and Automation Letters, vol. 6, no. 3, pp. 4696-4703, July 2021, doi: 10.1109/LRA.2021.3068891.
Categorical regularisation in hierarchical models has been proposed previously in: Michael Burke, Yordan Hristov, Subramanian Ramamoorthy, "Hybrid system identification using switching density networks", Proceedings of the Conference on Robot Learning, PMLR 100:172-181, 2020.
See also: Zhe Dong, Bryan Seybold, Kevin Murphy, Hung Bui, "Collapsed Amortized Variational Inference for Switching Nonlinear Dynamical Systems", Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2638-2647, 2020.
Minor queries/ comments: I assume that the number of skills needs to be pre-specified and remains fixed?
Section 2.3: "Following most previous work, we ..." - citation needed. |
ICLR | Title
Learning transferable motor skills with hierarchical latent mixture policies
Abstract
For robots operating in the real world, it is desirable to learn reusable behaviours that can effectively be transferred and adapted to numerous tasks and scenarios. We propose an approach to learn abstract motor skills from data using a hierarchical mixture latent variable model. In contrast to existing work, our method exploits a three-level hierarchy of both discrete and continuous latent variables, to capture a set of high-level behaviours while allowing for variance in how they are executed. We demonstrate in manipulation domains that the method can effectively cluster offline data into distinct, executable behaviours, while retaining the flexibility of a continuous latent variable model. The resulting skills can be transferred and fine-tuned on new tasks, unseen objects, and from state to vision-based policies, yielding better sample efficiency and asymptotic performance compared to existing skilland imitation-based methods. We further analyse how and when the skills are most beneficial: they encourage directed exploration to cover large regions of the state space relevant to the task, making them most effective in challenging sparse-reward settings.
1 INTRODUCTION
Reinforcement learning is a powerful and flexible paradigm to train embodied agents, but relies on large amounts of agent experience, computation, and time, on each individual task. Learning each task from scratch is inefficient: it is desirable to learn a set of skills that can efficiently be reused and adapted to related downstream tasks. This is particularly pertinent for real-world robots, where interaction is expensive and data-efficiency is crucial. There are numerous existing approaches to learn transferable embodied skills, usually formulated as a two-level hierarchy with a high-level controller and low-level skills. These methods predominantly represent skills as being either continuous, such as goal-conditioned (Lynch et al., 2019; Pertsch et al., 2020b) or latent space policies (Haarnoja et al., 2018; Merel et al., 2019; Singh et al., 2021); or discrete, such as mixture or option-based methods (Sutton et al., 1999; Daniel et al., 2012; Florensa et al., 2017; Wulfmeier et al., 2021). Our goal is to combine these perspectives to leverage their complementary advantages.
We propose an approach to learn a three-level skill hierarchy from an offline dataset, capturing both discrete and continuous variations at multiple levels of behavioural abstraction. The model comprises a low-level latent-conditioned controller that can learn motor primitives, a set of continuous latent mid-level skills, and a discrete high-level controller that can compose and select among these abstract mid-level behaviours. Since the mid- and high-level form a mixture, we call our method Hierarchical Latent Mixtures of Skills (HeLMS). We demonstrate on challenging object manipulation tasks that our method can decompose a dataset into distinct, intuitive, and reusable behaviours. We show that these skills lead to improved sample efficiency and performance in numerous transfer scenarios: reusing skills for new tasks, generalising across unseen objects, and transferring from state to vision-based policies. Further analysis and ablations reveal that both continuous and discrete components are beneficial, and that the learned hierarchical skills are most useful in sparse-reward settings, as they encourage directed exploration of task-relevant parts of the state space.
Our main contributions are as follows:
• We propose a novel approach to learn skills at different levels of abstraction from an offline dataset. The method captures both discrete behavioural modes and continuous variation using a hierarchical mixture latent variable model.
• We present two techniques to reuse and adapt the learned skill hierarchy via reinforcement learning in downstream tasks, and perform extensive evaluation and benchmarking in different transfer settings: to new tasks and objects, and from state to vision-based policies.
• We present a detailed analysis to interpret the learned skills, understand when they are most beneficial, and evaluate the utility of both continuous and discrete skill representations.
2 RELATED WORK
A long-standing challenge in reinforcement learning is the ability to learn reusable motor skills that can be transferred efficiently to related settings. One way to learn such skills is via multi-task reinforcement learning (Heess et al., 2016; James et al., 2018; Hausman et al., 2018; Riedmiller et al., 2018), with the intuition that behaviors useful for a given task should aid the learning of related tasks. However, this often requires careful curation of the task set, where each skill represents a separate task. Some approaches avoid this by learning skills in an unsupervised manner using intrinsic objectives that often maximize the entropy of visited states while keeping skills distinguishable (Gregor et al., 2017; Eysenbach et al., 2019; Sharma et al., 2019; Zhang et al., 2020).
A large body of work explores skills from the perspective of unsupervised segmentation of repeatable behaviours in temporal data (Niekum & Barto, 2011; Ranchod et al., 2015; Krüger et al., 2016; Lioutikov et al., 2017; Shiarlis et al., 2018; Kipf et al., 2019; Tanneberg et al., 2021). Other works investigate movement or motor primitives that can be selected or sequenced together to solve complex manipulation or locomotion tasks (Mülling et al., 2013; Rueckert et al., 2015; Lioutikov et al., 2015; Paraschos et al., 2018; Merel et al., 2020; Tosatto et al., 2021; Dalal et al., 2021). Some of these methods also employ mixture models to jointly model low-level motion primitives and a high-level primitive controller (Muelling et al., 2010; Colomé & Torras, 2018; Pervez & Lee, 2018); the high-level controller can also be implicit and decentralised over the low-level primitives (Goyal et al., 2019).
Several existing approaches employ architectures in which the policy is comprised of two (or more) levels of hierarchy. Typically, a low-level controller represents the learned set of skills, and a high-level policy instructs the low-level controller via a latent variable or goal. Such latent variables can be discrete (Florensa et al., 2017; Wulfmeier et al., 2020) or continuous (Nachum et al., 2018; Haarnoja et al., 2018) and regularization of the latent space is often crucial (Tirumala et al., 2019). The latent variable can represent the behaviour for one timestep, for a fixed number of timesteps (Ajay et al., 2021), or options with different durations (Sutton et al., 1999; Bacon et al., 2017; Wulfmeier et al., 2021). One such approach that is particularly relevant (Florensa et al., 2017) learns a diverse set of skills, via a discrete latent variable that interacts multiplicatively with the state to enable continuous variation in a Stochastic Neural Network policy; this skill space is then transferred to locomotion tasks by learning a new categorical controller. Our method differs in a few key aspects: our proposed three-level hierarchical architecture explicitly models abstract discrete skills while allowing for temporal dependence and lower-level latent variation in their execution, enabling diverse object-centric behaviours in challenging manipulation tasks.
Our work is related to methods that learn robot policies from demonstrations (LfD, e.g. (Rajeswaran et al., 2018; Shiarlis et al., 2018; Strudel et al., 2020)) or more broadly from logged data (offline RL, e.g. (Wu et al., 2019; Kumar et al., 2020; Wang et al., 2020)). While many of these focus on learning single-task policies, several approaches learn skills offline that can be transferred online to new tasks (Merel et al., 2019; Lynch et al., 2019; Pertsch et al., 2020a; Ajay et al., 2021; Singh et al., 2021). These all train a two-level hierarchical model, with a high-level encoder that maps to a continuous latent space, and a low-level latent-conditioned controller. The high-level encoder can encode a whole trajectory (Pertsch et al., 2020a; 2021; Ajay et al., 2021); a short look-ahead state sequence (Merel et al., 2019); the current and final goal state (Lynch et al., 2019); or can even be simple isotropic Gaussian noise (Singh et al., 2021) that can be flexibly transformed by a flow-based low-level controller. At transfer time, a new high-level policy is learned from scratch: this can be more efficient with skill priors (Pertsch et al., 2020a) or temporal abstraction (Ajay et al., 2021).
HeLMS builds on this large body of work by explicitly modelling both discrete and continuous behavioural structure via a three-level skill hierarchy. We use similar information asymmetry to Neural Probabilistic Motor Primitives (NPMP) (Merel et al., 2019; 2020), conditioning the highlevel encoder on a short look-ahead trajectory. However HeLMS explicitly captures discrete modes of behaviour via the high-level controller, and learns an additional mid-level which is able to transfer abstract skills to downstream tasks, rather than learning a continuous latent policy from scratch.
3 METHOD
This paper examines a two-stage problem setup: an offline stage where a hierarchical skill space is learned from a dataset, and an online stage where these skills are transferred to a reinforcement learning setting. The dataset D comprises a set of trajectories, each a sequence of state-action pairs {xt,at}Tt=0. The model incorporates a discrete latent variable yt ∈ {1, . . . ,K} as a high-level skill selector (for a fixed number of skills K), and a mid-level continuous variable zt ∈ Rnz conditioned on yt which parameterises each skill. Marginally, zt is then a latent mixture distribution representing both a discrete set of skills and the variation in their execution. A sample of zt represents an abstract behaviour, which is then executed by a low-level controller p(at | zt,xt). The learned skill space can then be transferred to a reinforcement learning agent π in a Markov Decision Process defined by tuple {S,A, T ,R, γ}: these represent the state, action, and transition distributions, reward function, and discount factor respectively. When transferring, we train a new high-level controller that acts either at the level of discrete skills yt or continuous zt, and freeze lower levels of the policy.
We explain our method in detail in the following sections.
3.1 LATENT MIXTURE SKILL SPACES FROM OFFLINE DATA
Our method employs the generative model in Figure 1a. As shown, the state inputs can be different for each level of the hierarchy, but to keep notation uncluttered, we refer to all state inputs as xt and the specific input can be inferred from context. The joint distribution of actions and latents over a trajectory is decomposed into a latent prior p(y0:T , z1:T ) and a low-level controller p(at | zt,xt):
p(a_{1:T}, y_{0:T}, z_{1:T} | x_{1:T}) = p(y_{0:T}, z_{1:T}) \prod_{t=1}^{T} p(a_t | z_t, x_t),
p(y_{0:T}, z_{1:T}) = p(y_0) \prod_{t=1}^{T} p(y_t | y_{t-1}) p(z_t | y_t).    (1)
Intuitively, the categorical variable yt can capture discrete modes of behaviour, and the continuous latent zt is conditioned on this to vary the execution of each behaviour. Thus, zt follows a mixture
distribution, encoding all the relevant information on desired abstract behaviour for the low-level controller p(at | zt,xt). Since each categorical latent yt is dependent on yt−1, and zt is only dependent on yt, this prior can be thought of as a Hidden Markov model over the sequence of z1:T .
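To make the structure of Equation 1 concrete, the following numpy sketch performs ancestral sampling through the three levels (skill transition, skill parameterisation, low-level action); the transition matrix, skill means, and low-level controller are random stand-ins rather than learned components.

```python
import numpy as np

rng = np.random.default_rng(0)
K, nz, na, T = 5, 8, 4, 10          # number of skills, latent / action dims, horizon

P0 = np.full(K, 1.0 / K)            # uniform initial skill prior p(y_0)
P = rng.dirichlet(np.ones(K), K)    # stand-in skill transition prior p(y_t | y_{t-1})
mu_z = rng.normal(size=(K, nz))     # stand-in per-skill means for p(z_t | y_t)

def low_level(z, x):
    """Stand-in for the low-level controller p(a_t | z_t, x_t)."""
    return np.tanh(z[:na] + 0.1 * x[:na])

y = rng.choice(K, p=P0)
x = np.zeros(nz)                     # placeholder proprioceptive state
for t in range(T):
    y = rng.choice(K, p=P[y])                    # discrete skill transition
    z = mu_z[y] + 0.1 * rng.normal(size=nz)      # continuous skill parameterisation
    a = low_level(z, x)                          # action from the low-level controller
    x = x + 0.01 * np.concatenate([a, np.zeros(nz - na)])  # toy state update
print(a)
```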
To perform inference over the latent variables, we introduce the variational approximation:
q(y_{0:T}, z_{1:T} | x_{1:T}) = p(y_0) \prod_{t=1}^{T} q(y_t | y_{t-1}, x_t) q(z_t | y_t, x_t)    (2)
Here, the selection of a skill yt ∼ q(yt |yt−1,xt) is dependent on that of the previous timestep (allowing for temporal consistency), as well as the input. The mid-level skill is then parameterised by zt ∼ q(zt |yt,xt) based on the chosen skill and current input. p(y0) and p(yt |yt−1) model a skill prior and skill transition prior respectively, while p(zt |yt) represents a skill parameterisation prior to regularise each mid-level skill. While all of these priors can be learned in practice, we only found it necessary to learn the transition prior, with a uniform categorical for the initial skill prior and a simple fixed N (0, I) prior for p(zt |yt).
Training via the Evidence Lower Bound The proposed model contains a number of components with trainable parameters: the prior parameters ψ = {ψa, ψy} for the low-level controller and categorical transition prior respectively; and posterior parameters φ = {φy, φz} for the high-level controller and mid-level skills. For a trajectory {x1:T ,a1:T } ∼ D, we can compute the Evidence Lower Bound for the state-conditional action distribution, ELBO ≤ p(a1:T |x1:T ), as follows:
ELBO = E_{q_\phi(y_{0:T}, z_{1:T} | x_{1:T})} [ \log p_\psi(a_{1:T}, y_{0:T}, z_{1:T} | x_{1:T}) - \log q_\phi(y_{0:T}, z_{1:T} | x_{1:T}) ]
     \approx \sum_{t=1}^{T} \sum_{y_t} q(y_t | x_{1:t}) [ \log p_{\psi_a}(a_t | \tilde{z}_t^{(y_t)}, x_t)   (per-component action reconstruction)
                                                          - \beta_z KL( q_{\phi_z}(z_t | y_t, x_t) || p(z_t | y_t) ) ]   (per-component KL regulariser)
     - \beta_y \sum_{t=1}^{T} \sum_{y_{t-1}} q(y_{t-1} | x_{1:t-1}) KL( q_{\phi_y}(y_t | y_{t-1}, x_t) || p_{\psi_y}(y_t | y_{t-1}) )   (categorical regulariser)    (3)
where \tilde{z}_t^{(y_t)} \sim q(z_t | y_t, x_t). The coefficients \beta_y and \beta_z can be used to weight the KL terms, and the cumulative component probability q(y_t | x_{1:t}) can be computed iteratively as q(y_t | x_{1:t}) = \sum_{y_{t-1}} q_{\phi_y}(y_t | y_{t-1}, x_t) q(y_{t-1} | x_{1:t-1}). In other words, for each timestep t and each mixture component, we compute the latent sample and the corresponding action log-probability, and the KL-divergence between the component posterior and prior. This is then marginalised over all y_t, with an additional KL over the categorical transitions. For more details, see Appendix C.
Information-asymmetry As noted in previous work (Tirumala et al., 2019; Galashov et al., 2019), hierarchical approaches often benefit from information-asymmetry, with higher levels seeing additional context or task-specific information. This ensures that the high-level remains responsible for abstract, task-related behaviours, while the low-level executes simpler motor primitives. We employ similar techniques in HeLMS: the low-level inputs xLL comprise the proprioceptive state of the embodied agent; the mid-level inputs xML also include the poses of objects in the environment; and the high-level xHL concatenates both object and proprioceptive state for a short number of lookahead timesteps. The high- and low-level are similar to (Merel et al., 2019), with the low-level controller enabling motor primitives based on proprioceptive information, and the high-level using the lookahead information to provide additional context regarding behavioural intent when specifying which skill to use. The key difference is the categorical high-level and the additional mid-level, with which HeLMS can learn more object-centric skills and transfer these to downstream tasks.
Network architectures The architecture and information flow in HeLMS are shown in Figure 1b. The high-level network contains a gated head, which uses the previous skill yt−1 to index into one of K categorical heads, each of which specify a distribution over yt. For a given yt, the corresponding mid-level skill network is selected and used to sample a latent action zt, which is then used as input for the latent-conditioned low-level controller, which parameterises the action distribution. The skill transition prior p(yt |yt−1) is also learned, and is parameterised as a linear softmax layer which takes in a one-hot representation of yt−1 and outputs the distribution over yt. All components are trained end-to-end via the objective in Equation 3.
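As a rough illustration of the gated head described above, the sketch below indexes into one of K linear-softmax heads using the previous skill y_{t-1}; the weights are random placeholders rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
K, dx = 5, 16                          # number of skills, encoded input dimension
W = rng.normal(size=(K, K, dx)) * 0.1  # one linear head per previous skill y_{t-1}
b = np.zeros((K, K))

def gated_head(y_prev, x_enc):
    """Distribution over y_t: the previous skill selects which head to use."""
    logits = W[y_prev] @ x_enc + b[y_prev]
    p = np.exp(logits - logits.max())   # softmax with max-subtraction for stability
    return p / p.sum()

print(gated_head(y_prev=2, x_enc=rng.normal(size=dx)))
```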
3.2 REINFORCEMENT LEARNING WITH RELOADED SKILLS
Once learned, we propose two methods to transfer the hierarchical skill space to downstream tasks. Following previous work (e.g. (Merel et al., 2019; Singh et al., 2021)), we freeze the low-level controller p(at | zt,xt), and learn a policy for either the continuous (zt) or discrete (yt) latent.
Categorical agent One simple and effective technique is to additionally freeze the mid-level components q(zt |yt,xt), and learn a categorical high-level controller π(yt |xt) for the downstream task. The learning objective is given by:
J = E_\pi [ \sum_t \gamma^t ( r_t - \eta_y KL( \pi(y_t | x_t) || \pi_0(y_t | x_t) ) ) ],    (4)
where the standard discounted return objective in RL is augmented by an additional term performing KL-regularisation to some prior π0 scaled by coefficient ηy . This could be any categorical distribution such as the previously learned transition prior p(yt |yt−1), but in this paper we regularise to the uniform categorical prior to encourage diversity. While any RL algorithm could be used to optimize π(yt |xt), in this paper we use MPO (Abdolmaleki et al., 2018) with a categorical action distribution (see Appendix B for details). We hypothesise that this method improves sample efficiency by converting a continuous control problem into a discrete abstract action space, which may also aid in credit assignment. However, since both the mid-level components and low-level are frozen, it can limit flexibility and plasticity, and also requires that all of the mid- and low-level input states are available in the downstream task. We call this method HeLMS-cat.
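One simple way to realise the KL term in Equation 4 is to subtract it from the per-step reward before computing returns; the sketch below does this for a categorical policy regularised towards a uniform prior (the coefficient value is an assumption).

```python
import numpy as np

def kl_categorical(p, q):
    """KL(p || q) for categorical distributions given as probability vectors."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * (np.log(p[mask]) - np.log(q[mask]))))

def regularised_reward(r, pi_y, eta_y=0.01):
    """Per-step reward minus eta_y * KL(pi(y_t|x_t) || uniform prior), as in Equation 4."""
    pi_y = np.asarray(pi_y, dtype=float)
    uniform = np.full_like(pi_y, 1.0 / pi_y.size)
    return r - eta_y * kl_categorical(pi_y, uniform)

print(regularised_reward(1.0, [0.7, 0.1, 0.1, 0.05, 0.05]))
```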
Mixture agent A more flexible method of transfer is to train a latent mixture policy, \pi(z_t | x_t) = \sum_{y_t} \pi(y_t | x_t) \pi(z_t | y_t, x_t). In this case, the learning objective is given by:
J = E_\pi [ \sum_t \gamma^t ( r_t - \eta_y KL( \pi(y_t | x_t) || \pi_0(y_t | x_t) ) - \eta_z \sum_{y_t} KL( \pi(z_t | y_t, x_t) || \pi_0(z_t | y_t, x_t) ) ) ],    (5)
where in addition to the categorical prior, we also regularise each mid-level skill to a corresponding prior π0(zt |yt,xt). While the priors could be any policies, we set them to be the skill posteriors q(zt |yt,xt) learned offline, to ensure the mixture components remain close to the pre-learned skills. This is related to (Tirumala et al., 2019), which also applies KL-regularisation at multiple levels of a hierarchy. While the high-level controller π(yt |xt) is learned from scratch, the mixture components can also be initialised to q(zt |yt,xt), to allow for initial exploration over the space of skills. Alternatively, the mixture components can use different inputs, such as vision: this setup allows vision-based skills to be learned efficiently by regularising to state-based skills learned offline. We optimise this using RHPO (Wulfmeier et al., 2020), which employs a similar underlying optimisation to MPO for mixture policies (see Appendix B for details). We call this HeLMS-mix.
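The per-component KL penalty in Equation 5 can be computed in closed form when the mixture components and their offline priors are diagonal Gaussians; the sketch below shows this computation with placeholder parameters (it illustrates the penalty term only, not the full RHPO update).

```python
import numpy as np

def kl_diag_gauss(mu_p, logstd_p, mu_q, logstd_q):
    """KL(N(mu_p, std_p^2) || N(mu_q, std_q^2)) for diagonal Gaussians."""
    var_p, var_q = np.exp(2 * logstd_p), np.exp(2 * logstd_q)
    return float(np.sum(logstd_q - logstd_p + (var_p + (mu_p - mu_q) ** 2) / (2 * var_q) - 0.5))

def mixture_kl_penalty(components, priors, eta_z=1.0):
    """Sum over components of KL(pi(z|y,x) || pi_0(z|y,x)), as in Equation 5."""
    return eta_z * sum(kl_diag_gauss(*c, *p) for c, p in zip(components, priors))

# Two illustrative components, the second drifting slightly from its offline prior.
comp = [(np.zeros(3), np.zeros(3)), (np.ones(3) * 0.2, np.zeros(3))]
prior = [(np.zeros(3), np.zeros(3)), (np.zeros(3), np.zeros(3))]
print(mixture_kl_penalty(comp, prior))
```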
4 EXPERIMENTS
Our experiments focus on the following questions: (1) Can we learn a hierarchical latent mixture skill space of distinct, interpretable behaviours? (2) How do we best reuse this skill space to improve sample efficiency and performance on downstream tasks? (3) Can the learned skills transfer effectively to multiple downstream scenarios: (i) different objects; (ii) different tasks; and (iii) different modalities such as vision-based policies? (4) How exactly do these skills aid learning of downstream manipulation tasks? Do they aid exploration? Are they useful in sparse or dense reward scenarios?
4.1 EXPERIMENTAL SETUP
Environment and Tasks We focus on manipulation tasks, using a MuJoCo-based environment with a single Sawyer arm, and three objects coloured red, green, and blue. We follow the challenging object stacking benchmark of Lee et al. (2021), which specifies five object sets (Figure 2), carefully designed to have diverse geometries and present different challenges for a stacking agent. These range from simple rectangular objects (object set 4), to geometries such as slanted faces (sets 1 and 2) that make grasping or stacking the objects more challenging. This environment allows us
to systematically evaluate generalisation of manipulation behaviours for different tasks interacting with geometrically different objects. For further information, we refer the reader to Appendix D.1 or to (Lee et al., 2021). Details of the rewards for the different tasks are also provided in Appendix F.
Datasets To evaluate our approach and baselines in the manipulation settings, we use two datasets:
• red_on_blue_stacking: this data is collected by an agent trained to stack the red object on the blue object and ignore the green one, for the simplest object set, set4.
• all_pairs_stacking: similar to the previous case, but with all six pairwise stacking combinations of {red, green, blue}, and covering all of the five object sets.
Baselines For evaluation in transfer scenarios, we compare HeLMS with a number of baselines:
• From scratch: We learn the task from scratch with MPO, without an offline learning phase.
• NPMP+KL: We compare against NPMP (Merel et al., 2019), which is the most similar skill-based approach in terms of information-asymmetry and policy conditioning. We make some small changes to the originally proposed method, and also apply additional KL-regularisation to the latent prior: we found this to improve performance significantly in our experiments. For more details and an ablation, see Appendix A.2.
• Behaviour Cloning (BC): We apply behaviour cloning to the dataset, and fine-tune this policy via MPO on the downstream task. While the actor is initialised to the solution obtained via BC, the critic still needs to be learned from scratch.
• Hierarchical BC: We evaluate a hierarchical variant of BC with a similar latent space z to NPMP using a latent Gaussian high-level controller. However, rather than freezing the low-level and learning just a high-level policy, Hierarchical BC fine-tunes the entire model.
• Asymmetric actor-critic: For state-to-vision transfer, HeLMS uses prior skills that depend on object states to learn a purely vision-based policy. Thus, we also compare against a variant of MPO with an asymmetric actor-critic (Pinto et al., 2017) setup, which uses object states differently: to speed up learning of the critic, while still learning a vision-based actor.
4.2 LEARNING SKILLS FROM OFFLINE DATA
We first aim to understand whether we can learn a set of distinct and interpretable skills from data (question (1)). For this, we train HeLMS on the red_on_blue_stacking dataset with 5 skills.
(Panel labels: (a) Set 1, (b) Set 2, (c) Set 3, (d) Set 5.)
Figure 5: (a) Performance on pyramid task; and (b) image sequence showing episode rollout from a learned solution on this task (left-to-right, top-to-bottom).
Figure 6: Performance for vision-based stacking.
Figure 3a shows some example episode rollouts when the learned hierarchical agent is executed in the environment, holding the high-level categorical skill constant for an episode. Each row represents a different skill component, and the resulting behaviours are both distinct and diverse: for example, a lifting skill (row 1) where the gripper closes and rises up, a reaching skill (row 2) where the gripper moves to the red object, or a grasping skill (row 3) where the gripper lowers and closes its fingers. Furthermore, without explicitly encouraging this, the emergent skills capture temporal consistency: Figure 3b shows the learned prior p(yt |yt−1) (visualised as a transition matrix) assigns high probability along the diagonal (remaining in the same skill). Finally, Figure 3c demonstrates that all skills are used, without degeneracy.
4.3 TRANSFER TO DOWNSTREAM TASKS
Generalising to different objects We next evaluate whether the previously learned skills (i.e. trained on the simple objects in set 4) can effectively transfer to more challenging object interaction scenarios: the other four object sets proposed by (Lee et al., 2021). The task uses a sparse staged reward, with reward incrementally given after completing each sub-goal of the stacking task. As shown in Figure 4, both variants of HeLMS learn significantly faster than baselines on the different object sets. Compared to the strongest baseline (NPMP), HeLMS reaches better average asymptotic performance (and much lower variance) on two object sets (1 and 3), performs similarly on set 5, and does poorer on object set 2. The performance on object set 2 potentially highlights a trade-off between incorporating higher-level abstract behaviours and maintaining low-level flexibility: this object set often requires a reorientation of the bottom object due to its slanted faces, a behaviour that is not common in the offline dataset, which might require greater adaptation of mid- and low-level skills. This is an interesting investigation we leave for future work.
Compositional reuse of skills To evaluate whether the learned skills are composable for new tasks, we train HeLMS on the all_pairs_stacking dataset with 10 skills, and transfer to a pyramid task. In this setting, the agent has to place the red object adjacent to the green object, and stack the blue object on top to construct a pyramid. The task is specified via a sparse staged reward
for each stage or sub-task: reaching, grasping, lifting, and placing the red object, and subsequently the blue object. In Figure 5(a), we plot the performance of both variants of our approach, as well as NPMP and MPO; we omit the BC baselines as this involves transferring to a fundamentally different task. Both HeLMS-mix and HeLMS-cat reach a higher asymptotic performance than both NPMP and MPO, indicating that the learned skills can be better transferred to a different task. We show an episode rollout in Figure 5(b) in which the learned agent can successfully solve the task.
From state to vision-based policies While our method learns skills from proprioception and object state, we evaluate whether these skills can be used to more efficiently learn a vision-based policy. This is invaluable for practical real-world scenarios, since the agent acts from pure visual observation at test time without requiring privileged and often difficult-to-obtain object state information.
We use the HeLMS-mix variant to transfer skills to a vision-based policy, by reusing the low-level controller, initialising a new high-level controller and mid-level latent skills (with vision and proprioception as input), and KL-regularising these to the previously learned state-based skills. While the learned policy is vision-based, this KL-regularisation still assumes access to object states during training. For a fair comparison, we additionally compare our approach with a version of MPO using an asymmetric critic (Pinto et al., 2017), which exploits object state information instead of vision in the critic, and also use this for HeLMS. As shown in Figure 6, learning a vision-based policy with MPO from scratch is very slow and computationally intensive, but an asymmetric critic significantly speeds up learning, supporting the empirical findings of Pinto et al. (2017). However, HeLMS once again demonstrates better sample efficiency, and reaches slightly better asymptotic performance. We note that this uses the same offline model as for the object generalisation experiments, showing that the same state-based skill space can be reused in numerous settings, even for vision-based tasks.
4.4 WHERE AND HOW CAN HIERARCHICAL SKILL REUSE BE EFFECTIVE?
Sparse reward tasks We first investigate how HeLMS performs for different rewards: a dense shaped reward, the sparse staged reward from the object generalisation experiments, and a fully sparse reward that is only provided after the agent stacks the object. For this experiment, we use the skill space trained on red_on_blue_stacking and transfer it to the same RL task of stacking on object set 4. The results are shown in Figure 7. With a dense reward (and no object transfer required), all of the approaches can successfully learn the task. With the sparse staged reward, the baselines all plateau at a lower performance, with the exception of NPMP, as previously discussed. However, for the challenging fully-sparse scenario, HeLMS is the only method that achieves nonzero reward. This neatly illustrates the benefit of the proposed hierarchy of skills: it allows for directed exploration which ensures that even sparse rewards can be encountered. This is consistent with observations from prior work in hierarchical reinforcement learning (Florensa et al., 2017; Nachum et al., 2019), and we next investigate this claim in more depth for our manipulation setting.
Exploration To measure whether the proposed approach leads to more directed exploration, we record the average coverage in state space at the start of RL (i.e. zero-shot transfer). This is computed as the variance (over an episode) of the state xt, separated into three interpretable groups:
Method      Reward: Dense   Reward: Staged   Coverage: Joints   Coverage: Grasp   Coverage: Objects
MPO         3.16            0.0              8.72               0.004             1.21
NPMP        3.67            0.0              3.43               0.0               1.45
BC          31.68           0.004            4.22               0.05              1.52
Hier. BC    16.42           0.004            2.61               0.04              1.31
HeLMS       20.46           0.05             2.98               1.10              1.61
(State-coverage values are ×10^-2.)
Table 1: Analysis of zero-shot exploration at the start of RL, in terms of reward and state coverage (variance over an episode of different subsets of the agent’s state). Results are averaged over 1000 episodes.
Figure 8: Ablation for continuous and discrete components during offline learning, when transferring to the (a) easy case (object set 4) and (b) hard case (object set 3).
joints (angles and velocity), grasp (a simulated grasp sensor), and object poses. We also record the total reward (dense and sparse staged). The results are reported in Table 1. While all approaches achieve some zero-shot dense reward (with BC the most effective), HeLMS1 receives a sparse staged reward an order of magnitude greater. Further, in this experiment we found it was able to achieve the fully sparse reward (stacked) in one episode. Analysing the state coverage results, while other methods are able to cover the joint space more (e.g. by randomly moving the joints), HeLMS is nearly two orders of magnitude higher for grasp states. This indicates the utility of hierarchical skills: by acting over the space of abstract skills rather than low-level actions, HeLMS performs directed exploration and targets particular states of interest, such as grasping an object.
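The state-coverage statistic used here (per-episode variance of interpretable state subsets, averaged over episodes) could be computed roughly as in the sketch below; the group indices and episode data are placeholders.

```python
import numpy as np

def state_coverage(episodes, groups):
    """episodes: list of [T, state_dim] arrays; groups: dict name -> list of state indices.
    Returns the mean (over episodes and dimensions) per-episode variance for each group."""
    coverage = {}
    for name, idx in groups.items():
        per_episode = [np.var(ep[:, idx], axis=0).mean() for ep in episodes]
        coverage[name] = float(np.mean(per_episode))
    return coverage

rng = np.random.default_rng(0)
eps = [rng.normal(size=(200, 10)) for _ in range(5)]          # toy episodes
print(state_coverage(eps, {"joints": list(range(7)), "grasp": [7], "objects": [8, 9]}))
```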
4.5 ABLATION STUDIES
Capturing continuous and discrete structure To evaluate the benefit of both continuous and discrete components, we train our method with a fixed variance of zero for each latent component (i.e. ‘discrete-only’) and transfer to the stacking task with sparse staged reward in an easy case (object set 4) and hard case (object set 3), as shown in Figure 8(a) and (b). We also evaluate the ‘continuous-only’ case with just a single Gaussian to represent the high- and mid-level skills: this is equivalent to the NPMP+KL baseline. We observe that the discrete component alone leads to improved sample efficiency in both cases, but modelling both discrete and continuous latent behaviours makes a significant difference in the hard case. In other words, when adapting to challenging objects, it is important to capture discrete skills, but allow for latent variation in how they are executed.
KL-regularisation We also perform an ablation for KL-regularisation during the offline phase (via βz) and online RL (via ηz), to gauge the impact on transfer; see Appendix A.1 for details.
5 CONCLUSION
We present HeLMS, an approach to learn transferable and reusable skills from offline data using a hierarchical mixture latent variable model. We analyse the learned skills to show that they effectively cluster data into distinct, interpretable behaviours. We demonstrate that the learned skills can be flexibly transferred to different tasks, unseen objects, and to different modalities (such as from state to vision). Ablation studies indicate that it is beneficial to model both discrete modes and continuous variation in behaviour, and highlight the importance of KL-regularisation when transferring to RL and fine-tuning the entire mixture of skills. We also perform extensive analysis to understand where and how the proposed skill hierarchy can be most useful: we find that it is particularly invaluable in sparse reward settings due to its ability to perform directed exploration.
There are a number of interesting avenues for future work. While our model demonstrated temporal consistency, it would be useful to more actively encourage and exploit this for sample-efficient transfer. It would also be useful to extend this work to better fine-tune lower level behaviours, to allow for flexibility while exploiting high-level behavioural abstractions.
1Note that HeLMS-cat and HeLMS-mix are identical for this analysis: at the start of reinforcement learning, both variants transfer the mid-level skills while initialising a new high-level controller.
ACKNOWLEDGMENTS
The authors would like to thank Coline Devin for detailed comments on the paper and for generating the all_pairs_stacking dataset. We would also like to thank Alex X. Lee and Konstantinos Bousmalis for help with setting up manipulation experiments. We are also grateful to reviewers for their feedback.
A ADDITIONAL EXPERIMENTS
A.1 ABLATIONS FOR KL-REGULARISATION
In these experiments, we investigate the effect of KL-regularisation on the mid-level components, both for the offline learning phase (regularising each component to p(zt | yt) = N(0, I) via coefficient βz), and the online reinforcement learning stage via HeLMS-mix (regularising each component to the mid-level skills learned offline, via coefficient ηz). The results are reported in Figure 9, where each plot represents a different setting for offline KL-regularisation (either regularisation to N(0, I) with βz = 0.01, or no regularisation with βz = 0) and a different transfer case (the easy case of transferring to object set 4, or the hard case of transferring to object set 3). Each plot shows the downstream performance when varying the strength of KL-regularisation during RL via coefficient ηz. The HeLMS-cat approach represents the extreme case where the skills are entirely frozen (i.e. full regularisation).
The results suggest some interesting properties of the latent skill space based on regularisation. When regularising the mid-level components to the N (0, I) prior, it is important to regularise during online RL; this is especially true for the hard transfer case, where HeLMS-cat performs much better, and the performance degrades significantly with lower regularisation values. However, when removing mid-level regularisation during offline learning, the method is insensitive to regularisation during RL over the entire range evaluated, from 0.01 to 100.0. We conjecture that with mid-level skills regularised to N (0, I), the different mid-level skills are drawn closer together and occupy a more compact region in latent space, such that KL-regularisation is necessary during RL for a skill to avoid drifting and overlapping with the latent distribution of other skills (i.e. skill degeneracy). In contrast, without offline KL-regularisation, the skills are free to expand and occupy more distant regions of the latent space, rendering further regularisation unnecessary during RL. Such latent space properties could be further analysed to improve learning and transfer of skills; we leave this as an interesting direction for future work.
A.2 NPMP ABLATION
The Neural Probabilistic Motor Primitives (NPMP) work (Merel et al., 2019) presents a strong baseline approach to learning transferable motor behaviours, and we run ablations to ensure a fair comparison to the strongest possible result. As discussed in the main text, NPMP employs a Gaussian high-level latent encoder with an AR(1) prior in the latent space. We also try a fixed N(0, I) prior (this is equivalent to an AR(1) prior with a coefficient of 0, so can be considered a hyperparameter choice). Since our method benefits from KL-regularisation during RL, we apply this to NPMP as well.
As shown in Figure 10, we find that both changes lead to substantial improvements in the manipulation domain, on all five object sets. Consequently, in our main experiments, we report results with the best variant, using a N(0, I) prior with KL-regularisation during RL.
B REINFORCEMENT LEARNING WITH MPO AND RHPO
As discussed in Section 3.2, the hierarchy of skills is transferred to RL in two ways: HeLMS-cat, which learns a new high-level categorical policy \pi(y_t | x_t) via MPO (Abdolmaleki et al., 2018); or HeLMS-mix, which learns a mixture policy \pi(z_t | x_t) = \sum_{y_t} \pi(y_t | x_t) \pi(z_t | y_t, x_t) via RHPO (Wulfmeier et al., 2020). We describe the optimisation for both of these cases in the following subsections. For clarity of notation, we omit the additional KL-regularisation terms introduced in Section 3.2 and describe just the base methods of MPO and RHPO when applied to the RL setting in this paper. These KL-terms are incorporated as additional loss terms in the policy improvement stage.
B.1 HELMS-CAT VIA MPO
Maximum a posteriori Policy Optimisation (MPO) is an Expectation-Maximisation-based algorithm that performs off-policy updates in three steps: (1) updating the critic; (2) creating a non-parametric intermediate policy by weighting sampled actions using the critic; and (3) updating the parametric policy to fit the critic-reweighted non-parametric policy, with trust region constraints to improve stability. We detail each of these steps below. Note that while the original MPO operates in the environment’s action space, we use it here for the high-level controller, to set the categorical variable yt.
Policy evaluation First, the critic is updated via a TD(0) objective as:
\min_\theta L(\theta) = E_{x_t, y_t \sim B} [ ( Q_T - Q_\phi(x_t, y_t) )^2 ],    (6)
Here, Q_T = r_t + \gamma E_{x_{t+1}, y_{t+1}} [ Q'(x_{t+1}, y_{t+1}) ] is the 1-step target, with the state transition (x_t, y_t, x_{t+1}) returned from the replay buffer B, and the next action sampled from y_{t+1} \sim \pi'(\cdot | x_{t+1}). \pi' and Q' are target networks for the policy and the critic, used to stabilise learning.
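A minimal sketch of this TD(0) critic loss (Equation 6) on a batch of transitions is shown below; the arrays stand in for critic outputs, target-network evaluations, and rewards.

```python
import numpy as np

def td0_critic_loss(q, q_target_next, r, gamma=0.99):
    """Squared TD(0) error: (r + gamma * E[Q'(x', y')] - Q(x, y))^2, averaged over a batch.

    q:             Q(x_t, y_t) for the sampled transitions, shape [B]
    q_target_next: E_{y_{t+1} ~ pi'}[Q'(x_{t+1}, y_{t+1})], shape [B]
    r:             rewards, shape [B]
    """
    target = r + gamma * q_target_next   # bootstrapped 1-step target (held fixed in practice)
    return float(np.mean((target - q) ** 2))

rng = np.random.default_rng(0)
print(td0_critic_loss(rng.normal(size=32), rng.normal(size=32), rng.normal(size=32)))
```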
Policy improvement Next, we proceed with the first step of policy improvement by constructing an intermediate non-parametric policy q(yt|xt), and optimising the following constrained objective:
\max_q J(q) = E_{y_t \sim q,\, x_t \sim B} [ Q_\phi(x_t, y_t) ],   s.t.   E_{x_t \sim B} [ KL( q(\cdot | x_t) || \pi_{\theta_k}(\cdot | x_t) ) ] \le \epsilon_E,    (7)
where \epsilon_E defines a bound on the KL divergence between the non-parametric and parametric policies at the current learning step k. This constrained optimisation problem has the following closed-form solution:
q(y_t | x_t) \propto \pi_{\theta_k}(y_t | x_t) \exp( Q_\phi(x_t, y_t) / \eta ).    (8)
In other words, this step constructs an intermediate policy which reweights samples from the previous policy using exponentiated temperature-scaled critic values. The temperature parameter \eta is derived based on the dual of the Lagrangian; for further details please refer to (Abdolmaleki et al., 2018).
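For a categorical policy, the non-parametric policy of Equation 8 reduces to a temperature-scaled softmax reweighting of the current policy by the critic values, e.g.:

```python
import numpy as np

def nonparametric_policy(pi, q_values, eta=1.0):
    """q(y|x) proportional to pi(y|x) * exp(Q(x, y) / eta), for a categorical policy."""
    w = pi * np.exp((q_values - q_values.max()) / eta)   # max-subtraction for numerical stability
    return w / w.sum()

pi = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
q_values = np.array([0.1, 0.5, 0.3, 2.0, -1.0])
print(nonparametric_policy(pi, q_values, eta=0.5))
```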
Finally, we can fit a parametric policy to the non-parametric distribution q(yt |xt) by minimising their KL-divergence, subject to a trust-region constraint on the parametric policy:
\theta_{k+1} = \arg\min_\theta E_{x_t \sim B} [ KL( q(y_t | x_t) || \pi_\theta(y_t | x_t) ) ],
   s.t.   E_{x_t \sim B} [ KL( \pi_{\theta_{k+1}}(y_t | x_t) || \pi_{\theta_k}(y_t | x_t) ) ] \le \epsilon_M.    (9)
This optimisation problem can be solved via Lagrangian relaxation, with the bound \epsilon_M modulating the strength of the trust-region constraint. For further details and full derivations, please refer to (Abdolmaleki et al., 2018).
B.2 HELMS-MIX VIA RHPO
RHPO (Wulfmeier et al., 2020) follows a similar optimisation procedure as MPO, but extends it to mixture policies and multi-task settings. We do not exploit the multi-task capability in this work, but utilise RHPO to optimise the mixture policy in latent space, \pi(z_t | x_t) = \sum_{y_t} \pi(y_t | x_t) \pi(z_t | y_t, x_t). The Q-function Q_\phi(x_t, z_t) and parametric policy \pi_{\theta_k}(z_t | x_t) use the continuous latents z_t as actions instead of the categorical y_t. This is also in contrast to the original formulation of RHPO, which uses the environment's action space. Compared to MPO, the policy improvement stage of the non-parametric policy is minimally adapted to take into account the new mixture policy. The key difference is in the parametric policy update step, which optimises the following:
\theta_{k+1} = \arg\min_\theta E_{x_t \sim B} [ KL( q(z_t | x_t) || \pi_\theta(z_t | x_t) ) ],
   s.t.   E_{x_t \sim B} [ KL( \pi_{\theta_{k+1}}(y_t | x_t) || \pi_{\theta_k}(y_t | x_t) ) + \sum_{y_t} KL( \pi_{\theta_{k+1}}(z_t | y_t, x_t) || \pi_{\theta_k}(z_t | y_t, x_t) ) ] \le \epsilon_M.    (10)
In other words, separate trust-region constraints are applied to a sum of KL-divergences: for the high-level categorical and for each of the mixture components. Following the original RHPO, we separate the single constraint into decoupled constraints that set a different \epsilon for the means, covariances, and categorical (\epsilon_\mu, \epsilon_\sigma, and \epsilon_{cat}, respectively). This allows the optimiser to independently modulate how much the categorical distribution, component means, and component variances can change. For further details and full derivations, please refer to (Wulfmeier et al., 2020).
C ELBO DERIVATION AND INTUITIONS
We can compute the Evidence Lower Bound for the state-conditional action distribution, p(a1:T |x1:T ) ≥ ELBO, as follows:
ELBO = \log p(a_{1:T} | x_{1:T}) - KL( q(y_{0:T}, z_{1:T} | x_{1:T}) || p(y_{0:T}, z_{1:T} | a_{1:T}, x_{1:T}) )
     = E_{q(y_{0:T}, z_{1:T} | x_{1:T})} [ \log p(a_{1:T}, y_{0:T}, z_{1:T} | x_{1:T}) - \log q(y_{0:T}, z_{1:T} | x_{1:T}) ]
     = E_{q_{1:T}} [ \sum_{t=1}^{T} ( \log p(a_t | z_t, x_t) + \log p(z_t | y_t) + \log p(y_t | y_{t-1}) - \log q(z_t | y_t, x_t) - \log q(y_t | y_{t-1}, x_t) ) ]
     = \sum_{t=1}^{T} E_{q_{1:T}} [ \log p(a_t | z_t, x_t) - KL( q(z_t | y_t, x_t) || p(z_t | y_t) ) - KL( q(y_t | y_{t-1}, x_t) || p(y_t | y_{t-1}) ) ]    (11)
We note that the first two terms in the expectation depend only on timestep t, so we can simplify and marginalise exactly over all discrete {y1:T }\yt. For the final term, we note that the KL at timestep t is constant with respect to yt (as it already marginalises over the whole distribution), and only depends on yt−1. Lastly, we will use sampling to approximate the expectation over zt. This yields the following:
ELBO = \sum_{t=1}^{T} E_{q(z_t | y_t, x_t)} [ \sum_{y_{0:T}} q(y_{0:T} | x_{1:T}) ( \log p(a_t | z_t, x_t) - KL( q(z_t | y_t, x_t) || p(z_t | y_t) ) - KL( q(y_t | y_{t-1}, x_t) || p(y_t | y_{t-1}) ) ) ]
ELBO \approx \sum_{t=1}^{T} \sum_{y_t} q(y_t | x_{1:t}) [ \log p(a_t | \tilde{z}_t^{(y_t)}, x_t)   (per-component reconstruction loss)
                                                          - \beta_z KL( q(z_t | y_t, x_t) || p(z_t | y_t) ) ]   (per-component KL regulariser)
     - \beta_y \sum_{t=1}^{T} \sum_{y_{t-1}} q(y_{t-1} | x_{1:t-1}) KL( q(y_t | y_{t-1}, x_t) || p(y_t | y_{t-1}) )   (discrete regulariser)    (12)
where \tilde{z}_t^{(y_t)} \sim q(z_t | y_t, x_t), the coefficients \beta_y and \beta_z can be used to weight the KL terms, and the cumulative component probability q(y_t | x_{1:t}) can be computed iteratively as:
q(y_t | x_{1:t}) = \sum_{y_{t-1}} q(y_t | y_{t-1}, x_t) q(y_{t-1} | x_{1:t-1})    (13)
In other words, for each timestep t and each mixture component, we compute the latent sample and the corresponding action log-probability, and the KL-divergence between the component posterior and prior. This is then marginalised over all yt, with an additional KL over the categorical transitions.
Structuring the graphical model and ELBO in this form has a number of useful properties. First, the ELBO terms include an action reconstruction loss and KL term for each mixture component, scaled by the posterior probability of each component given the history. For a given state, this pressures the model to assign higher posterior probability to components that have low reconstruction cost or KL, which allows different components to specialise for different parts of the state space. Second, the categorical KL between posterior and prior categorical transition distributions is scaled by the
posterior probability of the previous component given history q(yt−1 |x1:t−1): this allows the relative probabilities of past skill transitions along a trajectory to be considered when regularising the current skill distribution. Finally, this formulation does not require any sampling or backpropagation through the categorical variable: starting from t = 0, the terms for each timestep can be efficiently computed by recursively updating the posterior over components given history (q(yt |x1:t)), and summing over all possible categorical values at each timestep.
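A small numpy sketch of this recursion is given below: it updates q(y_t | x_{1:t}) from q(y_{t-1} | x_{1:t-1}) via Equation 13 and accumulates the per-component ELBO terms of Equation 12, using random probability tables and losses in place of network outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 4, 6
# Stand-ins for per-timestep posteriors/priors (normally network outputs):
q_trans = rng.dirichlet(np.ones(K), size=(T, K))   # q(y_t | y_{t-1}, x_t), shape [T, K, K]
p_trans = rng.dirichlet(np.ones(K), size=K)        # p(y_t | y_{t-1}), shape [K, K]
log_recon = rng.normal(size=(T, K))                # log p(a_t | z_t^(y), x_t) per component
kl_z = np.abs(rng.normal(size=(T, K)))             # KL(q(z_t|y_t,x_t) || p(z_t|y_t)) per component

beta_y, beta_z = 1.0, 0.01
q_y = np.full(K, 1.0 / K)      # q(y_0 | x_{1:0}): uniform initial skill prior
elbo = 0.0
for t in range(T):
    # Categorical KL, weighted by the posterior over the *previous* component.
    kl_y = np.sum(q_trans[t] * (np.log(q_trans[t]) - np.log(p_trans)), axis=1)  # one per y_{t-1}
    elbo -= beta_y * np.dot(q_y, kl_y)
    # Update the cumulative component posterior q(y_t | x_{1:t}) by marginalising y_{t-1}.
    q_y = q_y @ q_trans[t]
    # Per-component reconstruction and KL terms, weighted by q(y_t | x_{1:t}).
    elbo += np.dot(q_y, log_recon[t] - beta_z * kl_z[t])
print(elbo)
```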
D ENVIRONMENT PARAMETERS
As discussed earlier in the paper, all experiments take place in a MuJoCo-based object manipulation environment using a Sawyer robot manipulator and three objects: red, green, and blue. The state variables in the Sawyer environment are shown in Table 3. All state variables are stacked for 3 frames for all agents. The object states are only provided to the mid-level and high-level for HeLMS runs, and the camera images are only used by the high- and mid-level controller in the vision transfer experiments (without object states).
The action space is also shown in Table 4. Since the action dimensions vary significantly in range, they are normalised to be between [−1, 1] for all methods during learning. When learning via RL, we apply domain randomisation to physics (but not visual randomisation), and a randomly sampled action delay of 0-2 timesteps. This is applied for all approaches, and ensures that we can learn a policy that is robust to small changes in the environment.
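A minimal sketch of this normalisation (mapping actions between their native per-dimension ranges and [-1, 1]) is shown below; the bounds used here are illustrative placeholders, not the environment's actual ranges.

```python
import numpy as np

# Placeholder per-dimension action bounds (illustrative only).
low = np.array([-0.07, -0.07, -0.07, -1.0, 0.0])
high = np.array([0.07, 0.07, 0.07, 1.0, 255.0])

def normalise(a):
    """Map a native-range action to [-1, 1]."""
    return 2.0 * (a - low) / (high - low) - 1.0

def denormalise(a_norm):
    """Map a policy output in [-1, 1] back to the native action range."""
    return low + (np.clip(a_norm, -1.0, 1.0) + 1.0) * (high - low) / 2.0

a = np.array([0.0, 0.035, -0.07, 1.0, 128.0])
print(normalise(a))
print(denormalise(normalise(a)))
```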
D.1 OBJECT SETS
As discussed in the main paper, we use the object sets defined by Lee et al. (2021), which are carefully designed to cover different object geometries and affordances, presenting different challenges for object interaction tasks. The object sets are shown in Figure 11 (the image has been taken directly from (Lee et al., 2021) for clarity), and feature both simulated and real-world versions; in this paper we focus on the simulated versions. As discussed in detail by (Lee et al., 2021), each object set has a different degree of difficulty and presents a different challenge to the task of stacking red-on-blue:
• In object set 1, the red object has slanted surfaces that make it difficult to grasp, while the blue object is an octagonal prism that can roll.
• In object set 2, the blue object has slanted surfaces, such that the red object will likely slide off unless the blue object is first reoriented.
• In object set 3, the red object is long and narrow, requiring a precise grasp and careful placement.
• Object set 4 is the easiest case with rectangular prisms for both red and blue.
• Object set 5 is also relatively easy, but the blue object has ten faces, meaning limited surface area for stacking.
For more details about the object sets and the rationale behind their design, we refer the reader to (Lee et al., 2021).
E NETWORK ARCHITECTURES AND HYPERPARAMETERS
The network architecture details and hyperparameters for HeLMS are shown in Table 5. Parameter sweeps were performed for the β coefficients during offline learning and the η coefficients during RL. Small sweeps were also performed for the RHPO parameters (refer to (Wulfmeier et al., 2020) for details), but these were found to be fairly insensitive. All other parameters were kept fixed, and used for all methods except where highlighted in the following subsections. All RL experiments were run with 3 seeds to capture variation in each method.
For network architectures, all experiments except for vision used simple 2-layer MLPs for the high- and low-level controllers, and for each mid-level mixture component. An input representation network was used to encode the inputs before passing them to the networks that were learned from scratch: i.e. the high-level for state-based experiments, and both high- and mid-level for vision (recall that while the state-based experiments can reuse the mid-level components conditioned on object state, the vision-based policy learned them from scratch and KL-regularised to the offline mid-level skills). The critic network was a 3-layer MLP, applied to the output of another input representation network (separate to the actor, but with the same architecture) with concatenated action.
F REWARDS
Throughout the experiments, we employ different reward functions for different tasks and to study the efficacy of our method in sparse versus dense reward scenarios.
Reward stages and primitive functions The reward functions for stacking and pyramid tasks use various reward primitives and staged rewards for completing sub-tasks. Each of these rewards is within the range [0, 1].
These include:
• reach(obj): a shaped distance reward to bring the TCP to within a certain tolerance of obj.
• grasp(): a binary reward for triggering the gripper’s grasp sensor.
• close_fingers(): a shaped distance reward to bring the fingers inwards.
• lift(obj): shaped reward for lifting the gripper sufficiently high above obj.
• hover(obj1,obj2): shaped reward for holding obj1 above obj2.
• stack(obj1,obj2): a sparse reward, only provided if obj1 is on top of obj2 to within both a horizontal and vertical tolerance.
• above(obj,dist): shaped reward for being dist above obj, but anywhere horizontally.
• pyramid(obj1,obj2,obj3): a sparse reward, only provided if obj3 is on top of the point midway between obj1 and obj2, to within both a horizontal and vertical tolerance.
• place_near(obj1,obj2): sparse reward provided if obj1 is sufficiently near obj2.
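As a rough illustration of how such primitives might look (the exact shaping functions and tolerances are not specified here, so the ones below are assumptions), consider the following sketch of one shaped and one sparse primitive:

```python
import numpy as np

def reach(tcp_pos, obj_pos, tolerance=0.02, falloff=0.15):
    # Shaped distance reward in [0, 1]: 1 inside the tolerance,
    # smoothly decaying with distance outside it (assumed shaping).
    d = np.linalg.norm(np.asarray(tcp_pos) - np.asarray(obj_pos))
    if d <= tolerance:
        return 1.0
    return float(np.exp(-((d - tolerance) / falloff) ** 2))

def stack(obj1_pos, obj2_pos, horiz_tol=0.03, vert_range=(0.02, 0.08)):
    # Sparse reward: 1 only if obj1 sits above obj2 within both a
    # horizontal and a vertical tolerance, else 0.
    dx, dy, dz = np.asarray(obj1_pos) - np.asarray(obj2_pos)
    horizontal_ok = np.hypot(dx, dy) <= horiz_tol
    vertical_ok = vert_range[0] <= dz <= vert_range[1]
    return float(horizontal_ok and vertical_ok)

print(reach([0.0, 0.0, 0.10], [0.0, 0.0, 0.12]))  # close to 1
print(stack([0.0, 0.0, 0.10], [0.0, 0.0, 0.05]))  # 1.0
```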
Dense stacking reward The dense stacking reward contains a number of stages, where each stage represents a sub-task and has a maximum reward of 1. The stages are:
• reach(red) AND grasp(): Reach and grasp the red object.
• lift(red) AND grasp(): Lift the red object.
• hover(red,blue): Hover with the red object above the blue object.
• stack(red,blue): Place the red object on top of the blue one.
• stack(red,blue) AND above(red): Move the gripper above after a completed stack.
At each timestep, the latest stage to receive non-zero reward is considered to be the current stage, and all previous stages are assigned a reward of 1. The reward for this timestep is then obtained by summing rewards for all stages, and scaling by the number of stages, to ensure the highest possible reward on any timestep is 1.
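The staging logic described above can be sketched as follows (an illustrative re-implementation, not the authors' code); the 0.95 sparsification threshold used by the sparse variants described next is included as an optional flag.

```python
def staged_reward(stage_rewards, sparse=False, threshold=0.95):
    """Combine per-stage rewards (each in [0, 1], ordered by task progress).

    The latest stage with non-zero reward is the current stage; all earlier
    stages are credited with 1. The sum is scaled by the number of stages so
    the maximum per-timestep reward is 1. With sparse=True, a stage only
    counts once its reward exceeds the threshold (sparsified variant).
    """
    if sparse:
        stage_rewards = [r if r > threshold else 0.0 for r in stage_rewards]
    current = max((i for i, r in enumerate(stage_rewards) if r > 0.0), default=-1)
    total = float(current if current > 0 else 0)            # earlier stages count as 1 each
    total += stage_rewards[current] if current >= 0 else 0.0  # current stage keeps its own value
    return total / len(stage_rewards)

# Example: five stages, currently partway through the third one.
print(staged_reward([1.0, 1.0, 0.4, 0.0, 0.0]))  # (1 + 1 + 0.4) / 5 = 0.48
```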
Sparse staged stacking reward The sparse staged stacking reward is similar to the dense reward variant, but each stage is sparsified by only providing the reward for the stage once it exceeds a value of 0.95.
This scenario emulates an important real-world problem: that it may be difficult in certain cases to specify carefully shaped meaningful rewards, and it can often be easier to specify (sparsely) whether a condition (such as stacking) has been met.
Sparse stacking reward This fully sparse reward uses the stack(red,blue) function to provide reward only when conditions for stacking red on blue have been met.
Pyramid reward The pyramid-building reward uses a staged sparse reward, where each stage represents a sub-task and has a maximum reward of 1. If a stage has dense reward, it is sparsified by only providing the reward once it exceeds a value of 0.95. The stages are:
• reach(red) AND grasp(): Reach and grasp the red object.
• lift(red) AND grasp(): Lift the red object.
• hover(red,green): Hover with the red object above the green object (with a larger horizontal tolerance, as it does not need to be directly above).
• place_near(red,green): Place the red object sufficiently close to the green object.
• reach(blue) AND grasp(): Reach and grasp the blue object.
• lift(blue) AND grasp(): Lift the blue object.
• hover(blue,green) AND hover(blue,red): Hover with the blue object above the central position between red and green objects.
• pyramid(blue,red,green): Place the blue object on top to make a pyramid.
• pyramid(blue,red,green) AND above(blue): Move the gripper above after a completed stack.
At each timestep, the latest stage to receive non-zero reward is considered to be the current stage, and all previous stages are assigned a reward of 1. The reward for this timestep is then obtained by summing rewards for all stages, and scaling by the number of stages, to ensure the highest possible reward on any timestep is 1. | 1. What is the main contribution of the paper regarding motor skill learning?
2. What are the strengths of the proposed hierarchy system, particularly in transferring skills between tasks?
3. What are some concerns or questions regarding the experimental results, such as the performance of HeLMS-mix and NPMP?
4. How does the reviewer assess the clarity and quality of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a three-level hierarchy for learning motor skills. At the bottom, there is a low-level controller for action generation and at the top a high-level controller that generates sequences of skills, with continuous control signals generated by skills at the mid-level. When transferring motor skills from one task to another using reinforcement learning, it is possible to either retrain only the high-level controller or the mid-level one as well. In experiments, it is shown that for transfers to more complex manipulation tasks with sparse rewards, you are more likely to get convergence in training with positive rewards, if only retraining the high-level controller. Thus you are given more flexibility and are possibly able to learn more complex tasks, with the proposed hierarchy compared to earlier methods based on only two levels.
Review
The most interesting aspect of the proposed system is the fact that transfer by retraining only at the discrete level can be so advantageous. This is true at least as long as you stick to manipulation for which skills like moving the end-effector and opening/closing the gripper are quite generally applicable. However, since different levels may work at different dimensions of the state space, it might be possible to retrain the mid-level controller from one modality to another, something that is illustrated in the experiments.
The experimental section is quite extensive and includes comparisons to other alternative methods, as well as ablation studies. Two variants are tested for transfer learning; one that only retrains the high-level controller (HeLMS-cat) and one that also retrains the mid-level (HeLMS-mix), while the low-level controller is always kept fixed. The first option seems to be more beneficial in complex cases with sparse rewards. In fact, it is a bit surprising that HeLMS-mix rarely seems to surpass HeLMS-cat. It is easy to assume that this is due to the similarity between manipulation tasks when it comes to what skill sets are required.
Compared to the other alternative methods tested, HeLMS seems to cover a larger state space during exploration, which in turn leads to higher rewards, when rewards are sparse. With the three-level hierarchy, exploration can be done at a higher level of abstraction while exploiting skills already learned.
There are some questions worth asking regarding the experimental results. It is shown that for more complicated tasks, HeLMS-mix might come to the point where the average reward starts to decrease. It would seem more reasonable if the reward had just flattened out at a lower level, which happens in most other cases. An explanation given in the paper is that in such cases spurious reward correlations might cause the skills to drift. Does this instability come as a result of including an additional hierarchical level, since none of the other tested methods seems to show anything similar? Another question is why NPMP performs better on object set 2 specifically? Is there any reasonable explanation or is it just a coincidence?
When referring to HeLMS in the experiments, it would be good if the paper mentioned either HeLMS-cat or HeLMS-mix, unless both variants are intended. Otherwise, the reader has to go into the text to see which variant was actually used. For example, which variant is used in Table I? By the way, in this table, what is going on with the space coverage in the dimension of grasping? Since it is binary it looks as if none of the other methods really tries to explore grasping, which is odd given the nature of the task given.
Figures 7 c)-d) use the wrong notation for the regularization weight, at least compared to what is written in the text. Also, instead of HeLMS-cat the label says just HeLM.
On page three, “ELBO <= p(a|x)” is written in the wrong order. It is correct in appendix B. Other than that, the paper is very easy to read and understand, with clarity in both language and notations. |
ICLR | Title
Efficient Exploration for Model-based Reinforcement Learning with Continuous States and Actions
Abstract
Balancing exploration and exploitation is crucial in reinforcement learning (RL). In this paper, we study the model-based posterior sampling algorithm in continuous state-action spaces theoretically and empirically. First, we improve the regret bound: with the assumption that reward and transition functions can be modeled as Gaussian Processes with linear kernels, we develop a Bayesian regret bound of Õ(H^{3/2}d√T), where H is the episode length, d is the dimension of the state-action space, and T indicates the total time steps. Our bound can be extended to nonlinear cases as well: using linear kernels on the feature representation φ, the Bayesian regret bound becomes Õ(H^{3/2}dφ√T), where dφ is the dimension of the representation space. Moreover, we present MPC-PSRL, a model-based posterior sampling algorithm with model predictive control for action selection. To capture the uncertainty in models and realize posterior sampling, we use Bayesian linear regression on the penultimate layer (the feature representation layer φ) of neural networks. Empirical results show that our algorithm achieves the best sample efficiency in benchmark control tasks compared to prior model-based algorithms, and matches the asymptotic performance of model-free algorithms.
1 INTRODUCTION
In reinforcement learning (RL), an agent interacts with an unknown environment which is typically modeled as a Markov Decision Process (MDP). Efficient exploration has been one of the main challenges in RL: the agent is expected to balance between exploring unseen state-action pairs to gain more knowledge about the environment, and exploiting existing knowledge to optimize rewards in the presence of known data.
To achieve efficient exploration, Bayesian reinforcement learning is proposed, where the MDP itself is treated as a random variable with a prior distribution. This prior distribution of the MDP provides an initial uncertainty estimate of the environment, which generally contains distributions of transition dynamics and reward functions. The epistemic uncertainty (subjective uncertainty due to limited data) in reinforcement learning can be captured by posterior distributions given the data collected by the agent.
Posterior sampling reinforcement learning (PSRL), motivated by Thompson sampling in bandit problems (Thompson, 1933), serves as a provably efficient algorithm under Bayesian settings. In PSRL, the agent maintains a posterior distribution for the MDP and follows an optimal policy with respect to a single MDP sampled from the posterior distribution for interaction in each episode. Appealing results of PSRL in tabular RL were presented by both model-based (Osband et al., 2013; Osband & Van Roy, 2017) and model-free approaches (Osband et al., 2019) in terms of the Bayesian regret. For H-horizon episodic RL, PSRL was proved to achieve a regret bound of Õ(H√(SAT)), where S and A denote the number of states and actions, respectively. However, in continuous state-action spaces S and A can be infinite, hence the above results do not apply.
Although PSRL in continuous spaces has also been studied in episodic RL, existing results either provide no guarantee or suffer from an exponential order of H . In this paper, we achieve the first Bayesian regret bound for posterior sampling algorithms that is near optimal in T (i.e. √ T ) and
polynomial in the episode length H for continuous state-action spaces. We will explain the limitations of previous works in Section 1.1, then summarize our approach and contributions in Section 1.2.
1.1 LIMITATIONS OF PREVIOUS BAYESIAN REGRETS IN CONTINUOUS SPACES
The exponential order of H: In model-based settings, Osband & Van Roy (2014) derive a regret bound of Õ(σ_R √(d_K(R) d_E(R) T) + E[L^*] σ_P √(d_K(P) d_E(P))), where L^* is a global Lipschitz constant for the future value function defined in their Eq. (3). However, L^* depends on H: the difference between input states propagates over H steps, which results in a term dependent on H in the value function. The authors do not mention this dependency, so there is no clear dependency on H in their regret. Moreover, they use the Lipschitz constant of the underlying value function as an upper bound on L^* in the corollaries, which yields an exponential order in H. Take their Corollary 2 on linear quadratic systems as an example: the regret bound is Õ(σ C λ_1 n^2 √T), where λ_1 is the largest eigenvalue of the matrix Q in the optimal value function V_1(s) = s^T Q s.¹ However, the largest eigenvalue of Q is actually exponential in H.² Even if we change the reward function from quadratic to linear, the Lipschitz constant of the optimal value function is still exponential in H.³ Chowdhury & Gopalan (2019) maintain this Lipschitz assumption, so E[L^*] appears in their regret with no clear dependency on H; in their Corollary 2 on LQR, they follow the same steps as Osband & Van Roy (2014) and still retain a term with λ_1, which is actually exponential in H as discussed. Osband & Van Roy (2014) mention that system noise helps to smooth future values, but they do not exploit this, even though the noise is assumed to be subgaussian; they directly use the Lipschitz continuity of the underlying function in the analysis of LQR and thus cannot avoid the exponential term in H. Chowdhury & Gopalan (2019) do not explore how the system noise can improve the theoretical bound either. In model-free settings, Azizzadenesheli et al. (2018) develop a regret bound of Õ(d_φ √T) using a linear function approximator in the Q-network, where d_φ is the dimension of the feature representation vector of the state-action space, but their bound is still exponential in H, as mentioned in their paper.
High dimensionality: The eluder dimension of neural networks in Osband & Van Roy (2014) can be infinite, and the information gain (Srinivas et al., 2012) used in Chowdhury & Gopalan (2019) yields an exponential order of the state-action space dimension d if nonlinear kernels, such as SE kernels, are used. However, linear kernels can only model linear functions, so the representation power is highly restricted if a polynomial order of d is desired.
1.2 OUR APPROACH AND MAIN CONTRIBUTIONS
To further improve the regret bound for PSRL in continuous spaces, especially with explicit dependency on H, we study model-based posterior sampling algorithms in episodic RL. We assume that rewards and transitions can be modeled as Gaussian Processes with linear kernels, and extend the assumption to non-linear settings utilizing features extracted by neural networks. For the linear case, we develop a Bayesian regret bound of Õ(H^{3/2}d√T). Using the feature embedding technique mentioned in Yang & Wang (2019), we derive a bound of Õ(H^{3/2}dφ√T). Our Bayesian regret is the best-known Bayesian regret for posterior sampling algorithms in continuous state-action spaces, and it also matches the best-known frequentist regret (Zanette et al. (2020); discussed in Section 2). Explicitly dependent on d, H, T, our result achieves a significant improvement in terms of the Bayesian regret of PSRL algorithms compared to previous works:
1. We significantly improve the order of H to polynomial: In our analysis, we use the property of subgaussian noise, which is already assumed in Osband & Van Roy (2014) and Chowdhury & Gopalan (2019), to develop a bound with a clear polynomial dependency on H, without assuming the Lipschitz continuity of the underlying value function. More specifically, we prove Lemma 1 and use it to develop a clear dependency on H, which lets us avoid handling the Lipschitz continuity of the underlying value function.

¹V_1 denotes the value function counting from step 1 to H within an episode, s is the initial state, the reward at the i-th step is r_i = s_i^T P s_i + a_i^T R a_i + ε_{R,i}, and the state at the (i+1)-th step is s_{i+1} = A s_i + B a_i + ε_{P,i}, i ∈ [H].

²Recall that by the Bellman equation, V_i(s_i) = min_{a_i} s_i^T P s_i + a_i^T R a_i + ε_{R,i} + V_{i+1}(A s_i + B a_i + ε_{P,i}), with V_{H+1}(s) = 0. Thus V_1(s) contains a term (A^{H−1} s)^T P (A^{H−1} s), and the eigenvalues of the matrix (A^{H−1})^T P A^{H−1} are exponential in H.

³For example, if r_i = s_i^T P + a_i^T R + ε_{R,i}, there would still be a term (A^{H−1} s)^T P in V_1(s).
2. Lower dimensionality compared to Osband & Van Roy (2014) and Chowdhury & Gopalan (2019): We first derive results for linear kernels, and increase the representation power of the linear model by building a Bayesian linear regression model on the feature representation space instead of the original state-action space. As a result, we can use the result for linear kernels to derive a bound linear in the feature dimension. The feature dimension, which in practice is the dimension of the last hidden layer of the neural networks required for learning, is much lower than the exponential of the input dimension, so we avoid the exponential order of the dimension that arises from the use of nonlinear kernels in Chowdhury & Gopalan (2019).
3. Fewer assumptions and a different proof strategy compared to Chowdhury & Gopalan (2019): Although we also use kernelized MDPs like Chowdhury & Gopalan (2019), we omit their Assumption A1 (Lipschitz assumption) and A2 (regularity assumption), and only use A3 (subgaussian noise). We avoid A1 since it can be derived from our Lemma 1. Moreover, we directly analyze the regret bound of PSRL using the fact that the sampled MDP and the real unknown MDP share the same distribution conditioned on history. In contrast, Chowdhury & Gopalan (2019) first analyze UCRL (upper confidence bound in RL) with the extra assumption A2, then transfer the result to PSRL.
Empirically, we implement PSRL using Bayesian linear regression (BLR) on the penultimate layer (for feature representation) of neural networks when fitting transition and reward models. We use model predictive control (MPC; Camacho & Alba (2013)) to optimize the policy under the sampled models in each episode as an approximate solution of the sampled MDP, as described in Section 5. Experiments show that our algorithm achieves more efficient exploration compared with previous model-based algorithms on benchmark control tasks.
2 RELATED WORK ON FREQUENTIST REGRETS
Besides the aforementioned works on Bayesian regret bounds, the majority of papers on efficient RL take the non-Bayesian perspective and develop frequentist regret bounds, where the regret for any MDP M^* ∈ M is bounded and M^* ∈ M holds with high probability. Frequentist regret bounds can be expressed in the Bayesian view: for a given confidence set M, a frequentist regret bound implies an identical Bayesian regret bound for any prior distribution with support on M. Note that frequentist regret is extensively studied in tabular RL (see Jaksch et al. (2010), Azar et al. (2017), and Jin et al. (2018) as examples), among which the best bound for episodic settings is Õ(H√(SAT)).
There is also a line of work that develops frequentist bounds with feature representation. Most recently, MatrixRL, proposed by Yang & Wang (2019), uses a low-dimensional representation and achieves a regret bound of Õ(H²dφ√T), which is the best-known frequentist bound in model-based settings. While our method is also model-based, we achieve a tighter regret bound when compared in the Bayesian view. In model-free settings, Jin et al. (2020) developed a bound of Õ(H^{3/2}dφ^{3/2}√T). Zanette et al. (2020) further improved the regret to Õ(H^{3/2}dφ√T) with a proposed algorithm called ELEANOR, which achieves the best-known frequentist bound in model-free settings; they showed that it is unimprovable with the help of a lower bound established in the bandit literature. Although our regret is developed in model-based settings, it matches their bound with the same order of H, dφ and T in the Bayesian view. Moreover, their algorithm involves optimization over all MDPs in the confidence set, and thus can be computationally prohibitive. Our method is computationally tractable, as it is much easier to optimize a single sampled MDP, while matching their regret bound in the Bayesian view.
3 PRELIMINARIES
3.1 PROBLEM FORMULATION
We model an episodic finite-horizon Markov Decision Process (MDP) M as {S, A, R^M, P^M, H, σ_r, σ_f, R_max, ρ}, where S ⊂ R^{d_s} and A ⊂ R^{d_a} denote the state and action spaces, respectively. Each episode of length H has an initial state distribution ρ. At time step i ∈ [1, H] within an episode, the agent observes s_i ∈ S, selects a_i ∈ A, receives a noised reward r_i ∼ R^M(s_i, a_i), and transitions to a noised new state s_{i+1} ∼ P^M(s_i, a_i). More specifically, r(s_i, a_i) = r̄^M(s_i, a_i) + ε_r and s_{i+1} = f^M(s_i, a_i) + ε_f, where ε_r ∼ N(0, σ_r²) and ε_f ∼ N(0, σ_f² I_{d_s}). The variances σ_r² and σ_f² are fixed to control the noise level. Without loss of generality, we assume the expected reward the agent receives at a single step is bounded: |r̄^M(s, a)| ≤ R_max for all s ∈ S, a ∈ A. Let µ: S → A be a deterministic policy. We define the value function for state s at time step i under policy µ as V^M_{µ,i}(s) = E[Σ_{j=i}^H r̄^M(s_j, a_j) | s_i = s], where s_{j+1} ∼ P^M(s_j, a_j) and a_j = µ(s_j). With the bounded expected reward, we have |V(s)| ≤ H R_max for all s. We use M^* to indicate the real unknown MDP, which includes R^* and P^*, and M^* itself is treated as a random variable. Thus, we can treat the real noiseless reward function r̄^* and transition function f^* as random processes as well. In the posterior sampling algorithm π_ps, M^k is a random sample from the posterior distribution of the real unknown MDP M^* in the k-th episode, which includes the posterior samples of R^k and P^k, given the history prior to the k-th episode: H_k := {s_{1,1}, a_{1,1}, r_{1,1}, ..., s_{k−1,H}, a_{k−1,H}, r_{k−1,H}}, where s_{k,i}, a_{k,i} and r_{k,i} indicate the state, action, and reward at time step i in episode k. We define the optimal policy under M as µ^M ∈ argmax_µ V^M_{µ,i}(s) for all s ∈ S and i ∈ [H]. In particular, µ^* indicates the optimal policy under M^* and µ^k represents the optimal policy under M^k. Let ∆_k denote the regret over the k-th episode:

∆_k = ∫ ρ(s_1) ( V^{M^*}_{µ^*,1}(s_1) − V^{M^*}_{µ^k,1}(s_1) ) ds_1.    (1)
Then we can express the regret of πps up to time step T as:
Regret(T, π_ps, M^*) := Σ_{k=1}^{⌈T/H⌉} ∆_k,    (2)
Let BayesRegret(T, π_ps, φ) denote the Bayesian regret of π_ps as defined in Osband & Van Roy (2017), where φ is the prior distribution of M^*:
BayesRegret(T, πps, φ) = E[Regret(T, πps,M∗)]. (3)
3.2 ASSUMPTIONS
Generally, we consider modeling an unknown target function g: R^d → R. We are given a set of noisy samples y = [y_1, ..., y_T]^T at points X = [x_1, ..., x_T]^T, X ⊂ D, where D is compact and convex, and y_i = g(x_i) + ε_i with ε_i ∼ N(0, σ²) i.i.d. Gaussian noise for all i ∈ {1, ..., T}. We model g as a sample from a Gaussian Process GP(µ(x), K(x, x′)), specified by the mean function µ(x) = E[g(x)] and the covariance (kernel) function K(x, x′) = E[(g(x) − µ(x))(g(x′) − µ(x′))]. Let the prior distribution without any data be GP(0, K(x, x′)). Then the posterior distribution over g given X and y is also a GP with mean µ_T(x), covariance K_T(x, x′), and variance σ_T²(x):

µ_T(x) = K(x, X)(K(X, X) + σ²I)^{−1} y,
K_T(x, x′) = K(x, x′) − K(X, x)^T (K(X, X) + σ²I)^{−1} K(X, x),
σ_T²(x) = K_T(x, x),

where K(X, x) = [K(x_1, x), ..., K(x_T, x)]^T and K(X, X) = [K(x_i, x_j)]_{1≤i≤T, 1≤j≤T}.
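The posterior equations above reduce to a few lines of linear algebra. Below is a small numpy sketch (illustrative only, not the paper's code) that uses the linear kernel K(x, x′) = x^T Σ_p x′ induced by a weight prior w ∼ N(0, Σ_p); the data, σ, and Σ_p are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, sigma = 3, 50, 0.1
Sigma_p = np.eye(d)                      # prior covariance of the weights

def k(a, b):
    # Linear kernel induced by the weight prior: K(x, x') = x^T Sigma_p x'.
    return a @ Sigma_p @ b.T

w_true = rng.normal(size=d)
X = rng.normal(size=(T, d))
y = X @ w_true + sigma * rng.normal(size=T)   # noisy samples of g(x) = w^T x

def posterior(x):
    # Posterior mean and variance at a query point x, following the equations above.
    K_XX = k(X, X)
    K_Xx = k(X, x[None, :]).ravel()
    A = np.linalg.solve(K_XX + sigma**2 * np.eye(T), np.eye(T))   # (K + sigma^2 I)^-1
    mu = K_Xx @ A @ y
    var = k(x[None, :], x[None, :]).item() - K_Xx @ A @ K_Xx
    return mu, var

x_query = rng.normal(size=d)
mu, var = posterior(x_query)
print(mu, x_query @ w_true, var)   # the posterior mean should be close to the true value
```

For transitions, each output dimension f_i(s, a) would be treated as a separate target with this same posterior variance, as described in the next paragraph.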
We model our reward function r̄^M as a Gaussian Process with noise σ_r². For transition models, we treat each dimension independently: each f_i(s, a), i = 1, ..., d_s, is modeled independently as above, with the same noise level σ_f² in each dimension. This corresponds to our formulation in the RL setting. Since the posterior covariance matrix depends only on the inputs rather than the target values, the distributions of the f_i(s, a) share the same covariance matrix and differ only in the mean function.
4 BAYESIAN REGRET ANALYSIS
4.1 LINEAR CASE
Theorem 1 In the RL problem formulated in Section 3.1, under the assumption of Section 3.2 with linear kernels⁴, we have BayesRegret(T, π_ps, M^*) = Õ(H^{3/2} d √T), where d is the dimension of the state-action space, H is the episode length, and T is the time elapsed.

⁴A GP with a linear kernel corresponds to Bayesian linear regression f(x) = w^T x, where the prior distribution of the weights is w ∼ N(0, Σ_p).
Proof The regret in episode k can be rearranged as:

∆_k = ∫ ρ(s_1) [ ( V^{M^*}_{µ^*,1}(s_1) − V^{M^k}_{µ^k,1}(s_1) ) + ( V^{M^k}_{µ^k,1}(s_1) − V^{M^*}_{µ^k,1}(s_1) ) ] ds_1.    (4)
Note that conditioned upon history H_k, for any k, M^k and M^* are identically distributed. Osband & Van Roy (2014) showed that V^{M^*}_{µ^*,1} − V^{M^k}_{µ^k,1} is zero in expectation, and that only the second part of the regret decomposition needs to be bounded when deriving the Bayesian regret of PSRL. Thus we can focus on the policy µ^k, the sampled M^k, and the real environment data generated by M^*. For clarity, the value function V^{M^k}_{µ^k,1} is simplified to V^k_{k,1} and V^{M^*}_{µ^k,1} to V^*_{k,1}. It suffices to derive bounds for any initial state s_1, as the regret bound will still hold through integration over the initial distribution ρ(s_1).
E[∆̃_k | H_k] := E[ V^k_{k,1}(s_1) − V^*_{k,1}(s_1) | H_k ]
= E[ r̄^k(s_1, a_1) − r̄^*(s_1, a_1) + ∫ P^k(s′|s_1, a_1) V^k_{k,2}(s′) ds′ − ∫ P^*(s′|s_1, a_1) V^*_{k,2}(s′) ds′ | H_k ]
= E[ Σ_{i=1}^H ( r̄^k(s_i, a_i) − r̄^*(s_i, a_i) ) + Σ_{i=1}^H ∫ ( P^k(s′|s_i, a_i) − P^*(s′|s_i, a_i) ) V^k_{k,i+1}(s′) ds′ | H_k ]
= E[ ∆̃_k(r) + ∆̃_k(f) | H_k ],    (5)

where a_i = µ^k(s_i), s_{i+1} ∼ P^*(s_{i+1}|s_i, a_i), ∆̃_k(r) = Σ_{i=1}^H ( r̄^k(s_i, a_i) − r̄^*(s_i, a_i) ), and ∆̃_k(f) = Σ_{i=1}^H ∫ ( P^k(s′|s_i, a_i) − P^*(s′|s_i, a_i) ) V^k_{k,i+1}(s′) ds′. Thus, here (s_i, a_i) is the state-action pair that the agent encounters in the k-th episode while using µ^k for interaction in the real MDP M^*. We define V_{k,H+1} = 0 for consistency. Note that we cannot treat s_i and a_i as deterministic and only take the expectation directly over the random reward and transition functions. Instead, we need to bound the difference using concentration properties of the reward and transition functions modeled as Gaussian Processes (which apply to any state-action pair), and then derive bounds on this expectation. For all i, we have

∫ ( P^k(s′|s_i, a_i) − P^*(s′|s_i, a_i) ) V^k_{k,i+1}(s′) ds′ ≤ max_s |V^k_{k,i+1}(s)| ∫ |P^k(s′|s_i, a_i) − P^*(s′|s_i, a_i)| ds′ ≤ H R_max ∫ |P^k(s′|s_i, a_i) − P^*(s′|s_i, a_i)| ds′.
Now we present a lemma which enables us to derive a regret bound with explicit dependency on the episode length H .
Lemma 1 For two multivariate Gaussian distributions N(µ, σ²I) and N(µ′, σ²I) with probability density functions p_1(x) and p_2(x) respectively, x ∈ R^d,

∫ |p_1(x) − p_2(x)| dx ≤ √(2/(πσ²)) ||µ − µ′||_2.
The proof is in Appendix A.1. Clearly, this result can also be extended to sub-Gaussian noises.
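As an informal numerical sanity check (not part of the paper), the inequality in Lemma 1 can be verified by Monte Carlo estimation of the integral for two isotropic Gaussians; the dimension, noise level, and mean offset below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma = 4, 0.5
mu1 = rng.normal(size=d)
mu2 = mu1 + 0.5 * rng.normal(size=d)

def log_pdf(x, mu):
    # Log-density of N(mu, sigma^2 I), evaluated row-wise.
    return (-0.5 * np.sum((x - mu) ** 2, axis=-1) / sigma**2
            - 0.5 * d * np.log(2 * np.pi * sigma**2))

# Estimate the integral of |p1 - p2| by importance sampling from the mixture q = (p1 + p2)/2.
n = 200_000
samples = np.concatenate([
    mu1 + sigma * rng.normal(size=(n // 2, d)),
    mu2 + sigma * rng.normal(size=(n // 2, d)),
])
p1 = np.exp(log_pdf(samples, mu1))
p2 = np.exp(log_pdf(samples, mu2))
lhs = np.mean(np.abs(p1 - p2) / (0.5 * (p1 + p2)))
rhs = np.sqrt(2.0 / (np.pi * sigma**2)) * np.linalg.norm(mu1 - mu2)
print(lhs, rhs, lhs <= rhs)  # the Monte Carlo estimate should sit below the bound
```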
Recall that P^k(s′|s_i, a_i) = N(f^k(s_i, a_i), σ_f² I) and P^*(s′|s_i, a_i) = N(f^*(s_i, a_i), σ_f² I). By Lemma 1 we have

∫ |P^k(s′|s_i, a_i) − P^*(s′|s_i, a_i)| ds′ ≤ √(2/(πσ_f²)) ||f^k(s_i, a_i) − f^*(s_i, a_i)||_2.    (6)
Lemma 2 (Rigollet & Hütter, 2015) Let X_1, ..., X_N be N sub-Gaussian random variables with variance σ² (not required to be independent). Then for any t > 0, P( max_{1≤i≤N} |X_i| > t ) ≤ 2N e^{−t²/(2σ²)}.
Given history H_k, let f̄^k(s, a) denote the posterior mean of f^k(s, a) in episode k, and σ_k²(s, a) the posterior variance of f^k in each dimension. Note that f^* and f^k share the same variance in each dimension given history H_k, as described in Section 3. Considering all dimensions of the state space, by Lemma 2 we have that with probability at least 1 − δ, max_{1≤i≤d_s} |f_i^k(s, a) − f̄_i^k(s, a)| ≤ √(2 σ_k²(s, a) log(2d_s/δ)). Also, we can bound the norm of the state difference as ||f^k(s, a) − f̄^k(s, a)||_2 ≤ √(d_s) · max_{1≤i≤d_s} |f_i^k(s, a) − f̄_i^k(s, a)|, and the same holds for ||f^*(s, a) − f̄^k(s, a)||_2 since f^* and f^k share the same posterior distribution. By the union bound, with probability at least 1 − 2δ, ||f^k(s, a) − f^*(s, a)||_2 ≤ 2√(2 d_s σ_k²(s, a) log(2d_s/δ)).
Then we look at the sum of the differences over horizon H, without requiring each variable in the sum to be independent:

P( Σ_{i=1}^H ||f^k(s_i, a_i) − f^*(s_i, a_i)||_2 > Σ_{i=1}^H 2√(2 d_s σ_k²(s_i, a_i) log(2d_s/δ)) )
≤ P( ∪_{i=1}^H { ||f^k(s_i, a_i) − f^*(s_i, a_i)||_2 > 2√(2 d_s σ_k²(s_i, a_i) log(2d_s/δ)) } )
≤ Σ_{i=1}^H P( ||f^k(s_i, a_i) − f^*(s_i, a_i)||_2 > 2√(2 d_s σ_k²(s_i, a_i) log(2d_s/δ)) ).    (7)
Thus, with probability at least 1 − 2Hδ, we have Σ_{i=1}^H ||f^k(s_i, a_i) − f^*(s_i, a_i)||_2 ≤ Σ_{i=1}^H 2√(2 d_s σ_k²(s_i, a_i) log(2d_s/δ)). Letting δ′ = 2Hδ, we have that with probability 1 − δ, Σ_{i=1}^H ||f^k(s_i, a_i) − f^*(s_i, a_i)||_2 ≤ Σ_{i=1}^H 2√(2 d_s σ_k²(s_i, a_i) log(4Hd_s/δ)) ≤ 2H √(2 d_s σ_k²(s_{k_max}, a_{k_max}) log(4Hd_s/δ)), where the index k_max = argmax_i σ_k(s_i, a_i), i = 1, ..., H, in episode k. Here, since the posterior distribution is only updated every H steps, we have to use the data point with the maximum variance in each episode to bound the result. Similarly, using the union bound over ⌈T/H⌉ episodes, and letting C = √(2/(πσ_f²)), we have that with probability at least 1 − δ,

Σ_{k=1}^{⌈T/H⌉} [∆̃_k(f) | H_k] ≤ Σ_{k=1}^{⌈T/H⌉} Σ_{i=1}^H 2C H R_max ||f^k(s_i, a_i) − f^*(s_i, a_i)||_2 ≤ Σ_{k=1}^{⌈T/H⌉} 4C H² R_max √(2 d_s σ_k²(s_{k_max}, a_{k_max}) log(4Td_s/δ)).

In each episode k, let σ′_k²(s, a) denote the posterior variance given only the subset of data points {(s_{1_max}, a_{1_max}), ..., (s_{(k−1)_max}, a_{(k−1)_max})}, where each element has the maximum variance in the corresponding episode. By Eq. (6) in Williams & Vivarelli (2000), we know that the posterior variance decreases as the number of data points grows. Hence for all (s, a), σ_k²(s, a) ≤ σ′_k²(s, a). By Theorem 5 in Srinivas et al. (2012), which provides a bound on the information gain, and Lemma 2 in Russo & Van Roy (2014), which bounds the sum of variances by the information gain, we have that Σ_{k=1}^{⌈T/H⌉} σ′_k²(s_{k_max}, a_{k_max}) = O((d_s + d_a) log⌈T/H⌉) for linear kernels with bounded variances. Note that the bounded-variance property for linear kernels only requires that the range of all state-action pairs actually encountered in M^* does not expand to infinity as T grows, which holds in general episodic MDPs.
Thus with probability 1 − δ, and letting δ = 1/T,

Σ_{k=1}^{⌈T/H⌉} [∆̃_k(f) | H_k] ≤ Σ_{k=1}^{⌈T/H⌉} 4C H² R_max √(2 d_s σ_k²(s_{k_max}, a_{k_max}) log(4Td_s/δ))
≤ Σ_{k=1}^{⌈T/H⌉} 8C H² R_max √(d_s σ′_k²(s_{k_max}, a_{k_max}) log(2Td_s))
≤ 8C H² R_max √( Σ_{k=1}^{⌈T/H⌉} σ′_k²(s_{k_max}, a_{k_max}) ) √(⌈T/H⌉) √(d_s log(2Td_s))
= 8C H^{3/2} R_max √T √(d_s log(2Td_s)) · √( O((d_s + d_a) log⌈T/H⌉) )
= Õ( (d_s + d_a) H^{3/2} √T ),    (8)
where Õ ignores logarithmic factors.
Therefore, E[ Σ_{k=1}^{⌈T/H⌉} ∆̃_k(f) | H_k ] ≤ (1 − 1/T) Õ((d_s + d_a) H^{3/2} √T) + (1/T) · 2H R_max · ⌈T/H⌉ = Õ(H^{3/2} d √T), where 2H R_max is the upper bound on the difference of value functions, and d = d_s + d_a. By a similar derivation, E[ Σ_{k=1}^{⌈T/H⌉} ∆̃_k(r) | H_k ] = Õ(√(dHT)). Finally, through the tower property we have BayesRegret(T, π_ps, M^*) = Õ(H^{3/2} d √T).
Algorithm 1 MPC-PSRL
Initialize data D with random actions for one episode
repeat
  Sample a transition model and a cost model at the beginning of each episode
  for i = 1 to H steps do
    Obtain action using MPC with planning horizon τ: a_i ∈ argmax_{a_{i:i+τ}} Σ_{t=i}^{i+τ} E[r(s_t, a_t)]
    D = D ∪ {(s_i, a_i, r_i, s_{i+1})}
  end for
  Train cost and dynamics representations φ_r and φ_f using data in D
  Update φ_r(s, a), φ_f(s, a) for all (s, a) collected
  Perform posterior update of w_r and w_f in the cost and dynamics models using the updated representations φ_r(s, a), φ_f(s, a) for all (s, a) collected
until convergence
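For concreteness, a schematic Python rendering of this loop is sketched below. All object interfaces (the environment, the two models, and the MPC planner) are hypothetical stand-ins for the components described in Sections 5.1 and 5.2, not the authors' implementation, and details such as termination handling are omitted.

```python
def mpc_psrl(env, dynamics_model, reward_model, mpc, num_episodes, horizon, data):
    """Schematic outline of Algorithm 1 (hypothetical object interfaces).

    `data` is assumed to be pre-filled with one episode of random-action
    transitions (s_i, a_i, r_i, s_{i+1}); the model and planner objects are
    placeholders for the components described in Sections 5.1 and 5.2.
    """
    for _ in range(num_episodes):
        # Posterior sampling: draw one transition model and one cost/reward
        # model at the start of the episode and keep them fixed throughout.
        dyn = dynamics_model.sample_posterior()
        rew = reward_model.sample_posterior()
        state = env.reset()
        for _ in range(horizon):
            action = mpc.plan(state, dyn, rew)      # MPC over the sampled MDP
            next_state, r = env.step(action)
            data.append((state, action, r, next_state))
            state = next_state
        # Refit the feature representations phi_r, phi_f on all data, recompute
        # features for stored transitions, then update the BLR posteriors of w.
        dynamics_model.fit_features(data)
        reward_model.fit_features(data)
        dynamics_model.update_posterior(data)
        reward_model.update_posterior(data)
```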
4.2 NONLINEAR CASE VIA FEATURE REPRESENTATION
We can slightly modify the previous proof to derive the bound in settings that use feature representations. We can transform the state-action pair (s, a) to φ_f(s, a) ∈ R^{d_φ} as the input of the transition model, and transform the newly transitioned state s′ to ψ_f(s′) ∈ R^{d_ψ} as the target; the transition model can then be established with respect to this feature embedding. We further assume d_ψ = O(d_φ), as in Assumption 1 of Yang & Wang (2019). Besides, we assume d_{φ′} = O(d_φ) for the feature representation φ_r(s, a) ∈ R^{d_{φ′}}, so the reward model can also be established with respect to the feature embedding. Following similar steps, we can derive a Bayesian regret of Õ(H^{3/2} d_φ √T).
5 ALGORITHM DESCRIPTION
In this section, we elaborate our proposed algorithm, MPC-PSRL, as shown in Algorithm 1.
5.1 PREDICTIVE MODEL
When modeling the rewards and transitions, we use features extracted from the penultimate layer of fitted neural networks, and perform Bayesian linear regression on the feature vectors to update the posterior distributions.
Feature representation: we first fit neural networks for transitions and rewards, using the same network architecture as Chua et al. (2018). Let x_i denote the state-action pair (s_i, a_i) and y_i denote the target value. Specifically, we use the reward r_i as y_i to fit rewards, and we take the difference between two consecutive states s_{i+1} − s_i as y_i to fit transitions. The penultimate layer of the fitted neural networks is extracted as the feature representation, denoted φ_f and φ_r for transitions and rewards, respectively. Note that in the transition feature embedding, we only use one neural network to extract features of state-action pairs from the penultimate layer to serve as φ, and leave the target states without further feature representation (the general setting is discussed in Section 4.2, where feature representations are used for both inputs and outputs), so the dimension of the target in the transition model d_ψ equals d_s. Thus we have a modified regret bound of Õ(H^{3/2} √(d d_φ T)). We do not find it necessary to further extract feature representations in the target space, as it might introduce additional computational overhead. Although a higher dimensionality of the hidden layers might imply better representation, we find that setting the width of the penultimate layer to d_φ = d_s + d_a suffices in our experiments for both the reward and transition models. Note that how to optimize the dimension of the penultimate layer for more efficient feature representation deserves further exploration.
Bayesian update and posterior sampling: here we describe the Bayesian update of the transition and reward models using the extracted features. Recall that a Gaussian process with a linear kernel is equivalent to Bayesian linear regression. By extracting the penultimate layer as the feature representation φ, the target value y and the representation φ(x) can be seen as linearly related: y = w^T φ(x) + ε, where ε is zero-mean Gaussian noise with variance σ² (which is σ_f² for the transition model and σ_r² for the reward model, as defined in Section 3.1). We choose the prior distribution of the weights w as zero-mean Gaussian with covariance matrix Σ_p; then the posterior distribution of w is also multivariate Gaussian (Rasmussen (2003)): p(w|D) ∼ N( σ^{−2} A^{−1} Φ Y, A^{−1} ), where A = σ^{−2} Φ Φ^T + Σ_p^{−1}, Φ ∈ R^{d×N} is the concatenation of the feature representations {φ(x_i)}_{i=1}^N, and Y ∈ R^N is the concatenation of the target values. At the beginning of each episode, we sample w from the posterior distribution to build the model, collect new data during the whole episode, and update the posterior distribution of w at the end of the episode using all the data collected.
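A minimal numpy sketch of this weight-space update, and of drawing a posterior sample of w at the start of an episode, is given below; the random features stand in for the penultimate-layer representation φ(x), and all sizes and values are illustrative rather than the settings used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d_phi, N, sigma = 8, 200, 0.1
Sigma_p = np.eye(d_phi)                       # prior covariance of w

Phi = rng.normal(size=(d_phi, N))             # features phi(x_i), one column per sample
w_true = rng.normal(size=d_phi)
Y = Phi.T @ w_true + sigma * rng.normal(size=N)

# Posterior of w: N(sigma^-2 A^-1 Phi Y, A^-1) with A = sigma^-2 Phi Phi^T + Sigma_p^-1.
A = Phi @ Phi.T / sigma**2 + np.linalg.inv(Sigma_p)
A_inv = np.linalg.inv(A)
w_mean = A_inv @ Phi @ Y / sigma**2
w_cov = A_inv

# One posterior sample of w, used to define the sampled MDP for the episode.
w_sample = rng.multivariate_normal(w_mean, w_cov)
print(np.linalg.norm(w_mean - w_true))        # should be small with enough data
```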
Besides the posterior distribution of w, the feature representation φ is also updated in each episode with the newly collected data. We adopt a dual-update procedure similar to Riquelme et al. (2018): after the representations for rewards and transitions are updated, the feature vectors of all collected state-action pairs are re-computed. Then we apply the Bayesian update on these feature vectors. See the description of Algorithm 1 for details.
5.2 PLANNING
During interaction with the environment, we use an MPC controller (Camacho & Alba (2013)) for planning. At each time step i, the controller takes the state s_i and an action sequence a_{i:i+τ} = {a_i, a_{i+1}, ..., a_{i+τ}} as input, where τ is the planning horizon. We use the transition and reward models to produce the first action a_i of the sequence of optimized actions argmax_{a_{i:i+τ}} Σ_{t=i}^{i+τ} E[r(s_t, a_t)], where the expected return of a series of actions is approximated by the mean return of several particles propagated with the noise of our sampled reward and transition models. To compute the optimal action sequence, we use CEM (Botev et al. (2013)), which samples actions from a distribution closer to previous action samples with high rewards.
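A minimal sketch of such a CEM planner is given below; the hyperparameter values, bounds, and the deterministic rollouts are illustrative assumptions (in particular, the particle-based propagation of model noise used in the paper is omitted for brevity), and the toy linear-quadratic usage at the end is only there to show the call pattern.

```python
import numpy as np

def cem_plan(state, dynamics, reward, horizon=15, act_dim=2, iters=5,
             pop=400, elites=40, act_low=-1.0, act_high=1.0, rng=None):
    """Return the first action of the best action sequence found by CEM.

    dynamics(s, a) -> next state and reward(s, a) -> scalar are assumed to be
    the sampled transition and reward models for the current episode.
    """
    rng = rng or np.random.default_rng()
    mean = np.zeros((horizon, act_dim))
    std = np.ones((horizon, act_dim))
    for _ in range(iters):
        seqs = np.clip(mean + std * rng.normal(size=(pop, horizon, act_dim)),
                       act_low, act_high)
        returns = np.zeros(pop)
        for j in range(pop):
            s = state
            for t in range(horizon):
                returns[j] += reward(s, seqs[j, t])
                s = dynamics(s, seqs[j, t])
        # Refit the sampling distribution to the highest-return sequences.
        elite = seqs[np.argsort(returns)[-elites:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean[0]

# Toy usage with linear dynamics and a quadratic cost (illustrative only).
A, B = 0.9 * np.eye(2), 0.1 * np.eye(2)
a0 = cem_plan(np.array([1.0, -1.0]),
              dynamics=lambda s, a: A @ s + B @ a,
              reward=lambda s, a: -float(s @ s + 0.01 * a @ a))
print(a0)
```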
6 EXPERIMENTS
We compare our method with the following state-of-the-art model-based and model-free algorithms on benchmark control tasks.
Model-free: Soft Actor Critic (SAC) from Haarnoja et al. (2018) is an off-policy deep actor-critic algorithm that utilizes entropy maximization to guide exploration. Deep Deterministic Policy Gradient (DDPG) from Barth-Maron et al. (2018) is an off-policy algorithm that concurrently learns a Q-function and a policy, with a discount factor to guide exploration.
Model-based: Probabilistic Ensembles with Trajectory Sampling (PETS) from Chua et al. (2018) models the dynamics via an ensemble of probabilistic neural networks to capture epistemic uncertainty for exploration, and uses MPC for action selection, but requires access to oracle rewards for planning. Model-Based Policy Optimization (MBPO) from Janner et al. (2019) uses the same bootstrap ensemble techniques as PETS for modeling, but differs from PETS in performing policy optimization with a large number of short model-generated rollouts, and can cope with environments where no oracle rewards are provided. We do not compare with Gal et al. (2016), which adopts a single Bayesian neural network (BNN) with moment matching, as it is outperformed by PETS, which uses an ensemble of BNNs with trajectory sampling. We also do not compare with GP-based trajectory optimization methods that require true rewards (Deisenroth & Rasmussen, 2011; Kamthe & Deisenroth, 2018), which are not only outperformed by PETS but also computationally expensive and thus limited to very small state-action spaces.
We use environments with various complexity and dimensionality for evaluation. Low-dimensional environments: continuous Cartpole (ds = 4, da = 1, H = 200, with a continuous action space compared to the classic Cartpole, which makes it harder to learn) and Pendulum Swing Up (ds = 3, da = 1, H = 200, a modified version of Pendulum where we limit the start state to make it harder for exploration). Trajectory optimization with oracle rewards in these two environments is easy and there is almost no difference in the performances for all model-based algorithms we compare, so we omit showing these learning curves. Higher dimensional environments: 7-DOF Reacher (ds = 17, da = 7, H = 150) and 7-DOF pusher (ds = 20, da = 7, H = 150) are two more challenging tasks as provided in Chua et al. (2018), where we conduct experiments both with and without true rewards, to compare with all baseline algorithms mentioned.
The learning curves of these algorithms are shown in Figure 1. When oracle rewards are provided in Pusher and Reacher, our method outperforms PETS and MBPO: it converges more quickly with similar performance at convergence in Pusher, while in Reacher it not only learns faster but also performs better at convergence. As we use the same planning method (MPC) as PETS, these results indicate that our model better captures uncertainty, which is beneficial to improving sample efficiency. When exploring in environments where both rewards and transitions are unknown, our method learns significantly faster than previous model-based and model-free methods that do not require oracle rewards. Meanwhile, it matches the performance of SAC at convergence. Moreover, the performance of our algorithm in environments with and without oracle rewards can be similar, with even faster convergence in some cases (see Pusher with and without rewards), indicating that our algorithm excels at exploring both rewards and transitions.
The experimental results indicate that our algorithm better captures model uncertainty and makes better use of this uncertainty through posterior sampling. In our method, by sampling from a Bayesian linear regression on a fitted feature space and optimizing under the same sampled MDP for the whole episode instead of re-sampling at every step, the performance of our algorithm is guaranteed from a Bayesian view, as analysed in Section 4. In contrast, PETS and MBPO use bootstrap ensembles of models with a limited ensemble size to "simulate" a Bayesian model, in which the convergence of the uncertainty is not guaranteed and is highly dependent on the training of the neural network. However, our method has the limitation of using MPC, which might fail in even higher-dimensional tasks, as shown in Janner et al. (2019). Incorporating policy gradient techniques for action selection might further improve the performance, and we leave this for future work.
7 CONCLUSION
In this paper, we derive a novel Bayesian regret bound for the PSRL algorithm in continuous spaces under the assumption that the true rewards and transitions (with or without feature embedding) can be modeled by GPs with linear kernels. While matching the best-known bounds in previous works from a Bayesian view, PSRL also enjoys computational tractability. Moreover, we propose MPC-PSRL for continuous environments, and experiments show that our algorithm outperforms existing model-based and model-free methods with more efficient exploration.
A APPENDIX
A.1 PROOF OF LEMMA 1
Here we provide a proof of Lemma 1.
We first prove the result in R^d with d = 1: p_1(x) ∼ N(µ, σ²), p_2(x) ∼ N(µ′, σ²); without loss of generality, assume µ′ ≥ µ. The probability distributions are symmetric with respect to (µ + µ′)/2, and p_1(x) = p_2(x) at x = (µ + µ′)/2. Thus the integral of the absolute difference between the pdfs of p_1 and p_2 can be simplified as twice the integral over one side:

∫_{−∞}^{∞} |p_2(x) − p_1(x)| dx = (2/√(2πσ²)) ∫_{(µ+µ′)/2}^{∞} [ e^{−(x−µ′)²/(2σ²)} − e^{−(x−µ)²/(2σ²)} ] dx.    (9)

Letting z_1 = x − µ and z_2 = x − µ′, we have:

(2/√(2πσ²)) ∫_{(µ+µ′)/2}^{∞} [ e^{−(x−µ′)²/(2σ²)} − e^{−(x−µ)²/(2σ²)} ] dx
= √(2/(πσ²)) ∫_{(µ−µ′)/2}^{∞} e^{−z_2²/(2σ²)} dz_2 − √(2/(πσ²)) ∫_{(µ′−µ)/2}^{∞} e^{−z_1²/(2σ²)} dz_1
= √(2/(πσ²)) ∫_{(µ−µ′)/2}^{(µ′−µ)/2} e^{−z²/(2σ²)} dz
= 2√(2/(πσ²)) ∫_{0}^{(µ′−µ)/2} e^{−z²/(2σ²)} dz ≤ 2√(2/(πσ²)) ∫_{0}^{(µ′−µ)/2} 1 dz = √(2/(πσ²)) |µ′ − µ|.    (10)

Now we extend the result to R^d (d ≥ 2): p_1(x) ∼ N(µ, σ²I), p_2(x) ∼ N(µ′, σ²I). We can rotate the coordinate system recursively to align the last axis with the vector µ − µ′, such that the coordinates of µ and µ′ can be written as (0, 0, ..., 0, µ̂) and (0, 0, ..., 0, µ̂′) respectively, with |µ̂′ − µ̂| = ||µ − µ′||_2. Without loss of generality, let µ̂ ≥ µ̂′.

Clearly, all points with equal distance to µ̂′ and µ̂ define a hyperplane P: x_d = (µ̂ + µ̂′)/2, on which p_1(x) = p_2(x) for all x ∈ P. More specifically, the probability distributions are symmetric with respect to P. Similar to the analysis in R^1, the integrals over the first d − 1 coordinates factor out (each Gaussian factor integrates to √(2πσ²)), leaving:

∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} |p_1(x) − p_2(x)| dx_1 dx_2 ⋯ dx_d = √(2/(πσ²)) ( ∫_{(µ̂+µ̂′)/2}^{∞} e^{−(x_d−µ̂)²/(2σ²)} dx_d − ∫_{(µ̂+µ̂′)/2}^{∞} e^{−(x_d−µ̂′)²/(2σ²)} dx_d ).    (11)

Letting z_1 = x_d − µ̂ and z_2 = x_d − µ̂′, we have:

∫_{(µ̂+µ̂′)/2}^{∞} e^{−(x_d−µ̂)²/(2σ²)} dx_d − ∫_{(µ̂+µ̂′)/2}^{∞} e^{−(x_d−µ̂′)²/(2σ²)} dx_d
= ∫_{(µ̂′−µ̂)/2}^{∞} e^{−z_1²/(2σ²)} dz_1 − ∫_{(µ̂−µ̂′)/2}^{∞} e^{−z_2²/(2σ²)} dz_2
= ∫_{(µ̂′−µ̂)/2}^{(µ̂−µ̂′)/2} e^{−z²/(2σ²)} dz
= 2 ∫_{0}^{(µ̂−µ̂′)/2} e^{−z²/(2σ²)} dz ≤ 2 ∫_{0}^{(µ̂−µ̂′)/2} 1 dz = |µ̂ − µ̂′|.    (12)

Thus ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} |p_1(x) − p_2(x)| dx_1 dx_2 ⋯ dx_d ≤ √(2/(πσ²)) ||µ − µ′||_2.
A.2 EXPERIMENTAL DETAILS
Here we provide hyperparameters for MBPO:
We also provide hyperparameters for MPC and the neural networks in PETS:
Here are the hyperparameters of our algorithm, which are similar to those of PETS, except for the ensemble size (since we do not use ensembled models):
For SAC and DDPG, we use the open-source code (https://github.com/dongminlee94/deep_rl) for implementation without changing their hyperparameters. We appreciate the authors for sharing the code! | 1. What is the focus of the paper regarding model-based reinforcement learning?
2. What are the strengths of the proposed posterior sampling algorithm?
3. What are the weaknesses of the paper, particularly in assumptions and comparisons with other works?
4. How does the reviewer assess the clarity and novelty of the paper's content?
5. Are there any concerns or questions regarding the computational complexity of the algorithm? | Review | Review
This paper studies model-based reinforcement learning. The authors propose a posterior sampling algorithm and provide a Bayesian regret guarantee. Under the assumption that the reward and transition functions are Gaussian processes with linear kernels, the regret bound is of order H^1.5 d sqrt{T}, where H is the episode length. The polynomial dependence of the regret on H is a nice property. However, this advantage seems to be obtained from the assumption of Gaussian processes with linear kernels. Besides, I have the following comments.
It seems that the result in Theorem 1 is quite straightforward from the results in Osband & Van Roy (2014). Could you justify your contribution in this result?
What is MPC? Is it supposed to be model predictive control? It is not officially introduced in the text.
In terms of computational complexity, could you elaborate more on the computational complexity of each line (component) of the algorithm? For general cases, posterior sampling could also be expensive as there are no closed-form solutions; one may need to use an MCMC method.
ICLR | Title
Efficient Exploration for Model-based Reinforcement Learning with Continuous States and Actions
Abstract
Balancing exploration and exploitation is crucial in reinforcement learning (RL). In this paper, we study the model-based posterior sampling algorithm in continuous state-action spaces theoretically and empirically. First, we improve the regret bound: with the assumption that reward and transition functions can be modeled as Gaussian Processes with linear kernels, we develop a Bayesian regret bound of Õ(H^{3/2}d√T), where H is the episode length, d is the dimension of the state-action space, and T indicates the total time steps. Our bound can be extended to nonlinear cases as well: using linear kernels on the feature representation φ, the Bayesian regret bound becomes Õ(H^{3/2}dφ√T), where dφ is the dimension of the representation space. Moreover, we present MPC-PSRL, a model-based posterior sampling algorithm with model predictive control for action selection. To capture the uncertainty in models and realize posterior sampling, we use Bayesian linear regression on the penultimate layer (the feature representation layer φ) of neural networks. Empirical results show that our algorithm achieves the best sample efficiency in benchmark control tasks compared to prior model-based algorithms, and matches the asymptotic performance of model-free algorithms.
1 INTRODUCTION
In reinforcement learning (RL), an agent interacts with an unknown environment which is typically modeled as a Markov Decision Process (MDP). Efficient exploration has been one of the main challenges in RL: the agent is expected to balance between exploring unseen state-action pairs to gain more knowledge about the environment, and exploiting existing knowledge to optimize rewards in the presence of known data.
To achieve efficient exploration, Bayesian reinforcement learning is proposed, where the MDP itself is treated as a random variable with a prior distribution. This prior distribution of the MDP provides an initial uncertainty estimate of the environment, which generally contains distributions of transition dynamics and reward functions. The epistemic uncertainty (subjective uncertainty due to limited data) in reinforcement learning can be captured by posterior distributions given the data collected by the agent.
Posterior sampling reinforcement learning (PSRL), motivated by Thompson sampling in bandit problems (Thompson, 1933), serves as a provably efficient algorithm under Bayesian settings. In PSRL, the agent maintains a posterior distribution for the MDP and follows an optimal policy with respect to a single MDP sampled from the posterior distribution for interaction in each episode. Appealing results of PSRL in tabular RL were presented by both model-based (Osband et al., 2013; Osband & Van Roy, 2017) and model-free approaches (Osband et al., 2019) in terms of the Bayesian regret. For H-horizon episodic RL, PSRL was proved to achieve a regret bound of Õ(H√(SAT)), where S and A denote the number of states and actions, respectively. However, in continuous state-action spaces S and A can be infinite, hence the above results do not apply.
Although PSRL in continuous spaces has also been studied in episodic RL, existing results either provide no guarantee or suffer from an exponential order of H . In this paper, we achieve the first Bayesian regret bound for posterior sampling algorithms that is near optimal in T (i.e. √ T ) and
polynomial in the episode length H for continuous state-action spaces. We will explain the limitations of previous works in Section 1.1, then summarize our approach and contributions in Section 1.2.
1.1 LIMITATIONS OF PREVIOUS BAYESIAN REGRETS IN CONTINUOUS SPACES
The exponential order of H: In model-based settings, Osband & Van Roy (2014) derive a regret bound of Õ(σ_R √(d_K(R) d_E(R) T) + E[L^*] σ_P √(d_K(P) d_E(P))), where L^* is a global Lipschitz constant for the future value function defined in their Eq. (3). However, L^* depends on H: the difference between input states propagates over H steps, which results in a term dependent on H in the value function. The authors do not mention this dependency, so there is no clear dependency on H in their regret. Moreover, they use the Lipschitz constant of the underlying value function as an upper bound on L^* in the corollaries, which yields an exponential order in H. Take their Corollary 2 on linear quadratic systems as an example: the regret bound is Õ(σ C λ_1 n^2 √T), where λ_1 is the largest eigenvalue of the matrix Q in the optimal value function V_1(s) = s^T Q s.¹ However, the largest eigenvalue of Q is actually exponential in H.² Even if we change the reward function from quadratic to linear, the Lipschitz constant of the optimal value function is still exponential in H.³ Chowdhury & Gopalan (2019) maintain this Lipschitz assumption, so E[L^*] appears in their regret with no clear dependency on H; in their Corollary 2 on LQR, they follow the same steps as Osband & Van Roy (2014) and still retain a term with λ_1, which is actually exponential in H as discussed. Osband & Van Roy (2014) mention that system noise helps to smooth future values, but they do not exploit this, even though the noise is assumed to be subgaussian; they directly use the Lipschitz continuity of the underlying function in the analysis of LQR and thus cannot avoid the exponential term in H. Chowdhury & Gopalan (2019) do not explore how the system noise can improve the theoretical bound either. In model-free settings, Azizzadenesheli et al. (2018) develop a regret bound of Õ(d_φ √T) using a linear function approximator in the Q-network, where d_φ is the dimension of the feature representation vector of the state-action space, but their bound is still exponential in H, as mentioned in their paper.
High dimensionality: The eluder dimension of neural networks in Osband & Van Roy (2014) can be infinite, and the information gain (Srinivas et al., 2012) used in Chowdhury & Gopalan (2019) yields an exponential order of the state-action space dimension d if nonlinear kernels, such as SE kernels, are used. However, linear kernels can only model linear functions, so the representation power is highly restricted if a polynomial order of d is desired.
1.2 OUR APPROACH AND MAIN CONTRIBUTIONS
To further improve the regret bound for PSRL in continuous spaces, especially with explicit dependency on H, we study model-based posterior sampling algorithms in episodic RL. We assume that rewards and transitions can be modeled as Gaussian Processes with linear kernels, and extend the assumption to non-linear settings utilizing features extracted by neural networks. For the linear case, we develop a Bayesian regret bound of Õ(H^{3/2}d√T). Using the feature embedding technique mentioned in Yang & Wang (2019), we derive a bound of Õ(H^{3/2}dφ√T). Our Bayesian regret is the best-known Bayesian regret for posterior sampling algorithms in continuous state-action spaces, and it also matches the best-known frequentist regret (Zanette et al. (2020); discussed in Section 2). Explicitly dependent on d, H, T, our result achieves a significant improvement in terms of the Bayesian regret of PSRL algorithms compared to previous works:
1. We significantly improve the order of H to polynomial: In our analysis, we use the property of subgaussian noise, which is already assumed in Osband & Van Roy (2014) and Chowdhury & Gopalan (2019), to develop a bound with a clear polynomial dependency on H, without assuming the Lipschitz continuity of the underlying value function. More specifically, we prove Lemma 1 and use it to develop a clear dependency on H, which lets us avoid handling the Lipschitz continuity of the underlying value function.

¹V_1 denotes the value function counting from step 1 to H within an episode, s is the initial state, the reward at the i-th step is r_i = s_i^T P s_i + a_i^T R a_i + ε_{R,i}, and the state at the (i+1)-th step is s_{i+1} = A s_i + B a_i + ε_{P,i}, i ∈ [H].

²Recall that by the Bellman equation, V_i(s_i) = min_{a_i} s_i^T P s_i + a_i^T R a_i + ε_{R,i} + V_{i+1}(A s_i + B a_i + ε_{P,i}), with V_{H+1}(s) = 0. Thus V_1(s) contains a term (A^{H−1} s)^T P (A^{H−1} s), and the eigenvalues of the matrix (A^{H−1})^T P A^{H−1} are exponential in H.

³For example, if r_i = s_i^T P + a_i^T R + ε_{R,i}, there would still be a term (A^{H−1} s)^T P in V_1(s).
2. Lower dimensionality compared to Osband & Van Roy (2014) and Chowdhury & Gopalan (2019): We first derive results for linear kernels, and increase the representation power of the linear model by building a Bayesian linear regression model on the feature representation space instead of the original state-action space. As a result, we can use the result for linear kernels to derive a bound linear in the feature dimension. The feature dimension, which in practice is the dimension of the last hidden layer of the neural networks required for learning, is much lower than the exponential of the input dimension, so we avoid the exponential order of the dimension that arises from the use of nonlinear kernels in Chowdhury & Gopalan (2019).
3. Fewer assumptions and a different proof strategy compared to Chowdhury & Gopalan (2019): Although we also use kernelized MDPs like Chowdhury & Gopalan (2019), we omit their Assumption A1 (Lipschitz assumption) and A2 (regularity assumption), and only use A3 (subgaussian noise). We avoid A1 since it can be derived from our Lemma 1. Moreover, we directly analyze the regret bound of PSRL using the fact that the sampled MDP and the real unknown MDP share the same distribution conditioned on history. In contrast, Chowdhury & Gopalan (2019) first analyze UCRL (upper confidence bound in RL) with the extra assumption A2, then transfer the result to PSRL.
Empirically, we implement PSRL using Bayesian linear regression (BLR) on the penultimate layer (for feature representation) of neural networks when fitting transition and reward models. We use model predictive control (MPC; Camacho & Alba (2013)) to optimize the policy under the sampled models in each episode as an approximate solution of the sampled MDP, as described in Section 5. Experiments show that our algorithm achieves more efficient exploration compared with previous model-based algorithms on benchmark control tasks.
2 RELATED WORK ON FREQUENTIST REGRETS
Besides the aforementioned works on Bayesian regret bounds, the majority of papers in efficient RL take the non-Bayesian perspective and develop frequentist regret bounds, where the regret for any MDP M∗ ∈ M is bounded and M∗ ∈ M holds with high probability. Frequentist regret bounds can be expressed in the Bayesian view: for a given confidence set M, a frequentist regret bound implies an identical Bayesian regret bound for any prior distribution with support on M. Note that frequentist regret is extensively studied in tabular RL (see Jaksch et al. (2010), Azar et al. (2017), and Jin et al. (2018) as examples), among which the best bound for episodic settings is Õ(H√(SAT)).
There is also a line of work that develops frequentist bounds with feature representations. Most recently, MatrixRL proposed by Yang & Wang (2019) uses a low-dimensional representation and achieves a regret bound of Õ(H² d_φ √T), which is the best-known frequentist bound in model-based settings. While our method is also model-based, we achieve a tighter regret bound when compared in the Bayesian view. In model-free settings, Jin et al. (2020) developed a bound of Õ(H^{3/2} d_φ^{3/2} √T). Zanette et al. (2020) further improved the regret to Õ(H^{3/2} d_φ √T) with a proposed algorithm called ELEANOR, which achieves the best-known frequentist bound in model-free settings; they showed it is unimprovable using a lower bound established in the bandit literature. Although our regret is developed in the model-based setting, it matches their bound with the same order of H, d_φ, and T in the Bayesian view. Moreover, their algorithm involves optimization over all MDPs in the confidence set, and thus can be computationally prohibitive. Our method is computationally tractable, as it is much easier to optimize a single sampled MDP, while matching their regret bound in the Bayesian view.
3 PRELIMINARIES
3.1 PROBLEM FORMULATION
We model an episodic finite-horizon Markov Decision Process (MDP) M as {S, A, R^M, P^M, H, σ_r, σ_f, R_max, ρ}, where S ⊂ R^{d_s} and A ⊂ R^{d_a} denote the state and action spaces, respectively. Each episode of length H has an initial state distribution ρ. At time step i ∈ [1, H] within an episode, the agent observes s_i ∈ S, selects a_i ∈ A, receives a noisy reward r_i ∼ R^M(s_i, a_i), and transitions to a noisy new state s_{i+1} ∼ P^M(s_i, a_i). More specifically, r(s_i, a_i) = r̄^M(s_i, a_i) + ε_r and s_{i+1} = f^M(s_i, a_i) + ε_f, where ε_r ∼ N(0, σ_r²) and ε_f ∼ N(0, σ_f² I_{d_s}). The variances σ_r² and σ_f² are fixed to control the noise level. Without loss of generality, we assume the expected reward the agent receives at a single step is bounded: |r̄^M(s, a)| ≤ R_max, ∀s ∈ S, a ∈ A. Let µ : S → A be a deterministic policy. We define the value function for state s at time step i under policy µ as V^M_{µ,i}(s) = E[Σ_{j=i}^H r̄^M(s_j, a_j) | s_i = s], where s_{j+1} ∼ P^M(s_j, a_j) and a_j = µ(s_j). With the bounded expected reward, we have |V(s)| ≤ HR_max, ∀s. We use M∗ to indicate the real unknown MDP, which includes R∗ and P∗, and M∗ itself is treated as a random variable. Thus, we can treat the real noiseless reward function r̄∗ and transition function f∗ as random processes as well. In the posterior sampling algorithm π_PS, M^k is a random sample from the posterior distribution of the real unknown MDP M∗ in the kth episode, which includes the posterior samples R^k and P^k, given the history prior to the kth episode: H_k := {s_{1,1}, a_{1,1}, r_{1,1}, ..., s_{k-1,H}, a_{k-1,H}, r_{k-1,H}}, where s_{k,i}, a_{k,i}, and r_{k,i} indicate the state, action, and reward at time step i in episode k. We define the optimal policy under M as µ^M ∈ argmax_µ V^M_{µ,i}(s) for all s ∈ S and i ∈ [H]. In particular, µ∗ indicates the optimal policy under M∗ and µ^k represents the optimal policy under M^k. Let ∆_k denote the regret over the kth episode:
$$\Delta_k = \int \rho(s_1)\big(V^{M^*}_{\mu^*,1}(s_1) - V^{M^*}_{\mu^k,1}(s_1)\big)\,ds_1 \qquad (1)$$
Then we can express the regret of πps up to time step T as:
$$\mathrm{Regret}(T, \pi_{ps}, M^*) := \sum_{k=1}^{\lceil T/H \rceil} \Delta_k, \qquad (2)$$
Let BayesRegret(T, π_ps, φ) denote the Bayesian regret of π_ps as defined in Osband & Van Roy (2017), where φ is the prior distribution of M∗:
$$\mathrm{BayesRegret}(T, \pi_{ps}, \phi) = \mathbb{E}\big[\mathrm{Regret}(T, \pi_{ps}, M^*)\big], \qquad (3)$$
where the expectation is taken over the prior draw M∗ ∼ φ and the randomness of the interaction.
3.2 ASSUMPTIONS
Generally, we consider modeling an unknown target function g : R^d → R. We are given a set of noisy samples y = [y_1, ..., y_T]^T at points X = [x_1, ..., x_T]^T, X ⊂ D, where D is compact and convex, and y_i = g(x_i) + ε_i with ε_i ∼ N(0, σ²) i.i.d. Gaussian noise, ∀i ∈ {1, ..., T}. We model g as a sample from a Gaussian Process GP(µ(x), K(x, x′)), specified by the mean function µ(x) = E[g(x)] and the covariance (kernel) function K(x, x′) = E[(g(x) − µ(x))(g(x′) − µ(x′))]. Let the prior distribution (without any data) be GP(0, K(x, x′)). Then the posterior distribution over g given X and y is also a GP with mean µ_T(x), covariance K_T(x, x′), and variance σ_T²(x):
$$\mu_T(x) = K(x, X)\big(K(X, X) + \sigma^2 I\big)^{-1} y, \quad K_T(x, x') = K(x, x') - K(X, x)^T \big(K(X, X) + \sigma^2 I\big)^{-1} K(X, x'), \quad \sigma_T^2(x) = K_T(x, x),$$
where K(X, x) = [K(x_1, x), ..., K(x_T, x)]^T and K(X, X) = [K(x_i, x_j)]_{1≤i≤T, 1≤j≤T}.
We model our reward function r̄^M as a Gaussian Process with noise σ_r². For the transition model, we treat each dimension independently: each f_i(s, a), i = 1, ..., d_s, is modeled independently as above, with the same noise level σ_f² in each dimension. This corresponds to our formulation in the RL setting. Since the posterior covariance matrix depends only on the inputs rather than the target values, the distributions of the f_i(s, a) share the same covariance matrix and differ only in the mean function.
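For concreteness, the posterior update above can be computed directly; below is a minimal NumPy sketch of the GP posterior with a linear kernel K(x, x′) = xᵀΣ_p x′ (the helper gp_posterior and the toy data are our own illustration, not part of the paper):

import numpy as np

def gp_posterior(K, X, y, x_star, sigma2):
    """Posterior mean/variance of g(x_star): mu_T(x) = K(x, X)(K(X, X) + sigma^2 I)^{-1} y,
    sigma_T^2(x) = K(x, x) - K(X, x)^T (K(X, X) + sigma^2 I)^{-1} K(X, x)."""
    Kxx = K(X, X) + sigma2 * np.eye(len(X))            # Gram matrix plus observation noise
    kx = K(X, x_star[None, :]).ravel()                 # cross-covariances K(X, x*)
    mean = kx @ np.linalg.solve(Kxx, y)
    var = K(x_star[None, :], x_star[None, :])[0, 0] - kx @ np.linalg.solve(Kxx, kx)
    return mean, var

# Linear kernel K(x, x') = x^T Sigma_p x', i.e. Bayesian linear regression in function space.
d = 3
Sigma_p = np.eye(d)
linear_kernel = lambda A, B: A @ Sigma_p @ B.T

rng = np.random.default_rng(0)
w_true = rng.normal(size=d)
X = rng.normal(size=(50, d))
y = X @ w_true + 0.1 * rng.normal(size=50)             # noisy targets with sigma = 0.1

mu, var = gp_posterior(linear_kernel, X, y, X[0], sigma2=0.1 ** 2)
print(mu, var, X[0] @ w_true)                          # posterior mean is close to the noiseless value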
4 BAYESIAN REGRET ANALYSIS
4.1 LINEAR CASE
Theorem 1 In the RL problem formulated in Section 3.1, under the assumptions of Section 3.2 with linear kernels⁴, we have BayesRegret(T, π_ps, φ) = Õ(H^{3/2} d √T), where d is the dimension of the state-action space, H is the episode length, and T is the number of time steps elapsed.
⁴A GP with a linear kernel corresponds to Bayesian linear regression f(x) = w^T x, where the prior distribution of the weights is w ∼ N(0, Σ_p).
Proof The regret in episode k can be rearranged as:
$$\Delta_k = \int \rho(s_1)\Big[\big(V^{M^*}_{\mu^*,1}(s_1) - V^{M^k}_{\mu^k,1}(s_1)\big) + \big(V^{M^k}_{\mu^k,1}(s_1) - V^{M^*}_{\mu^k,1}(s_1)\big)\Big]\,ds_1 \qquad (4)$$
Note that, conditioned upon the history H_k, for any k, M^k and M∗ are identically distributed. Osband & Van Roy (2014) showed that V^{M∗}_{µ∗,1} − V^{M^k}_{µ^k,1} is zero in expectation, and that only the second part of the regret decomposition needs to be bounded when deriving the Bayesian regret of PSRL. Thus we can focus on the policy µ^k, the sampled M^k, and the real environment data generated by M∗. For clarity, the value function V^{M^k}_{µ^k,1} is abbreviated to V^k_{k,1} and V^{M∗}_{µ^k,1} to V^∗_{k,1}. It suffices to derive bounds for any initial state s_1, as the regret bound still holds after integration over the initial distribution ρ(s_1).
We can rewrite the regret via concentration with the Bellman operator (see Section 5.1 in Osband et al. (2013)):
$$\mathbb{E}[\tilde\Delta_k \mid \mathcal{H}_k] := \mathbb{E}[V^k_{k,1}(s_1) - V^*_{k,1}(s_1) \mid \mathcal{H}_k] = \mathbb{E}\Big[\bar r^k(s_1,a_1) - \bar r^*(s_1,a_1) + \int P^k(s'|s_1,a_1)\,V^k_{k,2}(s')\,ds' - \int P^*(s'|s_1,a_1)\,V^*_{k,2}(s')\,ds' \,\Big|\, \mathcal{H}_k\Big]$$
$$= \mathbb{E}\Big[\sum_{i=1}^H \big(\bar r^k(s_i,a_i) - \bar r^*(s_i,a_i)\big) + \sum_{i=1}^H \int \big(P^k(s'|s_i,a_i) - P^*(s'|s_i,a_i)\big)\,V^k_{k,i+1}(s')\,ds' \,\Big|\, \mathcal{H}_k\Big] = \mathbb{E}[\tilde\Delta_k(r) + \tilde\Delta_k(f) \mid \mathcal{H}_k] \qquad (5)$$
where a_i = µ^k(s_i), s_{i+1} ∼ P∗(·|s_i, a_i), $\tilde\Delta_k(r) = \sum_{i=1}^H \bar r^k(s_i,a_i) - \bar r^*(s_i,a_i)$, and $\tilde\Delta_k(f) = \sum_{i=1}^H \int (P^k(s'|s_i,a_i) - P^*(s'|s_i,a_i))\,V^k_{k,i+1}(s')\,ds'$. Thus, (s_i, a_i) here is the state-action pair that the agent encounters in the kth episode while using µ^k to interact with the real MDP M∗. We define V_{k,H+1} = 0 for consistency. Note that we cannot treat s_i and a_i as deterministic and simply take the expectation over the random reward and transition functions. Instead, we bound the difference using concentration properties of the reward and transition functions modeled as Gaussian Processes (which hold for any state-action pair), and then derive a bound on this expectation. For all i, we have
$$\int \big(P^k(s'|s_i,a_i) - P^*(s'|s_i,a_i)\big)\,V^k_{k,i+1}(s')\,ds' \le \max_s |V^k_{k,i+1}(s)| \int |P^k(s'|s_i,a_i) - P^*(s'|s_i,a_i)|\,ds' \le HR_{\max} \int |P^k(s'|s_i,a_i) - P^*(s'|s_i,a_i)|\,ds'.$$
Now we present a lemma which enables us to derive a regret bound with explicit dependency on the episode length H .
Lemma 1 For two multivariate Gaussian distributions N(µ, σ²I) and N(µ′, σ²I) with probability density functions p₁(x) and p₂(x) respectively, x ∈ R^d,
$$\int |p_1(x) - p_2(x)|\,dx \le \sqrt{\frac{2}{\pi\sigma^2}}\,\|\mu - \mu'\|_2.$$
The proof is in Appendix A.1. Clearly, this result can also be extended to sub-Gaussian noises.
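As a quick sanity check of Lemma 1 (our own illustration, not part of the original analysis), the L1 distance between two one-dimensional Gaussians with a shared variance can be evaluated numerically and compared against the bound √(2/(πσ²))·|µ − µ′|:

import numpy as np
from scipy.integrate import quad

def gauss_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

sigma = 0.7
for mu, mu_p in [(0.0, 0.1), (0.0, 0.5), (-1.0, 2.0)]:
    l1, _ = quad(lambda x: abs(gauss_pdf(x, mu, sigma) - gauss_pdf(x, mu_p, sigma)), -30, 30)
    bound = np.sqrt(2 / (np.pi * sigma ** 2)) * abs(mu - mu_p)
    print(f"|mu - mu'| = {abs(mu - mu_p):.2f}   L1 = {l1:.4f} <= bound = {bound:.4f}")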
Recall that P^k(s′|s_i, a_i) = N(f^k(s_i, a_i), σ_f² I) and P∗(s′|s_i, a_i) = N(f∗(s_i, a_i), σ_f² I). By Lemma 1 we have
$$\int |P^k(s'|s_i,a_i) - P^*(s'|s_i,a_i)|\,ds' \le \sqrt{\frac{2}{\pi\sigma_f^2}}\,\|f^k(s_i,a_i) - f^*(s_i,a_i)\|_2 \qquad (6)$$
Lemma 2 (Rigollet & Hütter, 2015) Let X₁, ..., X_N be N sub-Gaussian random variables with variance σ² (not required to be independent). Then for any t > 0,
$$\mathbb{P}\Big(\max_{1\le i\le N} |X_i| > t\Big) \le 2N e^{-\frac{t^2}{2\sigma^2}}.$$
Given the history H_k, let f̄^k(s, a) denote the posterior mean of f^k(s, a) in episode k, and let σ_k²(s, a) denote the posterior variance of f^k in each dimension. Note that f∗ and f^k share the same variance in each dimension given the history H_k, as described in Section 3. Considering all dimensions of the state space, by Lemma 2 we have that with probability at least 1 − δ, $\max_{1\le i\le d_s} |f^k_i(s,a) - \bar f^k_i(s,a)| \le \sqrt{2\sigma_k^2(s,a)\log\frac{2d_s}{\delta}}$. We can also upper-bound the norm of the state difference, $\|f^k(s,a) - \bar f^k(s,a)\|_2 \le \sqrt{d_s}\,\max_{1\le i\le d_s} |f^k_i(s,a) - \bar f^k_i(s,a)|$, and the same holds for $\|f^*(s,a) - \bar f^k(s,a)\|_2$ since f∗ and f^k share the same posterior distribution. By the union bound, we have that with probability at least 1 − 2δ, $\|f^k(s,a) - f^*(s,a)\|_2 \le 2\sqrt{2d_s\sigma_k^2(s,a)\log\frac{2d_s}{\delta}}$.
Then we look at the sum of the differences over the horizon H, without requiring the variables in the sum to be independent:
$$\mathbb{P}\Big(\sum_{i=1}^H \|f^k(s_i,a_i) - f^*(s_i,a_i)\|_2 > \sum_{i=1}^H 2\sqrt{2d_s\sigma_k^2(s_i,a_i)\log\tfrac{2d_s}{\delta}}\Big) \le \mathbb{P}\Big(\bigcup_{i=1}^H \Big\{\|f^k(s_i,a_i) - f^*(s_i,a_i)\|_2 > 2\sqrt{2d_s\sigma_k^2(s_i,a_i)\log\tfrac{2d_s}{\delta}}\Big\}\Big) \le \sum_{i=1}^H \mathbb{P}\Big(\|f^k(s_i,a_i) - f^*(s_i,a_i)\|_2 > 2\sqrt{2d_s\sigma_k^2(s_i,a_i)\log\tfrac{2d_s}{\delta}}\Big) \qquad (7)$$
Thus, with probability at least 1 − 2Hδ, we have $\sum_{i=1}^H \|f^k(s_i,a_i) - f^*(s_i,a_i)\|_2 \le \sum_{i=1}^H 2\sqrt{2d_s\sigma_k^2(s_i,a_i)\log\frac{2d_s}{\delta}}$. Letting δ′ = 2Hδ, we have that with probability 1 − δ, $\sum_{i=1}^H \|f^k(s_i,a_i) - f^*(s_i,a_i)\|_2 \le \sum_{i=1}^H 2\sqrt{2d_s\sigma_k^2(s_i,a_i)\log\frac{4Hd_s}{\delta}} \le 2H\sqrt{2d_s\sigma_k^2(s_{k_{\max}},a_{k_{\max}})\log\frac{4Hd_s}{\delta}}$, where the index $k_{\max} = \arg\max_i \sigma_k(s_i,a_i)$, i = 1, ..., H, in episode k. Here, since the posterior distribution is only updated every H steps, we have to use the data point with the maximum variance in each episode to bound the result. Similarly, using the union bound over the ⌈T/H⌉ episodes, and letting $C = \sqrt{\frac{2}{\pi\sigma_f^2}}$, we have that with probability at least 1 − δ,
$$\sum_{k=1}^{\lceil T/H\rceil} [\tilde\Delta_k(f) \mid \mathcal{H}_k] \le \sum_{k=1}^{\lceil T/H\rceil}\sum_{i=1}^H 2CHR_{\max}\,\|f^k(s_i,a_i) - f^*(s_i,a_i)\|_2 \le \sum_{k=1}^{\lceil T/H\rceil} 4CH^2R_{\max}\sqrt{2d_s\sigma_k^2(s_{k_{\max}},a_{k_{\max}})\log\tfrac{4Td_s}{\delta}}.$$
In each episode k, let σ′_k²(s, a) denote the posterior variance given only the subset of data points {(s_{1_max}, a_{1_max}), ..., (s_{(k−1)_max}, a_{(k−1)_max})}, where each element has the maximum variance in the corresponding episode. By Eq. (6) in Williams & Vivarelli (2000), the posterior variance decreases as the number of data points grows. Hence ∀(s, a), σ_k²(s, a) ≤ σ′_k²(s, a). By Theorem 5 in Srinivas et al. (2012), which provides a bound on the information gain, and Lemma 2 in Russo & Van Roy (2014), which bounds the sum of variances by the information gain, we have that $\sum_{k=1}^{\lceil T/H\rceil} \sigma'^2_k(s_{k_{\max}}, a_{k_{\max}}) = O\big((d_s + d_a)\log\lceil T/H\rceil\big)$ for linear kernels with bounded variances. Note that the bounded-variance property for linear kernels only requires that the range of all state-action pairs actually encountered in M∗ does not expand to infinity as T grows, which holds in general episodic MDPs.
Thus, with probability 1 − δ, and letting δ = 1/T,
$$\sum_{k=1}^{\lceil T/H\rceil} [\tilde\Delta_k(f) \mid \mathcal{H}_k] \le \sum_{k=1}^{\lceil T/H\rceil} 4CH^2R_{\max}\sqrt{2d_s\sigma_k^2(s_{k_{\max}},a_{k_{\max}})\log\tfrac{4Td_s}{\delta}} \le \sum_{k=1}^{\lceil T/H\rceil} 8CH^2R_{\max}\sqrt{d_s\sigma'^2_k(s_{k_{\max}},a_{k_{\max}})\log(2Td_s)}$$
$$\le 8CH^2R_{\max}\sqrt{\sum_{k=1}^{\lceil T/H\rceil}\sigma'^2_k(s_{k_{\max}},a_{k_{\max}})}\;\sqrt{\lceil T/H\rceil}\;\sqrt{d_s\log(2Td_s)} = 8CH^{3/2}R_{\max}\sqrt{T}\sqrt{d_s\log(2Td_s)}\cdot\sqrt{O\big((d_s+d_a)\log\lceil T/H\rceil\big)} = \tilde O\big((d_s+d_a)H^{3/2}\sqrt{T}\big) \qquad (8)$$
where Õ ignores logarithmic factors.
Therefore, $\mathbb{E}\big[\sum_{k=1}^{\lceil T/H\rceil}\tilde\Delta_k(f) \mid \mathcal{H}_k\big] \le (1 - \tfrac{1}{T})\,\tilde O\big((d_s+d_a)H^{3/2}\sqrt{T}\big) + \tfrac{1}{T}\cdot 2HR_{\max}\cdot\lceil T/H\rceil = \tilde O(H^{3/2} d\sqrt{T})$, where 2HR_max is the upper bound on the difference of value functions, and d = d_s + d_a. By a similar derivation, $\mathbb{E}\big[\sum_{k=1}^{\lceil T/H\rceil}\tilde\Delta_k(r) \mid \mathcal{H}_k\big] = \tilde O(\sqrt{dHT})$. Finally, through the tower property we have BayesRegret(T, π_ps, φ) = Õ(H^{3/2} d √T). ∎
Algorithm 1 MPC-PSRL
  Initialize data D with random actions for one episode
  repeat
    Sample a transition model and a cost model at the beginning of each episode
    for i = 1 to H steps do
      Obtain the action using MPC with planning horizon τ: a_i ∈ argmax_{a_{i:i+τ}} Σ_{t=i}^{i+τ} E[r(s_t, a_t)]
      D = D ∪ {(s_i, a_i, r_i, s_{i+1})}
    end for
    Train cost and dynamics representations φ_r and φ_f using the data in D
    Update φ_r(s, a), φ_f(s, a) for all (s, a) collected
    Perform the posterior update of w_r and w_f in the cost and dynamics models using the updated representations φ_r(s, a), φ_f(s, a) for all (s, a) collected
  until convergence
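To make Algorithm 1 concrete, the following is a self-contained toy instantiation on a hypothetical 1-D linear-Gaussian system (our own illustration: hand-designed features stand in for the learned penultimate layer, random shooting replaces CEM, and rollouts of the sampled model are deterministic):

import numpy as np

rng = np.random.default_rng(0)
H, sigma_f, sigma_r = 20, 0.05, 0.05
A_true, B_true = 0.9, 0.5                              # toy 1-D dynamics: s' = A s + B a + eps_f

def env_step(s, a):
    s_next = A_true * s + B_true * a + sigma_f * rng.normal()
    r = -s ** 2 - 0.1 * a ** 2 + sigma_r * rng.normal()
    return s_next, r

phi_f = lambda s, a: np.array([s, a])                  # hand-designed features (stand-in for the NN layer)
phi_r = lambda s, a: np.array([s ** 2, a ** 2])

def blr_posterior(Phi, y, sigma2, prior_var=10.0):
    A = Phi.T @ Phi / sigma2 + np.eye(Phi.shape[1]) / prior_var
    cov = np.linalg.inv(A)
    return cov @ Phi.T @ y / sigma2, cov

def mpc_action(s, w_f, w_r, horizon=10, n_seq=200):
    seqs = rng.uniform(-1, 1, size=(n_seq, horizon))   # random shooting instead of CEM, for brevity
    returns = np.zeros(n_seq)
    for j, seq in enumerate(seqs):
        sj = s
        for a in seq:
            returns[j] += w_r @ phi_r(sj, a)
            sj = w_f @ phi_f(sj, a)                    # deterministic rollout of the sampled model
    return seqs[np.argmax(returns), 0]

D = []
s = rng.normal()
for _ in range(H):                                      # initial episode with random actions
    a = rng.uniform(-1, 1)
    s_next, r = env_step(s, a)
    D.append((s, a, r, s_next))
    s = s_next

for ep in range(5):                                     # MPC-PSRL episodes
    Pf = np.array([phi_f(si, ai) for si, ai, ri, sn in D])
    Pr = np.array([phi_r(si, ai) for si, ai, ri, sn in D])
    yf = np.array([sn for si, ai, ri, sn in D])
    yr = np.array([ri for si, ai, ri, sn in D])
    w_f = rng.multivariate_normal(*blr_posterior(Pf, yf, sigma_f ** 2))  # one sampled transition model
    w_r = rng.multivariate_normal(*blr_posterior(Pr, yr, sigma_r ** 2))  # one sampled reward model
    s, ep_return = rng.normal(), 0.0
    for _ in range(H):
        a = mpc_action(s, w_f, w_r)
        s_next, r = env_step(s, a)
        D.append((s, a, r, s_next))
        s, ep_return = s_next, ep_return + r
    print(f"episode {ep}: return {ep_return:.2f}")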
4.2 NONLINEAR CASE VIA FEATURE REPRESENTATION
We can slightly modify the previous proof to derive the bound in settings that use feature representations. We transform the state-action pair (s, a) into φ_f(s, a) ∈ R^{d_φ} as the input of the transition model, and transform the newly transitioned state s′ into ψ_f(s′) ∈ R^{d_ψ} as the target; the transition model can then be established with respect to this feature embedding. We further assume d_ψ = O(d_φ), as in Assumption 1 of Yang & Wang (2019). Besides, we assume d_φ′ = O(d_φ) for the feature representation φ_r(s, a) ∈ R^{d_φ′}, so the reward model can also be established with respect to the feature embedding. Following similar steps, we can derive a Bayesian regret of Õ(H^{3/2} d_φ √T).
5 ALGORITHM DESCRIPTION
In this section, we describe our proposed algorithm, MPC-PSRL, shown in Algorithm 1.
5.1 PREDICTIVE MODEL
When modeling the rewards and transitions, we use features extracted from the penultimate layer of fitted neural networks, and perform Bayesian linear regression on the feature vectors to update the posterior distributions.
Feature representation: we first fit neural networks for transitions and rewards, using the same network architecture as Chua et al. (2018). Let x_i denote the state-action pair (s_i, a_i) and y_i denote the target value. Specifically, we use the reward r_i as y_i to fit rewards, and we take the difference between two consecutive states, s_{i+1} − s_i, as y_i to fit transitions. The penultimate layer of the fitted neural networks is extracted as the feature representation, denoted φ_f and φ_r for transitions and rewards, respectively. Note that in the transition feature embedding, we only use one neural network to extract features of state-action pairs from the penultimate layer to serve as φ, and leave the target states without further feature representation (the general setting is discussed in Section 4.2, where feature representations are used for both inputs and outputs), so the dimension of the target in the transition model, d_ψ, equals d_s. Thus we have a modified regret bound of Õ(H^{3/2}√(d·d_φ·T)). We do not find it necessary to further extract feature representations in the target space, as this might introduce additional computational overhead. Although a higher dimensionality of the hidden layers might imply better representation, we find that simply setting the width of the penultimate layer to d_φ = d_s + d_a suffices in our experiments for both the reward and transition models. How to optimize the dimension of the penultimate layer for more efficient feature representation deserves further exploration.
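As a sketch of what extracting the penultimate layer means in code (an untrained toy network for illustration only; in the actual method the network is first fit to the data with the architecture of Chua et al. (2018) before its penultimate activations are reused):

import numpy as np

class TwoLayerMLP:
    """Tiny MLP whose last hidden layer serves as the feature map phi(s, a)."""
    def __init__(self, d_in, d_hidden, d_phi, d_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.3, size=(d_in, d_hidden))
        self.W2 = rng.normal(scale=0.3, size=(d_hidden, d_phi))
        self.W3 = rng.normal(scale=0.3, size=(d_phi, d_out))  # output head, bypassed after fitting

    def features(self, x):
        h = np.tanh(x @ self.W1)
        return np.tanh(h @ self.W2)                            # penultimate-layer activations = phi(x)

    def predict(self, x):
        return self.features(x) @ self.W3

d_s, d_a = 4, 1
net = TwoLayerMLP(d_in=d_s + d_a, d_hidden=64, d_phi=d_s + d_a, d_out=d_s)
x = np.random.default_rng(1).normal(size=(10, d_s + d_a))      # ten (s, a) pairs
print(net.features(x).shape)                                   # (10, d_phi) feature matrix fed to BLR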
Bayesian update and posterior sampling: here we describe the Bayesian update of the transition and reward models using the extracted features. Recall that a Gaussian process with a linear kernel is equivalent to Bayesian linear regression. By extracting the penultimate layer as the feature representation φ, the target value y and the representation φ(x) can be seen as linearly related: y = w^⊤φ(x) + ε, where ε is zero-mean Gaussian noise with variance σ² (which is σ_f² for the transition model and σ_r² for the reward model, as defined in Section 3.1). We choose the prior distribution of the weights w as zero-mean Gaussian with covariance matrix Σ_p; then the posterior distribution of w is also multivariate Gaussian (Rasmussen (2003)): p(w|D) = N(σ^{-2} A^{-1} Φ Y, A^{-1}), where A = σ^{-2} Φ Φ^⊤ + Σ_p^{-1}, Φ ∈ R^{d×N} is the concatenation of feature representations {φ(x_i)}_{i=1}^N, and Y ∈ R^N is the concatenation of target values. At the beginning of each episode, we sample w from the posterior distribution to build the model, collect new data during the whole episode, and update the posterior distribution of w at the end of the episode using all the data collected.
Besides the posterior distribution of w, the feature representation φ is also updated in each episode with the newly collected data. We adopt a dual-update procedure similar to Riquelme et al. (2018): after the representations for rewards and transitions are updated, the feature vectors of all collected state-action pairs are re-computed, and we then apply the Bayesian update on these feature vectors. See the description of Algorithm 1 for details.
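The posterior update and the per-episode weight sample can be written directly from the formulas above; the following NumPy sketch uses toy features and is our own illustration:

import numpy as np

def blr_posterior(Phi, Y, sigma2, Sigma_p):
    """Posterior over w in y = w^T phi(x) + eps:  p(w | D) = N(sigma^-2 A^-1 Phi Y, A^-1),
    with A = sigma^-2 Phi Phi^T + Sigma_p^-1 and Phi of shape (d_phi, N) as in the text."""
    A = Phi @ Phi.T / sigma2 + np.linalg.inv(Sigma_p)
    A_inv = np.linalg.inv(A)
    mean = A_inv @ Phi @ Y / sigma2
    return mean, A_inv

rng = np.random.default_rng(0)
d_phi, N, sigma = 5, 200, 0.1
w_true = rng.normal(size=d_phi)
Phi = rng.normal(size=(d_phi, N))                     # columns are feature vectors phi(x_i)
Y = w_true @ Phi + sigma * rng.normal(size=N)

mean, cov = blr_posterior(Phi, Y, sigma ** 2, Sigma_p=np.eye(d_phi))
w_sample = rng.multivariate_normal(mean, cov)         # posterior sample reused for a whole episode
print(np.round(mean - w_true, 3))                     # posterior mean concentrates around w_true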
5.2 PLANNING
During interaction with the environment, we use an MPC controller (Camacho & Alba (2013)) for planning. At each time step i, the controller takes the state s_i and an action sequence a_{i:i+τ} = {a_i, a_{i+1}, ..., a_{i+τ}} as input, where τ is the planning horizon. We use the transition and reward models to produce the first action a_i of the sequence of optimized actions argmax_{a_{i:i+τ}} Σ_{t=i}^{i+τ} E[r(s_t, a_t)], where the expected return of a sequence of actions is approximated by the mean return of several particles propagated with the noise of our sampled reward and transition models. To compute the optimal action sequence, we use CEM (Botev et al. (2013)), which samples candidate action sequences and iteratively shifts the sampling distribution toward previously sampled sequences with high predicted rewards.
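A compact sketch of CEM-based action selection under a sampled model is given below (our own illustration; rollouts are deterministic for brevity, whereas the method described above averages over noisy particles):

import numpy as np

def cem_mpc_action(s, reward_fn, dynamics_fn, tau=15, n_samples=400, n_elite=40, n_iters=5,
                   a_low=-1.0, a_high=1.0, rng=None):
    """Return the first action of the best action sequence found by the cross-entropy method."""
    rng = rng or np.random.default_rng()
    mean, std = np.zeros(tau), np.full(tau, (a_high - a_low) / 2.0)
    for _ in range(n_iters):
        seqs = np.clip(mean + std * rng.standard_normal((n_samples, tau)), a_low, a_high)
        returns = np.zeros(n_samples)
        for j in range(n_samples):
            sj = s
            for a in seqs[j]:
                returns[j] += reward_fn(sj, a)
                sj = dynamics_fn(sj, a)
        elite = seqs[np.argsort(returns)[-n_elite:]]    # refit the distribution toward high-reward samples
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean[0]                                      # MPC: execute only the first action, then replan

# Toy usage with a scalar state and hypothetical sampled linear models:
a0 = cem_mpc_action(s=1.0,
                    reward_fn=lambda s, a: -s ** 2 - 0.1 * a ** 2,
                    dynamics_fn=lambda s, a: 0.9 * s + 0.5 * a,
                    rng=np.random.default_rng(0))
print(a0)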
6 EXPERIMENTS
We compare our method with the following state-of-the-art model-based and model-free algorithms on benchmark control tasks.
Model-free: Soft Actor-Critic (SAC) from Haarnoja et al. (2018) is an off-policy deep actor-critic algorithm that utilizes entropy maximization to guide exploration. Deep Deterministic Policy Gradient (DDPG) from Barth-Maron et al. (2018) is an off-policy algorithm that concurrently learns a Q-function and a policy, with a discount factor to guide exploration.
Model-based: Probabilistic Ensembles with Trajectory Sampling (PETS) from Chua et al. (2018) models the dynamics via an ensemble of probabilistic neural networks to capture epistemic uncertainty for exploration, and uses MPC for action selection; it requires access to oracle rewards for planning. Model-Based Policy Optimization (MBPO) from Janner et al. (2019) uses the same bootstrap-ensemble technique as PETS for modeling, but differs from PETS in policy optimization, using a large number of short model-generated rollouts, and can cope with environments where no oracle rewards are provided. We do not compare with Gal et al. (2016), which adopts a single Bayesian neural network (BNN) with moment matching, as it is outperformed by PETS, which uses an ensemble of BNNs with trajectory sampling. We also do not compare with GP-based trajectory optimization methods that require true rewards (Deisenroth & Rasmussen, 2011; Kamthe & Deisenroth, 2018), which are not only outperformed by PETS but also computationally expensive, and thus limited to very small state-action spaces.
We use environments of varying complexity and dimensionality for evaluation. Low-dimensional environments: continuous Cartpole (d_s = 4, d_a = 1, H = 200; the continuous action space makes it harder to learn than the classic Cartpole) and Pendulum Swing-Up (d_s = 3, d_a = 1, H = 200; a modified version of Pendulum where we restrict the start state to make exploration harder). Trajectory optimization with oracle rewards in these two environments is easy, and there is almost no difference in performance across the model-based algorithms we compare, so we omit these learning curves. Higher-dimensional environments: 7-DOF Reacher (d_s = 17, d_a = 7, H = 150) and 7-DOF Pusher (d_s = 20, d_a = 7, H = 150) are two more challenging tasks provided in Chua et al. (2018), where we conduct experiments both with and without true rewards to compare against all the baseline algorithms mentioned.
The learning curves of these algorithms are shown in Figure 1. When oracle rewards are provided in Pusher and Reacher, our method outperforms PETS and MBPO: it converges more quickly with similar performance at convergence in Pusher, while in Reacher it not only learns faster but also performs better at convergence. As we use the same planning method (MPC) as PETS, the results indicate that our model better captures uncertainty, which benefits sample efficiency. When exploring in environments where both rewards and transitions are unknown, our method learns significantly faster than previous model-based and model-free methods that do not require oracle rewards, while matching the performance of SAC at convergence. Moreover, our algorithm performs similarly with and without oracle rewards, sometimes even converging faster without them (see Pusher with and without rewards), indicating that it explores both rewards and transitions effectively.
The experimental results indicate that our algorithm better captures model uncertainty and makes better use of that uncertainty through posterior sampling. In our method, by sampling from a Bayesian linear regression on a fitted feature space and optimizing under the same sampled MDP for a whole episode instead of re-sampling at every step, the performance of our algorithm is guaranteed from a Bayesian view, as analysed in Section 4. In contrast, PETS and MBPO use bootstrap ensembles of models with a limited ensemble size to "simulate" a Bayesian model, where the convergence of the uncertainty estimate is not guaranteed and depends heavily on how the neural networks are trained. A limitation of our method is its use of MPC, which might fail in even higher-dimensional tasks, as shown in Janner et al. (2019). Incorporating policy-gradient techniques for action selection might further improve the performance, and we leave this for future work.
7 CONCLUSION
In this paper, we derive a novel Bayesian regret bound for the PSRL algorithm in continuous spaces under the assumption that the true rewards and transitions (with or without feature embedding) can be modeled by GPs with linear kernels. While matching the best-known bounds of previous works from a Bayesian view, PSRL also enjoys computational tractability. Moreover, we propose MPC-PSRL for continuous environments, and experiments show that our algorithm outperforms existing model-based and model-free methods through more efficient exploration.
A APPENDIX
A.1 PROOF OF LEMMA 1
Here we provide a proof of Lemma 1.
We first prove the result in R^d with d = 1: p₁(x) ∼ N(µ, σ²), p₂(x) ∼ N(µ′, σ²); without loss of generality, assume µ′ ≥ µ. The two densities are symmetric with respect to (µ + µ′)/2, and p₁(x) = p₂(x) at x = (µ + µ′)/2. Thus the integral of the absolute difference between the pdfs of p₁ and p₂ can be simplified to twice the integral over one side:
$$\int_{-\infty}^{\infty} |p_2(x) - p_1(x)|\,dx = \frac{2}{\sqrt{2\pi\sigma^2}} \int_{\frac{\mu+\mu'}{2}}^{\infty} \Big(e^{-\frac{(x-\mu')^2}{2\sigma^2}} - e^{-\frac{(x-\mu)^2}{2\sigma^2}}\Big)\,dx \qquad (9)$$
Let z₁ = x − µ and z₂ = x − µ′; we have:
$$\frac{2}{\sqrt{2\pi\sigma^2}} \int_{\frac{\mu+\mu'}{2}}^{\infty} \Big(e^{-\frac{(x-\mu')^2}{2\sigma^2}} - e^{-\frac{(x-\mu)^2}{2\sigma^2}}\Big)\,dx = \sqrt{\frac{2}{\pi\sigma^2}} \int_{\frac{\mu-\mu'}{2}}^{\infty} e^{-\frac{z_2^2}{2\sigma^2}}\,dz_2 - \sqrt{\frac{2}{\pi\sigma^2}} \int_{\frac{\mu'-\mu}{2}}^{\infty} e^{-\frac{z_1^2}{2\sigma^2}}\,dz_1 = \sqrt{\frac{2}{\pi\sigma^2}} \int_{\frac{\mu-\mu'}{2}}^{\frac{\mu'-\mu}{2}} e^{-\frac{z^2}{2\sigma^2}}\,dz$$
$$= 2\sqrt{\frac{2}{\pi\sigma^2}} \int_{0}^{\frac{\mu'-\mu}{2}} e^{-\frac{z^2}{2\sigma^2}}\,dz \le 2\sqrt{\frac{2}{\pi\sigma^2}} \int_{0}^{\frac{\mu'-\mu}{2}} 1\,dz = \sqrt{\frac{2}{\pi\sigma^2}}\,|\mu' - \mu|. \qquad (10)$$
Now we extend the result to R^d (d ≥ 2): p₁(x) ∼ N(µ, σ²I), p₂(x) ∼ N(µ′, σ²I). We can rotate the coordinate system recursively to align the last axis with the vector µ − µ′, such that the coordinates of µ and µ′ can be written as (0, 0, ..., 0, µ̂) and (0, 0, ..., 0, µ̂′) respectively, with |µ̂′ − µ̂| = ‖µ − µ′‖₂. Without loss of generality, let µ̂ ≥ µ̂′. Clearly, all points equidistant from µ̂′ and µ̂ define a hyperplane P : x_d = (µ̂ + µ̂′)/2 on which p₁(x) = p₂(x), ∀x ∈ P. More specifically, the two densities are symmetric with respect to P. Similar to the analysis in R¹:
$$\int_{-\infty}^{\infty}\!\!\cdots\!\!\int_{-\infty}^{\infty} |p_1(x) - p_2(x)|\,dx_1\cdots dx_d = \frac{2}{\sqrt{(2\pi)^d\sigma^{2d}}} \int_{-\infty}^{\infty}\!\!\cdots\!\!\int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{x_1^2}{2\sigma^2}}\cdots e^{-\frac{x_{d-1}^2}{2\sigma^2}}\,e^{-\frac{(x_d-\hat\mu)^2}{2\sigma^2}}\,dx_1\cdots dx_d - \frac{2}{\sqrt{(2\pi)^d\sigma^{2d}}} \int_{-\infty}^{\infty}\!\!\cdots\!\!\int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{x_1^2}{2\sigma^2}}\cdots e^{-\frac{x_{d-1}^2}{2\sigma^2}}\,e^{-\frac{(x_d-\hat\mu')^2}{2\sigma^2}}\,dx_1\cdots dx_d$$
$$= \frac{2}{\sqrt{(2\pi)^d\sigma^{2d}}} \prod_{j=1}^{d-1}\int_{-\infty}^{\infty} e^{-\frac{x_j^2}{2\sigma^2}}\,dx_j \left(\int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu)^2}{2\sigma^2}}\,dx_d - \int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu')^2}{2\sigma^2}}\,dx_d\right) = \sqrt{\frac{2}{\pi\sigma^2}}\left(\int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu)^2}{2\sigma^2}}\,dx_d - \int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu')^2}{2\sigma^2}}\,dx_d\right) \qquad (11)$$
Let z₁ = x_d − µ̂ and z₂ = x_d − µ̂′; we have:
$$\int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu)^2}{2\sigma^2}}\,dx_d - \int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu')^2}{2\sigma^2}}\,dx_d = \int_{\frac{\hat\mu'-\hat\mu}{2}}^{\infty} e^{-\frac{z_1^2}{2\sigma^2}}\,dz_1 - \int_{\frac{\hat\mu-\hat\mu'}{2}}^{\infty} e^{-\frac{z_2^2}{2\sigma^2}}\,dz_2 = \int_{\frac{\hat\mu'-\hat\mu}{2}}^{\frac{\hat\mu-\hat\mu'}{2}} e^{-\frac{z^2}{2\sigma^2}}\,dz = 2\int_{0}^{\frac{\hat\mu-\hat\mu'}{2}} e^{-\frac{z^2}{2\sigma^2}}\,dz \le 2\int_{0}^{\frac{\hat\mu-\hat\mu'}{2}} 1\,dz = |\hat\mu - \hat\mu'| \qquad (12)$$
Thus $\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} |p_1(x) - p_2(x)|\,dx_1 dx_2\cdots dx_d \le \sqrt{\frac{2}{\pi\sigma^2}}\,\|\mu - \mu'\|_2$. ∎
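As a numerical sanity check of the d-dimensional result (our own illustration, not part of the proof), the L1 distance can be estimated by Monte Carlo via ∫|p₁ − p₂| dx = E_{x∼p₁}[|1 − p₂(x)/p₁(x)|] and compared against the bound:

import numpy as np

def l1_distance_mc(mu, mu_p, sigma, n=200_000, seed=0):
    """Monte Carlo estimate of the L1 distance between N(mu, sigma^2 I) and N(mu_p, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    x = mu + sigma * rng.standard_normal((n, len(mu)))          # samples from p1
    log_ratio = (np.sum((x - mu) ** 2, axis=1) - np.sum((x - mu_p) ** 2, axis=1)) / (2 * sigma ** 2)
    return np.mean(np.abs(1.0 - np.exp(log_ratio)))             # E_{p1}|1 - p2/p1| = int |p1 - p2|

sigma = 1.0
mu = np.zeros(4)
mu_p = np.array([0.6, -0.4, 0.2, 0.5])
estimate = l1_distance_mc(mu, mu_p, sigma)
bound = np.sqrt(2 / (np.pi * sigma ** 2)) * np.linalg.norm(mu - mu_p)   # Lemma 1 bound
print(f"L1 ~ {estimate:.3f}, bound = {bound:.3f}")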
A.2 EXPERIMENTAL DETAILS
Here we provide the hyperparameters for MBPO:
We also provide the hyperparameters for MPC and the neural networks in PETS:
Here are the hyperparameters of our algorithm, which are similar to those of PETS except for the ensemble size (since we do not use ensembled models):
For SAC and DDPG, we use the open-source code (https://github.com/dongminlee94/deep_rl) for implementation without changing their hyperparameters. We appreciate the authors for sharing the code! | 1. What is the focus and contribution of the paper on model-based posterior sampling?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its theoretical analysis and performance guarantees?
3. Do you have any concerns or questions about the numerical evaluations and comparisons with other algorithms?
4. How does the reviewer assess the clarity, reproducibility, and novelty of the paper's content?
5. Are there any specific issues or typos that the reviewer noticed in the paper, such as the definition of BayesRegret or the regret bound expression? | Review | Review
The paper proposes a model-based posterior sampling algorithm with regret guarantees when the model is assumed to be drawn randomly from a prior distribution. The authors also provide numerical evaluations of the proposed method.
The contribution of this work as a theoretical work is limited. There is no study of fundamental limits. In addition, the performance guarantee seems worse than existing ones, although a fair comparison might be unavailable due to different technical assumptions. However, the authors do not provide a numerical comparison to existing algorithms with performance guarantees.
It is also not possible to assess the contribution from the numerical comparison. There is no description of the hyperparameter selection for the other algorithms (PETS, MBPO, SAC, ...). Hence, the results are not reproducible either.
The definition of BayesRegret seems incorrect, as it takes M∗ as an input argument. The authors need to describe what they mean by the expectation in eq. (3). In my understanding, BayesRegret should take the hyper-parameter used to generate M∗ as input, while Regret takes a random instance of M∗ as input.
I don't understand the meaning of the regret bound Õ(H^{3/2} d_φ √T) for the non-linear case, as the regret of any algorithm is upper-bounded by R_max · T.
To further imporve the regret bound for PSRL in continuous spaces, especially with explicit dependency on H , we study model-based posterior sampling algorithms in episodic RL. We assume that rewards and transitions can be modeled as Gaussian Processes with linear kernels, and extend the assumption to non-linear settings utilizing features extracted by neural networks. For the linear case, we develop a Bayesian regret bound of Õ(H3/2d √ T ). Using feature embedding technique as mentioned in Yang & Wang (2019), we derive a bound of Õ(H3/2dφ √ T ). Our Bayesian regret is the best-known Bayesian regret for posterior sampling algorithms in continuous state-action spaces, and it also matches the best-known frequentist regret (Zanette et al. (2020), will be discussed in Section 2). Explicitly dependent on d,H, T , our result achieves a significant improvement in terms of the Bayesian regret of PSRL algorithms compared to previous works:
1. We significantly improved the order of H to polynomial: In our analysis, we use the property of subgaussian noise, which is already assumed in Osband & Van Roy (2014) and Chowdhury & Gopalan (2019), to develop a bound with clear polynomial dependency on H , without assuming the Lipschitz continuity of the underlying value function. More specifically, we prove Lemma 1, and use
1V1 denotes the value function counting from step 1 to H within an episode, s is the initial state, reward at the i-th step ri = sTi Psi + a T i Rai + P,i, and the state at the i+ 1-th step si+1 = Asi +Bai + P,i , i ∈ [H].
2Recall the Bellman equation we have Vi(si) = minai s T i Psi + a T i Rai + P,i + Vi+1(Asi +Bai + P,i), VH+1(s) = 0 . Thus in V1(s), there is a term of (AH−1s)TP (AH−1s), and the eigenvalue of the matrix (AH−1)TPAH−1 is exponential in H .
3For example, if ri = sTi P + a T i R+ P,i, there would still exist term of (A H−1s)TP in V1(s).
it to develop a clear dependency on H , thus we can avoid handling the Lipschitz continuity of the underlying value function.
2. Lower dimensionality compared to Osband & Van Roy (2014) and Chowdhury & Gopalan (2019): We first derive results for linear kernels, and increase the representation power of the linear model by building a Bayesian linear regression model on the feature representation space instead of the original state-action space. As a result, we can use the result of linear kernels to derive a bound linear in the feature dimension. The feature dimension, which in practice is dimension of the last hidden layers in the neural networks required for learning, is much lower than exponential of the input dimension, so we avoid the exponential order of the dimension from the use of nonlinear kernels in Chowdhury & Gopalan (2019).
3. Fewer assumptions and different proof strategy compared to Chowdhury & Gopalan (2019): Although we also use kernelized MDPs like Chowdhury & Gopalan (2019), we omit their assumption A1 (Lipschitz assumption) and A2 (Regularity assumption), only use A3 (subgaussian noise). We avoid A1 since it could be derived from our Lemma 1. Moreover, We directly analyze the regret bound of PSRL using the fact that the sampled and the real unknown MDP share the same distribution conditioned on history. In contrast, Chowdhury & Gopalan (2019) first analyze UCRL (Upper confidence bound in RL) with an extra assumption A2, then transfer it to PSRL.
Empirically, we implement PSRL using Bayesian linear regression (BLR) on the penultimate layer (for feature representation) of neural networks when fitting transition and reward models. We use model predictive control (MPC,Camacho & Alba (2013)) to optimize the policy under the sampled models in each episode as an approximate solution of the sampled MDP as described in Section 5. Experiments show that our algorithm achieves more efficient exploration compared with previous model-based algorithms in control benchmark tasks.
2 RELATED WORK ON FREQUENIST REGRETS
Besides the aforementioned works on Bayesian regret bounds, the majority of papers in efficient RL choose the non-Bayesian perspective and develop frequentist regret bounds where the regret for any MDP M∗ ∈M is bounded and M∗ ∈M holds with high probability. frequentist regret bounds can be expressed in the Bayesian view: for a given confidence setM, the frequentist regret bound implies an identical Bayes regret bound for any prior distribution with support onM. Note that frequentist regret is extensively studied in tabular RL (see Jaksch et al. (2010), Azar et al. (2017), and Jin et al. (2018) as examples), among which the best bound for episodic settings is Õ(H √ SAT ).
There is also a line of work that develops frequentist bounds with feature representation. Most recently, MatrixRL proposed by (Yang & Wang, 2019) uses low dimensional representation and achieves a regret bound of Õ(H2dφ √ T ), which is the best-known frequentist bound in model based settings. While our method is also model-based, we achieve a tighter regret bound when compared in the Bayesian view. In model-free settings, Jin et al. (2020) developed a bound of Õ(H3/2d3/2φ √ T ). Zanette et al. (2020) further improved the regret to Õ(H3/2dφ √ T ) by the proposed an algorithm called ELEANOR, which achieves the best-known frequentist bound in model-free settings. They showed that it is unimprovable with the help of a lower bound established in the bandit literature. Despite that our regret is developed in model-based settings, it matches their bound with the same order of H , dφ and T in the Bayesian view. Moreover, their algorithm involves optimization over all MDPs in the confidence set, and thus can be computationally prohibitive. Our method is computationally tractable as it is much easier to optimize a single sampled MDP, while matching their regret bound in the Bayesian view.
3 PRELIMINARIES
3.1 PROBLEM FORMULATION
We model an episodic finite-horizon Markov Decision Process (MDP) M as {S,A, RM , PM , H, σr, σf , Rmax, ρ}, where S ⊂ Rds and A ⊂ Rda denote state and action spaces, respectively. Each episode with length H has an initial state distribution ρ. At time step i ∈ [1, H] within an episode, the agent observes si ∈ S, selects ai ∈ A, receives a noised
reward ri ∼ RM (si, ai) and transitions to a noised new state si+1 ∼ PM (si, ai). More specifically, r(si, ai) = r̄
M (si, ai) + r and si+1 = fM (si, ai) + f , where r ∼ N (0, σ2r), f ∼ N (0, σ2fIds). Variances σ2r and σ 2 f are fixed to control the noise level. Without loss of generality, we assume the expected reward an agent receives at a single step is bounded |r̄M (s, a)| ≤ Rmax, ∀s ∈ S, a ∈ A.Let µ : S → A be a deterministic policy. Here we define the value function for state s at time step i with policy µ as VMµ,i(s) = E[ΣHj=i[r̄M (sj , aj)|si = s], where sj+1 ∼ PM (sj , aj) and aj = µ(sj). With the bound expected reward, we have that |V (s)| ≤ HRmax, ∀s. We use M∗ to indicate the real unknown MDP which includes R∗ and P ∗, and M∗ itself is treated as a random variable. Thus, we can treat the real noiseless reward function r̄∗ and transition function f∗ as random processes as well. In the posterior sampling algorithm πPS , Mk is a random sample from the posterior distribution of the real unknown MDP M∗ in the kth episode, which includes the posterior samples of Rk and P k , given history prior to the kth episode: Hk := {s1,1, a1,1, r1,1, · · · , sk−1,H , ak−1,H , rk−1,H}, where sk,i, ak,i and rk,i indicate the state, action, and reward at time step i in episode k. We define the the optimal policy under M as µM ∈argmaxµ VMµ,i(s) for all s ∈ S and i ∈ [H]. In particular, µ∗ indicates the optimal policy under M∗ and µk represents the optimal policy under Mk. Let ∆k denote the regret over the kth episode:
∆k = ∫ ρ(s1)(V M∗ µ∗,1(s1)− VM ∗ µk,1(s1))ds1 (1)
Then we can express the regret of πps up to time step T as:
Regret(T, πps,M∗) := Σ d TH e k=1 ∆k, (2)
Let BayesRegret(T, πps, φ) denote the Beyesian regret of πps as defined in Osband & Van Roy (2017), where φ is the prior distribution of M∗:
BayesRegret(T, πps, φ) = E[Regret(T, πps,M∗)]. (3)
3.2 ASSUMPTIONS
Generally, we consider modeling an unknown target function g : Rd → R. We are given a set of noisy samples y = [y1...., yT ]T at points X = [x1, ..., xT ]T , X ⊂ D, where D is compact and convex, yi = g(xi) + i with i ∼ N(0, σ2) i.i.d. Gaussian noise ∀i ∈ {1, · · · , T}. We model g as a sample from a Gaussian ProcessGP (µ(x),K(x, x′)), specified by the mean function µ(x) = E[g(x)] and the covariance (kernel) function K(x, x′) = E[(g(x)− µ(x)(g(x′)− µ(x′)]. Let the prior distribution without any data as GP (0,K(x, x′)). Then the posterior distribution over g given X and y is also a GP with mean µT (x), covariance KT (x, x′), and variance σ2T (x): µT (x) = K(x,X)(K(X,X) + σ2I)−1y,KT (x, x′) = K(x, x′) − K(X,x)T (K(X,X) + σ2I)−1K(X,x), σ2T (x) = KT (x, x), where K(X,x) = [K(x1, x), ...,K(xT , x)]T , K(X,X) = [K(xi, xj)]1≤i≤T,1≤j≤T .
We model our reward function r̄M as a Gaussian Process with noise σ2r . For transition models, we treat each dimension independently: each fi(s, a), i = 1, .., dS is modeled independently as above, and with the same noise level σ2f in each dimension. Thus it corresponds to our formulation in the RL setting. Since the posterior covariance matrix is only dependent on the input rather than the target value, the distribution of each fi(s, a) shares the same covariance matrice and only differs in the mean function.
4 BAYESIAN REGRET ANALYSIS
4.1 LINEAR CASE
Theorem 1 In the RL problem formulated in Section 3.1, under the assumption of Section 3.2 with linear kernels4, we have BayesRegret(T, πps,M∗) = Õ(H3/2d √ T ), where d is the dimension of the state-action space, H is the episode length, and T is the time elapsed. 4GP with linear kernel correspond to Bayesian linear regression f(x) = wTx, where the prior distribution of the weight is w ∼ N (0,Σp).
Proof The regret in episode k can be rearranged as:
∆k = ∫ ρ(s1)(V M∗ µ∗, (s1)− VM k µk,1(s1)) + (V Mk µk,1(s1)− V M∗ µk,1(s1)))ds1 (4)
Note that conditioned upon historyHk for any k, Mk and M∗ are identically distributed. Osband & Van Roy (2014) showed that VM ∗
µ∗, − VM k
µk,1 is zero in expectation, and that only the second part of the regret decomposition need to be bounded when deriving the Bayesian regret of PSRL. Thus we can focus on the policy µk, the sampled Mk and real environment data generated by M∗. For clarity, the value function VM k
µk,1 is simplified to V k k,1 and V
M∗
µk,1 to V ∗ k,1. It suffices to derive bounds for any initial
state s1 as the regret bound will still hold through integration of the initial distribution ρ(s1).
We can rewrite the regret from concentration via the Bellman operator (see Section 5.1 in Osband et al. (2013)):
E[∆̃k|Hk] := E[V kk,1(s1)− V ∗k,1(s1)|Hk] = E[r̄k(s1, a1)− r̄∗(s1, a1) + ∫ P k(s′|s1, a1)V kk,2(s′)ds′ − ∫ P ∗(s′, |s1, a1)V ∗k,2(s′)ds′|Hk]
= E[ΣHi=1r̄k(si, ai)− r̄∗(si, ai) + ΣHi=1( ∫ (P k(s′|si, ai)− P ∗(s′|si, ai))V kk,i+1(s′)ds′)|Hk]
= E[∆̃k(r) + ∆̃k(f)|Hk] (5)
where ai = µk(si), si+1 ∼ P ∗(si+1|si, ai), ∆̃k(r) = ΣHi=1r̄k(si, ai) − r̄∗(si, ai), ∆̃k(f) = ΣHi=1( ∫ (P k(s′|si, ai)− P ∗(s′|si, ai))V kk,i+1(s′)ds′). Thus, here (si, ai) is the state-action pair that the agent encounters in the kth episode while using µk for interaction in the real MDP M∗. We can define Vk,H+1 = 0 to keep consistency. Note that we cannot treat si and ai as deterministic and only take the expectation directly on random reward and transition functions. Instead, we need to bound the difference using concentration properties of reward and transition functions modeled as Gaussian Processes (which also applies to any state-action pair), and then derive bounds of this expectation. For all i, we have ∫ (P k(s′|si, ai) − P ∗(s′|si, ai))V kk,i+1(s′)ds′ ≤
maxs |V kk,i+1(s)| ∫ |P k(s′|si, ai)−P ∗(s′|si, ai)|ds′ ≤ HRmax ∫ |P k(s′|si, ai)−P ∗(s′|si, ai)|ds′.
Now we present a lemma which enables us to derive a regret bound with explicit dependency on the episode length H .
Lemma 1 For two multivariate Gaussian distribution N (µ, σ2I), N (µ′, σ2I) with probability density function p1(x) and p2(x) respectively, x ∈ Rd ,∫
|p1(x)− p2(x)|dx ≤ √ 2
πσ2 ||µ− µ′||2.
The proof is in Appendix A.1. Clearly, this result can also be extended to sub-Gaussian noises.
Recall that P k(s′|si, ai) = N (fk(si, ai), σ2fI) and P ∗(s′|si, ai) = N (f∗(si, ai), σ2fI). By Lemma 1 we have ∫
|P k(s′|si, ai)− P ∗(s′|si, ai)|ds′ ≤ √ 2
πσ2f ||fk(si, ai)− f∗(si, ai)||2 (6)
Lemma 2 (Rigollet & Hütter, 2015) Let X1, ..., XN be N sub-Gaussian random variables with variance σ2 (not required to be independent). Then for any t > 0, P(max1≤i≤N |Xi| > t) ≤ 2Ne− t2 2σ2 .
Given history Hk, let f̄k(s, a) indicate the posterior mean of fk(s, a) in episode k, and σ2k(s, a) denotes the posterior variance of fk in each dimension. Note that f∗ and fk share the same variance in each dimension given history Hk, as described in Section 3. Consider all dimensions of the state space, by Lemma 2, we have that with probability at least 1 − δ, max1≤i≤ds |fki (s, a) −
f̄ki (s, a)| ≤ √ 2σ2k(s, a)log 2ds δ . Also, we can derive an upper bound for the norm of the state difference ||fk(s, a)− f̄k(s, a)||2 ≤ √ ds max1≤i≤ds |fki (s, a)− f̄ki (s, a)|, and so does ||f∗(s, a)− f̄k(s, a)||2 since f∗ and fk share the same posterior distribution. By the union bound, we have that with probability at least 1− 2δ ||fk(s, a)− f∗(s, a)||2 ≤ 2 √ 2dsσ2k(s, a)log 2ds δ .
Then we look at the sum of the differences over horizon H , without requiring each variable in the sum to be independent:
P(ΣHi=1||fk(si, ai)− f∗(si, ai)||2 > ΣHi=12 √
2dsσ2k(si, ai)log 2ds δ )
≤ P( H⋃ i=1 {||fk(si, ai)− f∗(si, ai)||2 > 2 √ 2dsσ2k(si, ai)log 2ds δ })
≤ ΣHi=1P(||fk(si, ai)− f∗(si, ai)||2 > 2 √
2dsσ2k(si, ai)log 2ds δ )
(7)
Thus, with probability at least 1 − 2Hδ, we have ΣHi=1||fk(si, ai) − f∗(si, ai)||2 ≤ ΣHi=12 √ 2dsσ2k(si, ai)log 2ds δ . Let δ ′ = 2Hδ, we have that with probability 1−δ, ΣHi=1||fk(si, ai)−
f∗(si, ai)||2 ≤ ΣHi=12 √ 2dsσ2k(si, ai)log 4Hds δ ≤ 2H √ 2dsσ2k(skmax , akmax)log 4Hds δ , where the index kmax = arg maxi σk(si, ai), i = 1, ...,H in episode k. Here, since the posterior distribution is only updated every H steps, we have to use data points with the max variance in each episode to bound the result. Similarly, using the union bound for [ TH ] episodes, and let C = √ 2 πσ2f , we have that with probability at least 1− δ, Σ[ T H ]
k=1[∆̃k(f)|Hk] ≤ Σ [ TH ] k=1Σ H i=12CHRmax||fk(si, ai)− f∗(si, ai)||2 ≤
Σ [ TH ]
k=14CH 2Rmax √ 2dsσ2k(skmax , akmax)log 4Tds δ .
In each episode k, let σ ′2 k (s, a) denote the posterior variance given only a subset of data points {(s1max , a1max), ..., (sk−1max , ak−1max)}, where each element has the max variance in the corresponding episode. By Eq.(6) in Williams & Vivarelli (2000), we know that the posterior variance reduces as the number of data points grows. Hence ∀(s, a), σ2k(s, a) ≤ σ ′2 k (s, a). By Theorem 5 in Srinivas et al. (2012) which provides a bound on the information gain, and Lemma 2 in Russo & Van Roy (2014) that bounds the sum of variances by the information gain, we have that Σ [ TH ]
k=1σ ′2 k (skmax , akmax) = O((ds + da)log[ TH ]) for linear kernels with bounded variances. Note that the bounded variance property for linear kernels only requires the range of all state-action pairs actually encountered in M∗ not to expand to infinity as T grows, which holds in general episodic MDPs.
Thus with probability 1− δ, and let δ = 1T ,
Σ [ TH ] k=1[∆̃k(f)|Hk] ≤ Σ [ TH ] k=14CH 2Rmax √ 2dsσ2k(skmax , akmax)log
4Tds δ
≤ Σ[ T H ]
k=18CH 2Rmax √ dsσ ′2 k (skmax , akmax)log(2Tds)
≤ 8CH2Rmax √ Σ [ TH ] k=1σ ′2 k (skmax , akmax) √ [ T H ] √ dslog(2Tds)
= 8CH 3 2Rmax √ T √ dslog(2Tds) ∗ √ O((ds + da)log[ T
H ]) = Õ((ds + da)H
3 2 √ T )
(8)
where Õ ignores logarithmic factors.
Therefore, E[Σ[ T H ]
k=1∆̃k(f)|Hk] ≤ (1− 1 T )Õ((ds + sa)H 3 2T ) + 1T 2HRmax ∗ [ T H ] = Õ(H
3 2 d √ T ),
where 2HRmax is the upper bound on the difference of value functions, and d = ds + da. By similar derivation, E[Σ[ T H ] k=1∆̃k(r)|Hk] = Õ( √ dHT ). Finally, through the tower property we have
BayesRegret(T, πps,M∗) = Õ(H 32 d √ T ).
Algorithm 1 MPC-PSRL Initialize data D with random actions for one episode repeat
Sample a transition model and a cost model at the beginning of each episode for i = 1 to H steps do
Obtain action using MPC with planning horizon τ : ai ∈ arg maxai:i+τ ∑i+τ t=i E[r(st, at)]
D = D ∪ {(si, ai, ri, si+1)} end for Train cost and dynamics representations φr and φf using data in D Update φr(s, a), φf (s, a) for all (s, a) collected Perform posterior update of wr and wf in cost and dynamics models using updated representations φr(s, a), φf (s, a) for all (s, a) collected
until convergence
4.2 NONLINEAR CASE VIA FEATURE REPRESENTATION
We can slightly modify the previous proof to derive the bound in settings that use feature representations. We can transform the state-action pair (s, a) to φf (s, a) ∈ Rdφ as the input of the transition model , and transform the newly transitioned state s′ to ψf (s′) ∈ Rdψ as the target, then the transition model can be established with respect to this feature embedding. We further assume dψ = O(dφ) as Assumption 1 in Yang & Wang (2019). Besides, we assume dφ′ = O(dφ) in the feature representation φr(s, a) ∈ Rdφ′ , then the reward model can also be established with respect to the feature embedding. Following similar steps, we can derive a Bayesian regret of Õ(H3/2dφ √ T ).
5 ALGORITHM DESCRIPTION
In this section, we elaborate our proposed algorithm, MPC-PSRL, as shown in Algorithm 1.
5.1 PREDICTIVE MODEL
When model the rewards and transitions, we use features extracted from the penultimate layer of fitted neural networks, and perform Bayesian linear regression on the feature vectors to update posterior distributions.
Feature representation: we first fit neural networks for transitions and rewards, using the same network architecture as Chua et al. (2018). Let xi denote the state-action pair (si, ai) and yi denote the target value. Specifically, we use reward ri as yi to fit rewards, and we take the difference between two consecutive states si+1 − si as yi to fit transitions. The penultimate layer of fitted neural networks is extracted as the feature representation, denoted as φf and φr for transitions and rewards, respectively. Note that in the transition feature embedding, we only use one neural network to extract features of state-action pairs from the penultimate layer to serve as φ, and leave the target states without further feature representation (the general setting is discussed in Section 4.2 where feature representations are used for both inputs and outputs), so the dimension of the target in the transition model d(ψ) equals to ds. Thus we have a modified regret bound of Õ(H3/2 √ ddφT ). We do not find the necessity to further extract feature representations in the target space, as it might introduce additional computational overhead. Although higher dimensionality of the hidden layers might imply better representation, we find that only modifying the width of the penultimate layer to dφ = ds + sa suffices in our experiments for both reward and transition models. Note that how to optimize the dimension of the penultimate layer for more efficient feature representation deserves further exploration.
Bayesian update and posterior sampling: here we describe the Bayesian update of transition and reward models using extracted features. Recall that Gaussian process with linear kernels is equivalent to Bayesian linear regression. By extracting the penultimate layer as feature representation φ, the target value y and the representation φ(x) could be seen as linearly related: y = w>φ(x) + , where is a zero-mean Gaussian noise with variance σ2 (which is σ2f for the transition model and σ 2 r for the reward model as defined in Section 3.1). We choose the prior distribution of weights w as zero-mean
Gaussian with covariance matrix Σp, then the posterior distribution of w is also multivariate Gaussian (Rasmussen (2003)): p(w|D) ∼ N ( σ−2A−1ΦY,A−1 ) where A = σ−2ΦΦ> + Σ−1p , Φ ∈ Rd×N is the concatenation of feature representations {φ(xi)}Ni=1, and Y ∈ RN is the concatenation of target values. At the beginning of each episode, we sample w from the posterior distribution to build the model, collect new data during the whole episode, and update the posterior distribution of w at the end of the episode using all the data collected.
Besides the posterior distribution of w, the feature representation φ is also updated in each episode with new data collected. We adopt a similar dual-update procedure as Riquelme et al. (2018): after representations for rewards and transitions are updated, feature vectors of all state-action pairs collected are re-computed. Then we apply Bayesian update on these feature vectors. See the description of Algorithm 1 for details.
5.2 PLANNING
During interaction with the environment, we use a MPC controller (Camacho & Alba (2013)) for planning. At each time step i, the controller takes state si and an action sequence ai:i+τ = {ai, ai+1, · · · , ai+τ} as the input, where τ is the planning horizon. We use transition and reward models to produce the first action ai of the sequence of optimized actions arg maxai:i+τ ∑i+τ t=i E[r(st, at)], where the expected return of a series of actions can be approximated using the mean return of several particles propagated with noises of our sampled reward and transition models. To compute the optimal action sequence, we use CEM (Botev et al. (2013)), which samples actions from a distribution closer to previous action samples with high rewards.
6 EXPERIMENTS
We compare our method with the following state-of-the art model-based and model-free algorithms on benchmark control tasks.
Model-free: Soft Actor Critic (SAC) from Haarnoja et al. (2018) is an off-policy deep actor-critic algorithm that utilizes entropy maximization to guide exploration. Deep Deterministic Policy Gradient (DDPG) from Barth-Maron et al. (2018) is an off-policy algorithm that concurrently learns a Qfunction and a policy, with a discount factor to guide exploration.
Model-based: Probabilistic Ensembles with Trajectory Sampling (PETS) from Chua et al. (2018) models the dynamics via an ensemble of probabilistic neural networks to capture epistemic uncertainty for exploration, and uses MPC for action selection, with a requirement to have access to oracle rewards for planning. Model-Based Policy Optimization (MBPO) from Janner et al. (2019) uses the same bootstrap ensemble techniques as PETS in modeling, but differs from PETS in policy optimization with a large amount of short model-generated rollouts, and can cope with environments with no oracle rewards provided. We do not compare with Gal et al. (2016), which adopts a single Bayesian neural network (BNN) with moment matching, as it is outperformed by PETS that uses an ensemble of BNNs with trajectory sampling. And we don’t compare with GP-based trajectory optimization methods with real rewards provided (Deisenroth & Rasmussen, 2011; Kamthe & Deisenroth, 2018), which are not only outperformed by PETS, but also computationally expensive and thus are limited to very small state-action spaces.
We use environments with various complexity and dimensionality for evaluation. Low-dimensional environments: continuous Cartpole (ds = 4, da = 1, H = 200, with a continuous action space compared to the classic Cartpole, which makes it harder to learn) and Pendulum Swing Up (ds = 3, da = 1, H = 200, a modified version of Pendulum where we limit the start state to make it harder for exploration). Trajectory optimization with oracle rewards in these two environments is easy and there is almost no difference in the performances for all model-based algorithms we compare, so we omit showing these learning curves. Higher dimensional environments: 7-DOF Reacher (ds = 17, da = 7, H = 150) and 7-DOF pusher (ds = 20, da = 7, H = 150) are two more challenging tasks as provided in Chua et al. (2018), where we conduct experiments both with and without true rewards, to compare with all baseline algorithms mentioned.
The learning curves of these algorithms are shown in Figure 1. When the oracle rewards are provided in Pusher and Reacher, our method outperforms PETS and MBPO: it converges more quickly with similar performance at convergence in Pusher, while in Reacher, not only does it learn faster but it also performs better at convergence. As we use the same planning method (MPC) as PETS, the results indicate that our model better captures uncertainty, which is beneficial to improving sample efficiency. When exploring in environments where both rewards and transitions are unknown, our method learns significantly faster than previous model-based and model-free methods which do not require oracle rewards. Meanwhile, it matches the performance of SAC at convergence. Moreover, the performance of our algorithm in environments with and without oracle rewards can be similar, or convergence can even be faster (see Pusher with and without rewards), indicating that our algorithm excels at exploring both rewards and transitions.
From the experimental results, it can be verified that our algorithm better captures the model uncertainty, and makes better use of uncertainty through posterior sampling. In our method, by sampling from a Bayesian linear regression on a fitted feature space, and optimizing under the same sampled MDP for the whole episode instead of re-sampling at every step, the performance of our algorithm is guaranteed from a Bayesian view as analysed in Section 4. In contrast, PETS and MBPO use bootstrap ensembles of models with a limited ensemble size to "simulate" a Bayesian model, in which the convergence of the uncertainty is not guaranteed and is highly dependent on the training of the neural network. However, our method has the limitation of using MPC, which might fail in even higher-dimensional tasks as shown in Janner et al. (2019). Incorporating policy gradient techniques for action selection might further improve the performance, and we leave it for future work.
7 CONCLUSION
In our paper, we derive a novel Bayesian regret bound for the PSRL algorithm in continuous spaces with the assumption that the true rewards and transitions (with or without feature embedding) can be modeled by a GP with linear kernels. While matching the best-known bounds in previous works from a Bayesian view, PSRL also enjoys computational tractability. Moreover, we propose MPC-PSRL for continuous environments, and experiments show that our algorithm exceeds existing model-based and model-free methods with more efficient exploration.
A APPENDIX
A.1 PROOF OF LEMMA 1
Here we provide a proof of Lemma 1.
We first prove the result in R^d with d = 1: p_1(x) ∼ N(µ, σ²), p_2(x) ∼ N(µ′, σ²); without loss of generality, assume µ′ ≥ µ. The distributions are symmetric with respect to (µ + µ′)/2, and p_1(x) = p_2(x) at x = (µ + µ′)/2. Thus the integral of the absolute difference between the pdfs of p_1 and p_2 can be simplified as twice the integral over one side:

$$\int_{-\infty}^{\infty} |p_2(x) - p_1(x)|\,dx = \frac{2}{\sqrt{2\pi\sigma^2}} \int_{\frac{\mu+\mu'}{2}}^{\infty} \Big(e^{-\frac{(x-\mu')^2}{2\sigma^2}} - e^{-\frac{(x-\mu)^2}{2\sigma^2}}\Big)\,dx \qquad (9)$$

Let z_1 = x − µ and z_2 = x − µ′; then

$$
\begin{aligned}
\frac{2}{\sqrt{2\pi\sigma^2}} \int_{\frac{\mu+\mu'}{2}}^{\infty} \Big(e^{-\frac{(x-\mu')^2}{2\sigma^2}} - e^{-\frac{(x-\mu)^2}{2\sigma^2}}\Big)\,dx
&= \sqrt{\frac{2}{\pi\sigma^2}} \int_{\frac{\mu-\mu'}{2}}^{\infty} e^{-\frac{z_2^2}{2\sigma^2}}\,dz_2 - \sqrt{\frac{2}{\pi\sigma^2}} \int_{\frac{\mu'-\mu}{2}}^{\infty} e^{-\frac{z_1^2}{2\sigma^2}}\,dz_1 \\
&= \sqrt{\frac{2}{\pi\sigma^2}} \int_{\frac{\mu-\mu'}{2}}^{\frac{\mu'-\mu}{2}} e^{-\frac{z^2}{2\sigma^2}}\,dz
= 2\sqrt{\frac{2}{\pi\sigma^2}} \int_{0}^{\frac{\mu'-\mu}{2}} e^{-\frac{z^2}{2\sigma^2}}\,dz \\
&\le 2\sqrt{\frac{2}{\pi\sigma^2}} \int_{0}^{\frac{\mu'-\mu}{2}} 1\,dz
= \sqrt{\frac{2}{\pi\sigma^2}}\,|\mu' - \mu|.
\end{aligned} \qquad (10)
$$

Now we extend the result to R^d (d ≥ 2): p_1(x) ∼ N(µ, σ²I), p_2(x) ∼ N(µ′, σ²I). We can rotate the coordinate system recursively to align the last axis with the vector µ − µ′, such that the coordinates of µ and µ′ can be written as (0, 0, · · · , 0, µ̂) and (0, 0, · · · , 0, µ̂′) respectively, with |µ̂′ − µ̂| = ‖µ − µ′‖₂. Without loss of generality, let µ̂ ≥ µ̂′.

Clearly, all points with equal distance to µ̂′ and µ̂ define a hyperplane P: x_d = (µ̂ + µ̂′)/2, on which p_1(x) = p_2(x) for all x ∈ P; more specifically, the distributions are symmetric with respect to P. Similar to the analysis in R¹:

$$
\begin{aligned}
&\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} |p_1(x) - p_2(x)|\,dx_1\cdots dx_d \\
&= \frac{2}{\sqrt{(2\pi)^d\sigma^{2d}}} \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{x_1^2}{2\sigma^2}}\cdots e^{-\frac{x_{d-1}^2}{2\sigma^2}}\, e^{-\frac{(x_d-\hat\mu)^2}{2\sigma^2}}\,dx_1\cdots dx_d \\
&\quad - \frac{2}{\sqrt{(2\pi)^d\sigma^{2d}}} \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{x_1^2}{2\sigma^2}}\cdots e^{-\frac{x_{d-1}^2}{2\sigma^2}}\, e^{-\frac{(x_d-\hat\mu')^2}{2\sigma^2}}\,dx_1\cdots dx_d \\
&= \sqrt{\frac{2}{\pi\sigma^2}}\Big(\int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu)^2}{2\sigma^2}}\,dx_d - \int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu')^2}{2\sigma^2}}\,dx_d\Big)
\end{aligned} \qquad (11)
$$

where the Gaussian integrals over x_1, …, x_{d−1} each contribute a factor of √(2πσ²). Let z_1 = x_d − µ̂ and z_2 = x_d − µ̂′; then

$$
\begin{aligned}
\int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu)^2}{2\sigma^2}}\,dx_d - \int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu')^2}{2\sigma^2}}\,dx_d
&= \int_{\frac{\hat\mu'-\hat\mu}{2}}^{\infty} e^{-\frac{z_1^2}{2\sigma^2}}\,dz_1 - \int_{\frac{\hat\mu-\hat\mu'}{2}}^{\infty} e^{-\frac{z_2^2}{2\sigma^2}}\,dz_2 \\
&= \int_{\frac{\hat\mu'-\hat\mu}{2}}^{\frac{\hat\mu-\hat\mu'}{2}} e^{-\frac{z^2}{2\sigma^2}}\,dz
= 2\int_{0}^{\frac{\hat\mu-\hat\mu'}{2}} e^{-\frac{z^2}{2\sigma^2}}\,dz \\
&\le 2\int_{0}^{\frac{\hat\mu-\hat\mu'}{2}} 1\,dz = |\hat\mu - \hat\mu'|
\end{aligned} \qquad (12)
$$

Thus \(\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} |p_1(x) - p_2(x)|\,dx_1\cdots dx_d \le \sqrt{\frac{2}{\pi\sigma^2}}\,\|\mu - \mu'\|_2\).
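As an informal numerical sanity check of Lemma 1 (not part of the proof), the snippet below estimates the L1 distance between two isotropic Gaussians by Monte Carlo and compares it with the bound √(2/(πσ²))·‖µ − µ′‖₂; the particular means, variance, and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 3, 0.7
mu = np.zeros(d)
mu2 = np.array([0.5, -0.5, 1.0])

def logpdf(x, m):
    # log density of N(m, sigma^2 I), evaluated row-wise
    return (-0.5 * np.sum((x - m) ** 2, axis=-1) / sigma**2
            - 0.5 * d * np.log(2 * np.pi * sigma**2))

# Monte Carlo estimate of int |p1 - p2| dx = E_{x ~ p1}[ |1 - p2(x)/p1(x)| ]
x = mu + sigma * rng.normal(size=(200000, d))
ratio = np.exp(logpdf(x, mu2) - logpdf(x, mu))
l1_estimate = np.mean(np.abs(1.0 - ratio))
bound = np.sqrt(2.0 / (np.pi * sigma**2)) * np.linalg.norm(mu2 - mu)
print(l1_estimate, bound)  # the estimate should stay below the bound (up to MC noise)
```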
A.2 EXPERIMENTAL DETAILS
Here we provide hyperparameters for MBPO:
And we provide hyperparameters for MPC and neural networks in PETS:
Here are the hyperparameters of our algorithm, which are similar to those of PETS, except for the ensemble size (since we do not use ensembled models):
For SAC and DDPG, we use the open source code (https://github.com/dongminlee94/deep_rl) for implementation without changing their hyperparameters. We appreciate the authors for sharing the code! | 1. What is the main contribution of the paper regarding balancing exploration and exploitation in reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach compared to prior works, particularly in terms of theoretical novelty and computational tractability?
3. How does the reviewer assess the clarity and completeness of the paper's content, including definitions, proofs, and experimental settings?
4. What are some suggestions and remarks provided by the reviewer to improve the paper, such as discussing the nonlinear case, comparing to other exploration strategies, and providing more details in the proof?
5. Are there any typos or minor issues in the paper that the reviewer has identified? | Review | Review
Pros
The paper proposes a method to balance exploration and exploitation in reinforcement learning problems whose transitions and rewards are assumed to be sampled from Gaussian processes, and provides a Bayesian regret bound. Also, the paper shows how the proposed approach can be implemented in practice using model predictive control.
Cons
It is not clear what is the novelty of the theoretical results in this paper when compared to the regret bounds by Chowdhury & Gopalan (2019), who provide both frequentist and Bayesian bounds when the transitions and rewards are in an RKHS or sampled according to a GP. The bounds of Chowdhury & Gopalan (2019) seem to be polynomial in the horizon H (their paper mentions an H√(SAT) bound in the particular case of finite MDPs), whereas the current paper says that "H is still unbounded" in their result. Hence, further clarification is needed regarding this point.
The method is claimed to be computationally tractable: “it can be easily implemented by only optimizing a single sampled MDP”. However, the sampled MDP is continuous, and solving a continuous MDP is hard in general. Experimentally, the paper proposes the cross entropy method (CEM) for planning: in this case, planning is not exact and the regret bound does not hold anymore. I believe this issue should be made clear in the introduction/related work section.
Although my main concern is the theoretical novelty, the experimental section can be improved: it would be interesting to compare the proposed approach to other strategies for exploration in deep RL, for instance
Bellemare et al. (2016), Unifying Count-Based Exploration and Intrinsic Motivation
Tang et al. (2017), #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
Azizzadenesheli et al. (2018), Efficient Exploration through Bayesian Deep Q-Networks; and also perform experiments in small simple environments that satisfy the assumptions to check if the regret is sublinear (e.g. a continuous “grid world” with noisy transitions).
In the nonlinear case (Section 4.2) it is not clear how to follow the steps of Yang & Wang (2019) to derive a Bayesian regret bound with feature representation, since their assumption is related to low-rank MDPs instead of Gaussian processes. In addition, it would be interesting to discuss how the model would be sampled (in Algorithm 1) in this case.
Suggestions & remarks
Introduction: mention which of the cited papers proves the H√(SAT) upper bound on the Bayesian regret, and clarify whether T is the number of episodes, or H times the number of episodes.
Some definitions are missing:
The linear kernel should be defined before Theorem 1
M_k is not defined before appearing in Eq. 5
Increase the font size of the text in Figure 1.
In Algorithm 1, include what are the input parameters (e.g. σ_r, σ_f).
Some suggestions for the proof:
Write the relation between Δ_k and Δ̃_k.
Include (possibly in the appendix) more details about the arguments in the paragraph below Eq. 9. For instance, there is an argument about a bound on the information gain, which is not defined in the paper. Also, it might be useful (for the reader) to restate (in the appendix) the results by Williams & Vivarelli (2000), Srinivas et al. (2012) and Russo & Van Roy (2014) required for the proof.
Typos
Abstract: T instead of √T in the regret bound with feature representation
Page 9: Definition of MBPO, it should be “Model-Based Policy...”
Typo in integration limits in Eq. 12 (the µ′ − µ at the bottom should be µ − µ′).
ICLR | Title
Efficient Exploration for Model-based Reinforcement Learning with Continuous States and Actions
Abstract
Balancing exploration and exploitation is crucial in reinforcement learning (RL). In this paper, we study the model-based posterior sampling algorithm in continuous state-action spaces theoretically and empirically. First, we improve the regret bound: with the assumption that reward and transition functions can be modeled as Gaussian Processes with linear kernels, we develop a Bayesian regret bound of Õ(H^{3/2}d√T), where H is the episode length, d is the dimension of the state-action space, and T indicates the total time steps. Our bound can be extended to nonlinear cases as well: using linear kernels on the feature representation φ, the Bayesian regret bound becomes Õ(H^{3/2}d_φ√T), where d_φ is the dimension of the representation space. Moreover, we present MPC-PSRL, a model-based posterior sampling algorithm with model predictive control for action selection. To capture the uncertainty in models and realize posterior sampling, we use Bayesian linear regression on the penultimate layer (the feature representation layer φ) of neural networks. Empirical results show that our algorithm achieves the best sample efficiency in benchmark control tasks compared to prior model-based algorithms, and matches the asymptotic performance of model-free algorithms.
N/A
√ T ), where H is the episode length, d is the dimension of the state-
action space, and T indicates the total time steps. Our bound can be extended to nonlinear cases as well: using linear kernels on the feature representation φ, the Bayesian regret bound becomes Õ(H3/2dφ √ T ), where dφ is the dimension of the representation space. Moreover, we present MPC-PSRL, a model-based posterior sampling algorithm with model predictive control for action selection. To capture the uncertainty in models and realize posterior sampling, we use Bayesian linear regression on the penultimate layer (the feature representation layer φ) of neural networks. Empirical results show that our algorithm achieves the best sample efficiency in benchmark control tasks compared to prior model-based algorithms, and matches the asymptotic performance of model-free algorithms.
1 INTRODUCTION
In reinforcement learning (RL), an agent interacts with an unknown environment which is typically modeled as a Markov Decision Process (MDP). Efficient exploration has been one of the main challenges in RL: the agent is expected to balance between exploring unseen state-action pairs to gain more knowledge about the environment, and exploiting existing knowledge to optimize rewards in the presence of known data.
To achieve efficient exploration, Bayesian reinforcement learning is proposed, where the MDP itself is treated as a random variable with a prior distribution. This prior distribution of the MDP provides an initial uncertainty estimate of the environment, which generally contains distributions of transition dynamics and reward functions. The epistemic uncertainty (subjective uncertainty due to limited data) in reinforcement learning can be captured by posterior distributions given the data collected by the agent.
Posterior sampling reinforcement learning (PSRL), motivated by Thompson sampling in bandit problems (Thompson, 1933), serves as a provably efficient algorithm under Bayesian settings. In PSRL, the agent maintains a posterior distribution for the MDP and follows an optimal policy with respect to a single MDP sampled from the posterior distribution for interaction in each episode. Appealing results of PSRL in tabular RL were presented by both model-based (Osband et al., 2013; Osband & Van Roy, 2017) and model free approaches (Osband et al., 2019) in terms of the Bayesian regret. For H-horizon episodic RL, PSRL was proved to achieve a regret bound of Õ(H √ SAT ), where S and A denote the number of states and actions, respectively. However, in continuous state-action spaces S and A can be infinite, hence the above results do not apply.
Although PSRL in continuous spaces has also been studied in episodic RL, existing results either provide no guarantee or suffer from an exponential order of H . In this paper, we achieve the first Bayesian regret bound for posterior sampling algorithms that is near optimal in T (i.e. √ T ) and
polynomial in the episode length H for continuous state-action spaces. We will explain the limitations of previous works in Section 1.1, then summarize our approach and contributions in Section 1.2.
1.1 LIMITATIONS OF PREVIOUS BAYESIAN REGRETS IN CONTINUOUS SPACES
The exponential order of H: In model-based settings, Osband & Van Roy (2014) derive a regret bound of Õ(σ_R √(d_K(R) d_E(R) T) + E[L*] σ_P √(d_K(P) d_E(P))), where L* is a global Lipschitz constant for the future value function defined in their Eq. (3). However, L* is dependent on H: the difference between input states will propagate over H steps, which results in a term dependent on H in the value function. The authors do not mention this dependency, so there is no clear dependency on H in their regret. Moreover, they use the Lipschitz constant of the underlying value function as an upper bound on L* in the corollaries, which yields an exponential order in H. Take their Corollary 2 on linear quadratic systems as an example: the regret bound is Õ(σ C λ_1 n² √T), where λ_1 is the largest eigenvalue of the matrix Q in the optimal value function V_1(s) = s^T Q s [1]. However, the largest eigenvalue of Q is actually exponential in H [2]. Even if we change the reward function from quadratic to linear, the Lipschitz constant of the optimal value function is still exponential in H [3]. Chowdhury & Gopalan (2019) maintain the assumption of this Lipschitz property, thus there exists E[L*] with no clear dependency on H in their regret, and in their Corollary 2 on LQR they follow the same steps as Osband & Van Roy (2014) and still maintain a term with λ_1, which is actually exponential in H as discussed. Although Osband & Van Roy (2014) mention that system noise helps to smooth future values, they do not explore it even though the noise is assumed to be sub-Gaussian. The authors directly use the Lipschitz continuity of the underlying function in the analysis of LQR, thus they cannot avoid the exponential term in H. Chowdhury & Gopalan (2019) do not explore how the system noise can improve the theoretical bound either. In model-free settings, Azizzadenesheli et al. (2018) develop a regret bound of Õ(d_φ √T) using a linear function approximator in the Q-network, where d_φ is the dimension of the feature representation vector of the state-action space, but their bound is still exponential in H as mentioned in their paper.
High dimensionality: The eluder dimension of neural networks in Osband & Van Roy (2014) can be infinite, and the information gain (Srinivas et al., 2012) used in Chowdhury & Gopalan (2019) yields exponential order of the state-action spaces dimension d if nonlinear kernels are used, such as SE kernels. However, linear kernels can only model linear functions, thus the representation power is highly restricted if the polynomial order of d is desired.
1.2 OUR APPROACH AND MAIN CONTRIBUTIONS
To further improve the regret bound for PSRL in continuous spaces, especially with explicit dependency on H, we study model-based posterior sampling algorithms in episodic RL. We assume that rewards and transitions can be modeled as Gaussian Processes with linear kernels, and extend the assumption to non-linear settings utilizing features extracted by neural networks. For the linear case, we develop a Bayesian regret bound of Õ(H^{3/2}d√T). Using the feature embedding technique as mentioned in Yang & Wang (2019), we derive a bound of Õ(H^{3/2}d_φ√T). Our Bayesian regret is the best-known Bayesian regret for posterior sampling algorithms in continuous state-action spaces, and it also matches the best-known frequentist regret (Zanette et al. (2020), to be discussed in Section 2). Explicitly dependent on d, H, T, our result achieves a significant improvement in terms of the Bayesian regret of PSRL algorithms compared to previous works:
1. We significantly improved the order of H to polynomial: In our analysis, we use the property of subgaussian noise, which is already assumed in Osband & Van Roy (2014) and Chowdhury & Gopalan (2019), to develop a bound with clear polynomial dependency on H , without assuming the Lipschitz continuity of the underlying value function. More specifically, we prove Lemma 1, and use
[1] V_1 denotes the value function counting from step 1 to H within an episode, s is the initial state, the reward at the i-th step is r_i = s_i^T P s_i + a_i^T R a_i + ε_{P,i}, and the state at the (i+1)-th step is s_{i+1} = A s_i + B a_i + ε_{P,i}, i ∈ [H].
[2] Recalling the Bellman equation, we have V_i(s_i) = min_{a_i} s_i^T P s_i + a_i^T R a_i + ε_{P,i} + V_{i+1}(A s_i + B a_i + ε_{P,i}), V_{H+1}(s) = 0. Thus in V_1(s) there is a term of (A^{H−1}s)^T P (A^{H−1}s), and the eigenvalue of the matrix (A^{H−1})^T P A^{H−1} is exponential in H.
[3] For example, if r_i = s_i^T P + a_i^T R + ε_{P,i}, there would still exist a term of (A^{H−1}s)^T P in V_1(s).
it to develop a clear dependency on H , thus we can avoid handling the Lipschitz continuity of the underlying value function.
2. Lower dimensionality compared to Osband & Van Roy (2014) and Chowdhury & Gopalan (2019): We first derive results for linear kernels, and increase the representation power of the linear model by building a Bayesian linear regression model on the feature representation space instead of the original state-action space. As a result, we can use the result of linear kernels to derive a bound linear in the feature dimension. The feature dimension, which in practice is dimension of the last hidden layers in the neural networks required for learning, is much lower than exponential of the input dimension, so we avoid the exponential order of the dimension from the use of nonlinear kernels in Chowdhury & Gopalan (2019).
3. Fewer assumptions and different proof strategy compared to Chowdhury & Gopalan (2019): Although we also use kernelized MDPs like Chowdhury & Gopalan (2019), we omit their assumption A1 (Lipschitz assumption) and A2 (Regularity assumption), only use A3 (subgaussian noise). We avoid A1 since it could be derived from our Lemma 1. Moreover, We directly analyze the regret bound of PSRL using the fact that the sampled and the real unknown MDP share the same distribution conditioned on history. In contrast, Chowdhury & Gopalan (2019) first analyze UCRL (Upper confidence bound in RL) with an extra assumption A2, then transfer it to PSRL.
Empirically, we implement PSRL using Bayesian linear regression (BLR) on the penultimate layer (for feature representation) of neural networks when fitting transition and reward models. We use model predictive control (MPC,Camacho & Alba (2013)) to optimize the policy under the sampled models in each episode as an approximate solution of the sampled MDP as described in Section 5. Experiments show that our algorithm achieves more efficient exploration compared with previous model-based algorithms in control benchmark tasks.
2 RELATED WORK ON FREQUENTIST REGRETS
Besides the aforementioned works on Bayesian regret bounds, the majority of papers in efficient RL choose the non-Bayesian perspective and develop frequentist regret bounds, where the regret for any MDP M^* ∈ M is bounded and M^* ∈ M holds with high probability. Frequentist regret bounds can be expressed in the Bayesian view: for a given confidence set M, the frequentist regret bound implies an identical Bayes regret bound for any prior distribution with support on M. Note that frequentist regret is extensively studied in tabular RL (see Jaksch et al. (2010), Azar et al. (2017), and Jin et al. (2018) as examples), among which the best bound for episodic settings is Õ(H√(SAT)).
There is also a line of work that develops frequentist bounds with feature representation. Most recently, MatrixRL proposed by Yang & Wang (2019) uses low dimensional representation and achieves a regret bound of Õ(H²d_φ√T), which is the best-known frequentist bound in model-based settings. While our method is also model-based, we achieve a tighter regret bound when compared in the Bayesian view. In model-free settings, Jin et al. (2020) developed a bound of Õ(H^{3/2}d_φ^{3/2}√T). Zanette et al. (2020) further improved the regret to Õ(H^{3/2}d_φ√T) by proposing an algorithm called ELEANOR, which achieves the best-known frequentist bound in model-free settings. They showed that it is unimprovable with the help of a lower bound established in the bandit literature. Although our regret is developed in model-based settings, it matches their bound with the same order of H, d_φ and T in the Bayesian view. Moreover, their algorithm involves optimization over all MDPs in the confidence set, and thus can be computationally prohibitive. Our method is computationally tractable as it is much easier to optimize a single sampled MDP, while matching their regret bound in the Bayesian view.
3 PRELIMINARIES
3.1 PROBLEM FORMULATION
We model an episodic finite-horizon Markov Decision Process (MDP) M as {S, A, R^M, P^M, H, σ_r, σ_f, R_max, ρ}, where S ⊂ R^{d_s} and A ⊂ R^{d_a} denote the state and action spaces, respectively. Each episode with length H has an initial state distribution ρ. At time step i ∈ [1, H] within an episode, the agent observes s_i ∈ S, selects a_i ∈ A, receives a noisy reward r_i ∼ R^M(s_i, a_i) and transitions to a noisy new state s_{i+1} ∼ P^M(s_i, a_i). More specifically, r(s_i, a_i) = r̄^M(s_i, a_i) + ε_r and s_{i+1} = f^M(s_i, a_i) + ε_f, where ε_r ∼ N(0, σ_r²) and ε_f ∼ N(0, σ_f² I_{d_s}). The variances σ_r² and σ_f² are fixed to control the noise level. Without loss of generality, we assume the expected reward an agent receives at a single step is bounded: |r̄^M(s, a)| ≤ R_max, ∀s ∈ S, a ∈ A. Let µ : S → A be a deterministic policy. Here we define the value function for state s at time step i with policy µ as V^M_{µ,i}(s) = E[Σ_{j=i}^{H} r̄^M(s_j, a_j) | s_i = s], where s_{j+1} ∼ P^M(s_j, a_j) and a_j = µ(s_j). With the bounded expected reward, we have |V(s)| ≤ H R_max, ∀s. We use M^* to indicate the real unknown MDP, which includes R^* and P^*, and M^* itself is treated as a random variable. Thus, we can treat the real noiseless reward function r̄^* and transition function f^* as random processes as well. In the posterior sampling algorithm π_PS, M^k is a random sample from the posterior distribution of the real unknown MDP M^* in the kth episode, which includes the posterior samples of R^k and P^k, given the history prior to the kth episode: H_k := {s_{1,1}, a_{1,1}, r_{1,1}, · · · , s_{k−1,H}, a_{k−1,H}, r_{k−1,H}}, where s_{k,i}, a_{k,i} and r_{k,i} indicate the state, action, and reward at time step i in episode k. We define the optimal policy under M as µ_M ∈ argmax_µ V^M_{µ,i}(s) for all s ∈ S and i ∈ [H]. In particular, µ^* indicates the optimal policy under M^* and µ^k represents the optimal policy under M^k. Let ∆_k denote the regret over the kth episode:
$$\Delta_k = \int \rho(s_1)\big(V^{M^*}_{\mu^*,1}(s_1) - V^{M^*}_{\mu_k,1}(s_1)\big)\,ds_1 \qquad (1)$$
Then we can express the regret of πps up to time step T as:
$$\mathrm{Regret}(T, \pi_{ps}, M^*) := \sum_{k=1}^{\lceil T/H \rceil} \Delta_k, \qquad (2)$$
Let BayesRegret(T, πps, φ) denote the Beyesian regret of πps as defined in Osband & Van Roy (2017), where φ is the prior distribution of M∗:
BayesRegret(T, πps, φ) = E[Regret(T, πps,M∗)]. (3)
3.2 ASSUMPTIONS
Generally, we consider modeling an unknown target function g : R^d → R. We are given a set of noisy samples y = [y_1, ..., y_T]^T at points X = [x_1, ..., x_T]^T, X ⊂ D, where D is compact and convex, and y_i = g(x_i) + ε_i with ε_i ∼ N(0, σ²) i.i.d. Gaussian noise, ∀i ∈ {1, · · · , T}. We model g as a sample from a Gaussian Process GP(µ(x), K(x, x′)), specified by the mean function µ(x) = E[g(x)] and the covariance (kernel) function K(x, x′) = E[(g(x) − µ(x))(g(x′) − µ(x′))]. Let the prior distribution without any data be GP(0, K(x, x′)). Then the posterior distribution over g given X and y is also a GP with mean µ_T(x), covariance K_T(x, x′), and variance σ_T²(x): µ_T(x) = K(x, X)(K(X, X) + σ²I)^{−1}y, K_T(x, x′) = K(x, x′) − K(X, x)^T(K(X, X) + σ²I)^{−1}K(X, x), σ_T²(x) = K_T(x, x), where K(X, x) = [K(x_1, x), ..., K(x_T, x)]^T and K(X, X) = [K(x_i, x_j)]_{1≤i≤T, 1≤j≤T}.
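For illustration, these posterior formulas translate directly into the following NumPy sketch; the linear kernel is supplied as a default (the case used in Theorem 1), and all shapes and the toy data are placeholders.

```python
import numpy as np

def gp_posterior(X, y, X_star, sigma, kernel=lambda A, B: A @ B.T):
    """Posterior mean and variance of g at X_star under GP(0, K) with noise sigma^2.

    X: (T, d) training inputs, y: (T,) targets, X_star: (M, d) query points.
    The default kernel is linear, K(x, x') = x^T x'.
    """
    K = kernel(X, X) + sigma**2 * np.eye(len(X))   # K(X, X) + sigma^2 I
    K_s = kernel(X, X_star)                        # (T, M) = K(X, x*)
    alpha = np.linalg.solve(K, y)
    mean = K_s.T @ alpha                           # mu_T(x*)
    v = np.linalg.solve(K, K_s)
    var = np.diag(kernel(X_star, X_star)) - np.sum(K_s * v, axis=0)  # sigma_T^2(x*)
    return mean, var

# toy usage: a noisy linear function in 2-D (shapes only; not the paper's data)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = X @ np.array([0.3, -0.7]) + 0.05 * rng.normal(size=50)
mu_star, var_star = gp_posterior(X, y, X[:5], sigma=0.05)
```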
We model our reward function r̄M as a Gaussian Process with noise σ2r . For transition models, we treat each dimension independently: each fi(s, a), i = 1, .., dS is modeled independently as above, and with the same noise level σ2f in each dimension. Thus it corresponds to our formulation in the RL setting. Since the posterior covariance matrix is only dependent on the input rather than the target value, the distribution of each fi(s, a) shares the same covariance matrice and only differs in the mean function.
4 BAYESIAN REGRET ANALYSIS
4.1 LINEAR CASE
Theorem 1 In the RL problem formulated in Section 3.1, under the assumption of Section 3.2 with linear kernels4, we have BayesRegret(T, πps,M∗) = Õ(H3/2d √ T ), where d is the dimension of the state-action space, H is the episode length, and T is the time elapsed. 4GP with linear kernel correspond to Bayesian linear regression f(x) = wTx, where the prior distribution of the weight is w ∼ N (0,Σp).
Proof The regret in episode k can be rearranged as:
$$\Delta_k = \int \rho(s_1)\Big[\big(V^{M^*}_{\mu^*,1}(s_1) - V^{M^k}_{\mu_k,1}(s_1)\big) + \big(V^{M^k}_{\mu_k,1}(s_1) - V^{M^*}_{\mu_k,1}(s_1)\big)\Big]\,ds_1 \qquad (4)$$
Note that conditioned upon the history H_k for any k, M^k and M^* are identically distributed. Osband & Van Roy (2014) showed that V^{M^*}_{µ^*,1} − V^{M^k}_{µ_k,1} is zero in expectation, and that only the second part of the regret decomposition needs to be bounded when deriving the Bayesian regret of PSRL. Thus we can focus on the policy µ_k, the sampled M^k, and the real environment data generated by M^*. For clarity, the value function V^{M^k}_{µ_k,1} is simplified to V^k_{k,1} and V^{M^*}_{µ_k,1} to V^*_{k,1}. It suffices to derive bounds for any initial state s_1, as the regret bound will still hold through integration over the initial distribution ρ(s_1).
We can rewrite the regret from concentration via the Bellman operator (see Section 5.1 in Osband et al. (2013)):
$$
\begin{aligned}
E[\tilde\Delta_k \mid H_k] &:= E[V^k_{k,1}(s_1) - V^*_{k,1}(s_1) \mid H_k] \\
&= E\Big[\bar r^k(s_1,a_1) - \bar r^*(s_1,a_1) + \int P^k(s'|s_1,a_1) V^k_{k,2}(s')\,ds' - \int P^*(s'|s_1,a_1) V^*_{k,2}(s')\,ds' \,\Big|\, H_k\Big] \\
&= E\Big[\sum_{i=1}^{H}\big(\bar r^k(s_i,a_i) - \bar r^*(s_i,a_i)\big) + \sum_{i=1}^{H}\int \big(P^k(s'|s_i,a_i) - P^*(s'|s_i,a_i)\big) V^k_{k,i+1}(s')\,ds' \,\Big|\, H_k\Big] \\
&= E[\tilde\Delta_k(r) + \tilde\Delta_k(f) \mid H_k]
\end{aligned} \qquad (5)
$$

where a_i = µ_k(s_i), s_{i+1} ∼ P^*(s_{i+1}|s_i, a_i), Δ̃_k(r) = Σ_{i=1}^H (r̄^k(s_i, a_i) − r̄^*(s_i, a_i)), and Δ̃_k(f) = Σ_{i=1}^H ∫ (P^k(s'|s_i, a_i) − P^*(s'|s_i, a_i)) V^k_{k,i+1}(s') ds'. Thus, here (s_i, a_i) is the state-action pair that the agent encounters in the kth episode while using µ_k for interaction in the real MDP M^*. We can define V_{k,H+1} = 0 to keep consistency. Note that we cannot treat s_i and a_i as deterministic and only take the expectation directly over the random reward and transition functions. Instead, we need to bound the difference using concentration properties of the reward and transition functions modeled as Gaussian Processes (which also apply to any state-action pair), and then derive bounds on this expectation. For all i, we have ∫ (P^k(s'|s_i, a_i) − P^*(s'|s_i, a_i)) V^k_{k,i+1}(s') ds' ≤ max_s |V^k_{k,i+1}(s)| ∫ |P^k(s'|s_i, a_i) − P^*(s'|s_i, a_i)| ds' ≤ H R_max ∫ |P^k(s'|s_i, a_i) − P^*(s'|s_i, a_i)| ds'.
Now we present a lemma which enables us to derive a regret bound with explicit dependency on the episode length H .
Lemma 1 For two multivariate Gaussian distributions N(µ, σ²I) and N(µ′, σ²I) with probability density functions p_1(x) and p_2(x) respectively, x ∈ R^d,
$$\int |p_1(x) - p_2(x)|\,dx \le \sqrt{\frac{2}{\pi\sigma^2}}\,\|\mu - \mu'\|_2.$$
The proof is in Appendix A.1. Clearly, this result can also be extended to sub-Gaussian noises.
Recall that P^k(s'|s_i, a_i) = N(f^k(s_i, a_i), σ_f²I) and P^*(s'|s_i, a_i) = N(f^*(s_i, a_i), σ_f²I). By Lemma 1 we have
$$\int |P^k(s'|s_i, a_i) - P^*(s'|s_i, a_i)|\,ds' \le \sqrt{\frac{2}{\pi\sigma_f^2}}\,\|f^k(s_i, a_i) - f^*(s_i, a_i)\|_2 \qquad (6)$$
Lemma 2 (Rigollet & Hütter, 2015) Let X_1, ..., X_N be N sub-Gaussian random variables with variance σ² (not required to be independent). Then for any t > 0, P(max_{1≤i≤N} |X_i| > t) ≤ 2N e^{−t²/(2σ²)}.
Given history H_k, let f̄^k(s, a) denote the posterior mean of f^k(s, a) in episode k, and σ_k²(s, a) denote the posterior variance of f^k in each dimension. Note that f^* and f^k share the same variance in each dimension given history H_k, as described in Section 3. Considering all dimensions of the state space, by Lemma 2 we have that with probability at least 1 − δ, max_{1≤i≤d_s} |f_i^k(s, a) − f̄_i^k(s, a)| ≤ √(2σ_k²(s, a) log(2d_s/δ)). Also, we can derive an upper bound for the norm of the state difference, ||f^k(s, a) − f̄^k(s, a)||_2 ≤ √(d_s) · max_{1≤i≤d_s} |f_i^k(s, a) − f̄_i^k(s, a)|, and the same holds for ||f^*(s, a) − f̄^k(s, a)||_2 since f^* and f^k share the same posterior distribution. By the union bound, we have that with probability at least 1 − 2δ, ||f^k(s, a) − f^*(s, a)||_2 ≤ 2√(2 d_s σ_k²(s, a) log(2d_s/δ)).
Then we look at the sum of the differences over horizon H , without requiring each variable in the sum to be independent:
$$
\begin{aligned}
&P\Big(\sum_{i=1}^{H}\|f^k(s_i,a_i) - f^*(s_i,a_i)\|_2 > \sum_{i=1}^{H} 2\sqrt{2 d_s \sigma_k^2(s_i,a_i)\log\tfrac{2d_s}{\delta}}\Big) \\
&\le P\Big(\bigcup_{i=1}^{H}\Big\{\|f^k(s_i,a_i) - f^*(s_i,a_i)\|_2 > 2\sqrt{2 d_s \sigma_k^2(s_i,a_i)\log\tfrac{2d_s}{\delta}}\Big\}\Big) \\
&\le \sum_{i=1}^{H} P\Big(\|f^k(s_i,a_i) - f^*(s_i,a_i)\|_2 > 2\sqrt{2 d_s \sigma_k^2(s_i,a_i)\log\tfrac{2d_s}{\delta}}\Big)
\end{aligned} \qquad (7)
$$
Thus, with probability at least 1 − 2Hδ, we have Σ_{i=1}^H ||f^k(s_i, a_i) − f^*(s_i, a_i)||_2 ≤ Σ_{i=1}^H 2√(2 d_s σ_k²(s_i, a_i) log(2d_s/δ)). Letting δ′ = 2Hδ, we have that with probability 1 − δ, Σ_{i=1}^H ||f^k(s_i, a_i) − f^*(s_i, a_i)||_2 ≤ Σ_{i=1}^H 2√(2 d_s σ_k²(s_i, a_i) log(4H d_s/δ)) ≤ 2H√(2 d_s σ_k²(s_{k_max}, a_{k_max}) log(4H d_s/δ)), where the index k_max = arg max_i σ_k(s_i, a_i), i = 1, ..., H, in episode k. Here, since the posterior distribution is only updated every H steps, we have to use the data point with the max variance in each episode to bound the result. Similarly, using the union bound over [T/H] episodes, and letting C = √(2/(πσ_f²)), we have that with probability at least 1 − δ, Σ_{k=1}^{[T/H]} [Δ̃_k(f) | H_k] ≤ Σ_{k=1}^{[T/H]} Σ_{i=1}^H 2C H R_max ||f^k(s_i, a_i) − f^*(s_i, a_i)||_2 ≤ Σ_{k=1}^{[T/H]} 4C H² R_max √(2 d_s σ_k²(s_{k_max}, a_{k_max}) log(4T d_s/δ)).

In each episode k, let σ′_k²(s, a) denote the posterior variance given only the subset of data points {(s_{1_max}, a_{1_max}), ..., (s_{(k−1)_max}, a_{(k−1)_max})}, where each element has the max variance in the corresponding episode. By Eq. (6) in Williams & Vivarelli (2000), we know that the posterior variance decreases as the number of data points grows. Hence ∀(s, a), σ_k²(s, a) ≤ σ′_k²(s, a). By Theorem 5 in Srinivas et al. (2012), which provides a bound on the information gain, and Lemma 2 in Russo & Van Roy (2014), which bounds the sum of variances by the information gain, we have that Σ_{k=1}^{[T/H]} σ′_k²(s_{k_max}, a_{k_max}) = O((d_s + d_a) log[T/H]) for linear kernels with bounded variances. Note that the bounded variance property for linear kernels only requires the range of all state-action pairs actually encountered in M^* not to expand to infinity as T grows, which holds in general episodic MDPs.
Thus with probability 1 − δ, and letting δ = 1/T,
$$
\begin{aligned}
\sum_{k=1}^{[T/H]}[\tilde\Delta_k(f)\mid H_k] &\le \sum_{k=1}^{[T/H]} 4C H^2 R_{max}\sqrt{2 d_s \sigma_k^2(s_{k_{max}}, a_{k_{max}})\log\tfrac{4T d_s}{\delta}} \\
&\le \sum_{k=1}^{[T/H]} 8C H^2 R_{max}\sqrt{d_s \sigma'^2_k(s_{k_{max}}, a_{k_{max}})\log(2T d_s)} \\
&\le 8C H^2 R_{max}\sqrt{\sum_{k=1}^{[T/H]}\sigma'^2_k(s_{k_{max}}, a_{k_{max}})}\;\sqrt{[T/H]}\;\sqrt{d_s\log(2T d_s)} \\
&= 8C H^{3/2} R_{max}\sqrt{T}\sqrt{d_s\log(2T d_s)}\cdot\sqrt{O\big((d_s + d_a)\log[T/H]\big)} = \tilde O\big((d_s + d_a)H^{3/2}\sqrt{T}\big)
\end{aligned} \qquad (8)
$$
where Õ ignores logarithmic factors.
Therefore, E[Σ_{k=1}^{[T/H]} Δ̃_k(f) | H_k] ≤ (1 − 1/T)·Õ((d_s + d_a)H^{3/2}√T) + (1/T)·2H R_max·[T/H] = Õ(H^{3/2} d √T), where 2H R_max is the upper bound on the difference of value functions, and d = d_s + d_a. By a similar derivation, E[Σ_{k=1}^{[T/H]} Δ̃_k(r) | H_k] = Õ(√(dHT)). Finally, through the tower property we have BayesRegret(T, π_ps, M^*) = Õ(H^{3/2} d √T).
Algorithm 1 MPC-PSRL
Initialize data D with random actions for one episode
repeat
  Sample a transition model and a cost model at the beginning of each episode
  for i = 1 to H steps do
    Obtain action using MPC with planning horizon τ: a_i ∈ arg max_{a_{i:i+τ}} Σ_{t=i}^{i+τ} E[r(s_t, a_t)]
    D = D ∪ {(s_i, a_i, r_i, s_{i+1})}
  end for
  Train cost and dynamics representations φ_r and φ_f using data in D
  Update φ_r(s, a), φ_f(s, a) for all (s, a) collected
  Perform posterior update of w_r and w_f in cost and dynamics models using the updated representations φ_r(s, a), φ_f(s, a) for all (s, a) collected
until convergence
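The Python sketch below mirrors the structure of Algorithm 1; all helper functions (`collect_random_episode`, `fit_features`, `blr_posterior`, `sample_model`, `cem_plan`) and the `env` interface are hypothetical placeholders used only to show how the pieces fit together, not an implementation of the actual codebase.

```python
import numpy as np

def mpc_psrl_loop(env, H, num_episodes, rng=np.random.default_rng()):
    """Schematic outer loop of Algorithm 1; every helper below is a placeholder.

    fit_features  : trains the reward/transition networks, returns phi_r, phi_f
    blr_posterior : Bayesian linear regression update on the extracted features
    sample_model  : draws (w_r, w_f) from the posteriors -> one sampled MDP
    cem_plan      : the MPC planner of Section 5.2
    """
    data = collect_random_episode(env, H)                  # placeholder
    for _ in range(num_episodes):
        phi_r, phi_f = fit_features(data)                  # representation update
        post_r = blr_posterior(phi_r, data, target="reward")
        post_f = blr_posterior(phi_f, data, target="state_diff")
        w_r, w_f = sample_model(post_r, rng), sample_model(post_f, rng)
        s = env.reset()
        for _ in range(H):
            # sampled models: r(s,a) ~ w_r . phi_r(s,a), s' ~ s + w_f . phi_f(s,a)
            a = cem_plan(s,
                         reward_fn=lambda s_, a_: w_r @ phi_r(s_, a_),
                         dynamics_fn=lambda s_, a_: s_ + w_f @ phi_f(s_, a_),
                         act_dim=env.action_dim)
            s_next, r = env.step(a)[:2]
            data.append((s, a, r, s_next))
            s = s_next
    return data
```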
4.2 NONLINEAR CASE VIA FEATURE REPRESENTATION
We can slightly modify the previous proof to derive the bound in settings that use feature representations. We can transform the state-action pair (s, a) to φf (s, a) ∈ Rdφ as the input of the transition model , and transform the newly transitioned state s′ to ψf (s′) ∈ Rdψ as the target, then the transition model can be established with respect to this feature embedding. We further assume dψ = O(dφ) as Assumption 1 in Yang & Wang (2019). Besides, we assume dφ′ = O(dφ) in the feature representation φr(s, a) ∈ Rdφ′ , then the reward model can also be established with respect to the feature embedding. Following similar steps, we can derive a Bayesian regret of Õ(H3/2dφ √ T ).
5 ALGORITHM DESCRIPTION
In this section, we elaborate our proposed algorithm, MPC-PSRL, as shown in Algorithm 1.
5.1 PREDICTIVE MODEL
When modeling the rewards and transitions, we use features extracted from the penultimate layer of fitted neural networks, and perform Bayesian linear regression on the feature vectors to update the posterior distributions.
Feature representation: we first fit neural networks for transitions and rewards, using the same network architecture as Chua et al. (2018). Let x_i denote the state-action pair (s_i, a_i) and y_i denote the target value. Specifically, we use the reward r_i as y_i to fit rewards, and we take the difference between two consecutive states s_{i+1} − s_i as y_i to fit transitions. The penultimate layer of the fitted neural networks is extracted as the feature representation, denoted as φ_f and φ_r for transitions and rewards, respectively. Note that in the transition feature embedding, we only use one neural network to extract features of state-action pairs from the penultimate layer to serve as φ, and leave the target states without further feature representation (the general setting is discussed in Section 4.2 where feature representations are used for both inputs and outputs), so the dimension of the target in the transition model d_ψ equals d_s. Thus we have a modified regret bound of Õ(H^{3/2}√(d d_φ T)). We do not find it necessary to further extract feature representations in the target space, as it might introduce additional computational overhead. Although higher dimensionality of the hidden layers might imply better representation, we find that simply setting the width of the penultimate layer to d_φ = d_s + d_a suffices in our experiments for both reward and transition models. Note that how to optimize the dimension of the penultimate layer for more efficient feature representation deserves further exploration.
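As an illustration of reusing the penultimate layer as φ, the sketch below defines a small PyTorch MLP whose second-to-last activation is returned as the feature vector; the hidden width of 200 and the ReLU activations are illustrative choices and not the exact architecture of Chua et al. (2018).

```python
import torch
import torch.nn as nn

class DynamicsNet(nn.Module):
    """Small MLP whose penultimate activation is reused as the feature phi_f."""
    def __init__(self, in_dim, out_dim, feat_dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, 200), nn.ReLU(),
            nn.Linear(200, feat_dim), nn.ReLU(),   # penultimate layer -> phi(s, a)
        )
        self.head = nn.Linear(feat_dim, out_dim)    # predicts s_{i+1} - s_i (or r_i)

    def forward(self, x):
        return self.head(self.body(x))

    def features(self, x):
        with torch.no_grad():
            return self.body(x)

# illustrative sizes matching the 7-DOF Reacher: d_s = 17, d_a = 7, d_phi = d_s + d_a
ds, da = 17, 7
net = DynamicsNet(in_dim=ds + da, out_dim=ds, feat_dim=ds + da)
phi = net.features(torch.randn(32, ds + da))   # (32, 24) feature vectors
```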
Bayesian update and posterior sampling: here we describe the Bayesian update of transition and reward models using extracted features. Recall that Gaussian process with linear kernels is equivalent to Bayesian linear regression. By extracting the penultimate layer as feature representation φ, the target value y and the representation φ(x) could be seen as linearly related: y = w>φ(x) + , where is a zero-mean Gaussian noise with variance σ2 (which is σ2f for the transition model and σ 2 r for the reward model as defined in Section 3.1). We choose the prior distribution of weights w as zero-mean
Gaussian with covariance matrix Σp, then the posterior distribution of w is also multivariate Gaussian (Rasmussen (2003)): p(w|D) ∼ N ( σ−2A−1ΦY,A−1 ) where A = σ−2ΦΦ> + Σ−1p , Φ ∈ Rd×N is the concatenation of feature representations {φ(xi)}Ni=1, and Y ∈ RN is the concatenation of target values. At the beginning of each episode, we sample w from the posterior distribution to build the model, collect new data during the whole episode, and update the posterior distribution of w at the end of the episode using all the data collected.
Besides the posterior distribution of w, the feature representation φ is also updated in each episode with new data collected. We adopt a similar dual-update procedure as Riquelme et al. (2018): after representations for rewards and transitions are updated, feature vectors of all state-action pairs collected are re-computed. Then we apply Bayesian update on these feature vectors. See the description of Algorithm 1 for details.
5.2 PLANNING
During interaction with the environment, we use a MPC controller (Camacho & Alba (2013)) for planning. At each time step i, the controller takes state si and an action sequence ai:i+τ = {ai, ai+1, · · · , ai+τ} as the input, where τ is the planning horizon. We use transition and reward models to produce the first action ai of the sequence of optimized actions arg maxai:i+τ ∑i+τ t=i E[r(st, at)], where the expected return of a series of actions can be approximated using the mean return of several particles propagated with noises of our sampled reward and transition models. To compute the optimal action sequence, we use CEM (Botev et al. (2013)), which samples actions from a distribution closer to previous action samples with high rewards.
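A sketch of the particle-based return estimate used inside the planner is given below; `reward_fn` and `dynamics_fn` denote the models sampled at the start of the episode, and the noise levels and particle count are illustrative placeholders.

```python
import numpy as np

def expected_return(s0, actions, reward_fn, dynamics_fn,
                    sigma_r, sigma_f, n_particles=20, rng=None):
    """Mean return of an action sequence under the sampled (noisy) models.

    Each particle adds Gaussian noise N(0, sigma_r^2) / N(0, sigma_f^2 I)
    to mimic the stochastic MDP of Section 3.1.
    """
    rng = rng or np.random.default_rng()
    total = 0.0
    for _ in range(n_particles):
        s = np.array(s0, dtype=float)
        for a in actions:
            total += reward_fn(s, a) + sigma_r * rng.normal()
            s = dynamics_fn(s, a) + sigma_f * rng.normal(size=s.shape)
    return total / n_particles
```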
6 EXPERIMENTS
We compare our method with the following state-of-the-art model-based and model-free algorithms on benchmark control tasks.
Model-free: Soft Actor Critic (SAC) from Haarnoja et al. (2018) is an off-policy deep actor-critic algorithm that utilizes entropy maximization to guide exploration. Deep Deterministic Policy Gradient (DDPG) from Barth-Maron et al. (2018) is an off-policy algorithm that concurrently learns a Q-function and a policy, with a discount factor to guide exploration.
Model-based: Probabilistic Ensembles with Trajectory Sampling (PETS) from Chua et al. (2018) models the dynamics via an ensemble of probabilistic neural networks to capture epistemic uncertainty for exploration, and uses MPC for action selection, with a requirement to have access to oracle rewards for planning. Model-Based Policy Optimization (MBPO) from Janner et al. (2019) uses the same bootstrap ensemble techniques as PETS in modeling, but differs from PETS in policy optimization with a large amount of short model-generated rollouts, and can cope with environments with no oracle rewards provided. We do not compare with Gal et al. (2016), which adopts a single Bayesian neural network (BNN) with moment matching, as it is outperformed by PETS that uses an ensemble of BNNs with trajectory sampling. And we don’t compare with GP-based trajectory optimization methods with real rewards provided (Deisenroth & Rasmussen, 2011; Kamthe & Deisenroth, 2018), which are not only outperformed by PETS, but also computationally expensive and thus are limited to very small state-action spaces.
We use environments with various complexity and dimensionality for evaluation. Low-dimensional environments: continuous Cartpole (ds = 4, da = 1, H = 200, with a continuous action space compared to the classic Cartpole, which makes it harder to learn) and Pendulum Swing Up (ds = 3, da = 1, H = 200, a modified version of Pendulum where we limit the start state to make it harder for exploration). Trajectory optimization with oracle rewards in these two environments is easy and there is almost no difference in the performances for all model-based algorithms we compare, so we omit showing these learning curves. Higher dimensional environments: 7-DOF Reacher (ds = 17, da = 7, H = 150) and 7-DOF pusher (ds = 20, da = 7, H = 150) are two more challenging tasks as provided in Chua et al. (2018), where we conduct experiments both with and without true rewards, to compare with all baseline algorithms mentioned.
The learning curves of these algorithms are shown in Figure 1. When the oracle rewards are provided in Pusher and Reacher, our method outperforms PETS and MBPO: it converges more quickly with similar performance at convergence in Pusher, while in Reacher, not only does it learn faster but it also performs better at convergence. As we use the same planning method (MPC) as PETS, the results indicate that our model better captures uncertainty, which is beneficial to improving sample efficiency. When exploring in environments where both rewards and transitions are unknown, our method learns significantly faster than previous model-based and model-free methods which do not require oracle rewards. Meanwhile, it matches the performance of SAC at convergence. Moreover, the performance of our algorithm in environments with and without oracle rewards can be similar, or convergence can even be faster (see Pusher with and without rewards), indicating that our algorithm excels at exploring both rewards and transitions.
From the experimental results, it can be verified that our algorithm better captures the model uncertainty, and makes better use of uncertainty through posterior sampling. In our method, by sampling from a Bayesian linear regression on a fitted feature space, and optimizing under the same sampled MDP for the whole episode instead of re-sampling at every step, the performance of our algorithm is guaranteed from a Bayesian view as analysed in Section 4. In contrast, PETS and MBPO use bootstrap ensembles of models with a limited ensemble size to "simulate" a Bayesian model, in which the convergence of the uncertainty is not guaranteed and is highly dependent on the training of the neural network. However, our method has the limitation of using MPC, which might fail in even higher-dimensional tasks as shown in Janner et al. (2019). Incorporating policy gradient techniques for action selection might further improve the performance, and we leave it for future work.
7 CONCLUSION
In our paper, we derive a novel Bayesian regret bound for the PSRL algorithm in continuous spaces with the assumption that the true rewards and transitions (with or without feature embedding) can be modeled by a GP with linear kernels. While matching the best-known bounds in previous works from a Bayesian view, PSRL also enjoys computational tractability. Moreover, we propose MPC-PSRL for continuous environments, and experiments show that our algorithm exceeds existing model-based and model-free methods with more efficient exploration.
A APPENDIX
A.1 PROOF OF LEMMA 1
Here we provide a proof of Lemma 1.
We first prove the result in R^d with d = 1: p_1(x) ∼ N(µ, σ²), p_2(x) ∼ N(µ′, σ²); without loss of generality, assume µ′ ≥ µ. The distributions are symmetric with respect to (µ + µ′)/2, and p_1(x) = p_2(x) at x = (µ + µ′)/2. Thus the integral of the absolute difference between the pdfs of p_1 and p_2 can be simplified as twice the integral over one side:

$$\int_{-\infty}^{\infty} |p_2(x) - p_1(x)|\,dx = \frac{2}{\sqrt{2\pi\sigma^2}} \int_{\frac{\mu+\mu'}{2}}^{\infty} \Big(e^{-\frac{(x-\mu')^2}{2\sigma^2}} - e^{-\frac{(x-\mu)^2}{2\sigma^2}}\Big)\,dx \qquad (9)$$

Let z_1 = x − µ and z_2 = x − µ′; then

$$
\begin{aligned}
\frac{2}{\sqrt{2\pi\sigma^2}} \int_{\frac{\mu+\mu'}{2}}^{\infty} \Big(e^{-\frac{(x-\mu')^2}{2\sigma^2}} - e^{-\frac{(x-\mu)^2}{2\sigma^2}}\Big)\,dx
&= \sqrt{\frac{2}{\pi\sigma^2}} \int_{\frac{\mu-\mu'}{2}}^{\infty} e^{-\frac{z_2^2}{2\sigma^2}}\,dz_2 - \sqrt{\frac{2}{\pi\sigma^2}} \int_{\frac{\mu'-\mu}{2}}^{\infty} e^{-\frac{z_1^2}{2\sigma^2}}\,dz_1 \\
&= \sqrt{\frac{2}{\pi\sigma^2}} \int_{\frac{\mu-\mu'}{2}}^{\frac{\mu'-\mu}{2}} e^{-\frac{z^2}{2\sigma^2}}\,dz
= 2\sqrt{\frac{2}{\pi\sigma^2}} \int_{0}^{\frac{\mu'-\mu}{2}} e^{-\frac{z^2}{2\sigma^2}}\,dz \\
&\le 2\sqrt{\frac{2}{\pi\sigma^2}} \int_{0}^{\frac{\mu'-\mu}{2}} 1\,dz
= \sqrt{\frac{2}{\pi\sigma^2}}\,|\mu' - \mu|.
\end{aligned} \qquad (10)
$$

Now we extend the result to R^d (d ≥ 2): p_1(x) ∼ N(µ, σ²I), p_2(x) ∼ N(µ′, σ²I). We can rotate the coordinate system recursively to align the last axis with the vector µ − µ′, such that the coordinates of µ and µ′ can be written as (0, 0, · · · , 0, µ̂) and (0, 0, · · · , 0, µ̂′) respectively, with |µ̂′ − µ̂| = ‖µ − µ′‖₂. Without loss of generality, let µ̂ ≥ µ̂′.

Clearly, all points with equal distance to µ̂′ and µ̂ define a hyperplane P: x_d = (µ̂ + µ̂′)/2, on which p_1(x) = p_2(x) for all x ∈ P; more specifically, the distributions are symmetric with respect to P. Similar to the analysis in R¹:

$$
\begin{aligned}
&\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} |p_1(x) - p_2(x)|\,dx_1\cdots dx_d \\
&= \frac{2}{\sqrt{(2\pi)^d\sigma^{2d}}} \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{x_1^2}{2\sigma^2}}\cdots e^{-\frac{x_{d-1}^2}{2\sigma^2}}\, e^{-\frac{(x_d-\hat\mu)^2}{2\sigma^2}}\,dx_1\cdots dx_d \\
&\quad - \frac{2}{\sqrt{(2\pi)^d\sigma^{2d}}} \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{x_1^2}{2\sigma^2}}\cdots e^{-\frac{x_{d-1}^2}{2\sigma^2}}\, e^{-\frac{(x_d-\hat\mu')^2}{2\sigma^2}}\,dx_1\cdots dx_d \\
&= \sqrt{\frac{2}{\pi\sigma^2}}\Big(\int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu)^2}{2\sigma^2}}\,dx_d - \int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu')^2}{2\sigma^2}}\,dx_d\Big)
\end{aligned} \qquad (11)
$$

where the Gaussian integrals over x_1, …, x_{d−1} each contribute a factor of √(2πσ²). Let z_1 = x_d − µ̂ and z_2 = x_d − µ̂′; then

$$
\begin{aligned}
\int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu)^2}{2\sigma^2}}\,dx_d - \int_{\frac{\hat\mu+\hat\mu'}{2}}^{\infty} e^{-\frac{(x_d-\hat\mu')^2}{2\sigma^2}}\,dx_d
&= \int_{\frac{\hat\mu'-\hat\mu}{2}}^{\infty} e^{-\frac{z_1^2}{2\sigma^2}}\,dz_1 - \int_{\frac{\hat\mu-\hat\mu'}{2}}^{\infty} e^{-\frac{z_2^2}{2\sigma^2}}\,dz_2 \\
&= \int_{\frac{\hat\mu'-\hat\mu}{2}}^{\frac{\hat\mu-\hat\mu'}{2}} e^{-\frac{z^2}{2\sigma^2}}\,dz
= 2\int_{0}^{\frac{\hat\mu-\hat\mu'}{2}} e^{-\frac{z^2}{2\sigma^2}}\,dz \\
&\le 2\int_{0}^{\frac{\hat\mu-\hat\mu'}{2}} 1\,dz = |\hat\mu - \hat\mu'|
\end{aligned} \qquad (12)
$$

Thus \(\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} |p_1(x) - p_2(x)|\,dx_1\cdots dx_d \le \sqrt{\frac{2}{\pi\sigma^2}}\,\|\mu - \mu'\|_2\).
A.2 EXPERIMENTAL DETAILS
Here we provide hyperparameters for MBPO:
And we provide hyperparameters for MPC and neural networks in PETS:
Here are the hyperparameters of our algorithm, which are similar to those of PETS, except for the ensemble size (since we do not use ensembled models):
For SAC and DDPG, we use the open source code (https://github.com/dongminlee94/deep_rl) for implementation without changing their hyperparameters. We appreciate the authors for sharing the code! | 1. What is the main contribution of the paper in model-based reinforcement learning?
2. What are the strengths and weaknesses of the proposed algorithm, particularly in comparison to prior works like PSRL?
3. What are the major difficulties or differences in the paper's approach compared to previous research?
4. Are there any missing empirical results or comparisons that could help highlight the benefits of the proposed method?
5. How do the authors address the issue of exploration in their algorithm, and how does it impact performance?
6. Are there any typos or unclear definitions in the paper, specifically in the theorem and proof sections? | Review | Review
Review
This paper proposes a new model-based reinforcement learning algorithm named MPC-PSRL. Theoretically, the authors provide a regret analysis of the proposed algorithm. The authors also provide empirical results showing that MPC-PSRL outperforms other previous model-based RL algorithms, such as PETS or MBPO. However, it is not clear what the main contribution of this paper is. Osband & Van Roy (2014) already provide the posterior sampling RL algorithm, named PSRL, for continuous domains. If the posterior distribution of the MDP is modeled as a GP and the optimal policy is computed by MPC, then it is the same as the method in this paper. They also provide a regret analysis for continuous domains with a sub-Gaussian noise model. What are the major difficulties / differences compared to the previous work?
Questions:
Is there any reason there are no empirical results on Pendulum and Cartpole with oracle rewards?
To emphasize the effect of posterior sampling (maybe exploration), would you provide results using just the mean instead of sampling? It is not clear whether both GP modeling and exploration via posterior sampling have a significant impact on performance.
Typos:
Use \sigma_R with \sigma_r / \sigma_P with \sigma_f.
Theorem 1 on page 3: d_\phi should be d.
In the proof of Theorem 1, what is the definition of \delta_k(r) and \delta_k(f)?
score: 5 -> 6 |
ICLR | Title
Computation Reallocation for Object Detection
Abstract
The allocation of computation resources in the backbone is a crucial issue in object detection. However, the classification allocation pattern is usually adopted directly for object detectors, which is proved to be sub-optimal. In order to reallocate the engaged computation resources in a more efficient way, we present CR-NAS (Computation Reallocation Neural Architecture Search) that can learn computation reallocation strategies across different feature resolutions and spatial positions directly on the target detection dataset. A two-level reallocation space is proposed for both stage and spatial reallocation. A novel hierarchical search procedure is adopted to cope with the complex search space. We apply CR-NAS to multiple backbones and achieve consistent improvements. Our CR-ResNet50 and CR-MobileNetV2 outperform the baseline by 1.9% and 1.7% COCO AP respectively without any additional computation budget. The models discovered by CR-NAS can be equipped with other powerful detection necks/heads and be easily transferred to other datasets, e.g. PASCAL VOC, and other vision tasks, e.g. instance segmentation. Our CR-NAS can be used as a plugin to improve the performance of various networks, which is demanding.
1 INTRODUCTION
Object detection is one of the fundamental tasks in computer vision. The backbone feature extractor is usually taken directly from classification literature (Girshick, 2015; Ren et al., 2015; Lin et al., 2017a; Lu et al., 2019). However, comparing with classification, object detection aims to know not only what but also where the object is. Directly taking the backbone of classification network for object detectors is sub-optimal, which has been observed in Li et al. (2018). To address this issue, there are many approaches either manually or automatically modify the backbone network. Chen et al. (2019) proposes a neural architecture search (NAS) framework for detection backbone to avoid expert efforts and design trails. However, previous works rely on the prior knowledge for classification task, either inheriting the backbone for classification, or designing search space similar to NAS on classification. This raises a natural question: How to design an effective backbone dedicated to detection tasks?
To answer this question, we first draw a link between the Effective Receptive Field (ERF) and the computation allocation of the backbone. The ERF is only a small Gaussian-like factor of the theoretical receptive field (TRF), but it dominates the output (Luo et al., 2016). The ERF of the image classification task can be easily fulfilled, e.g. the input size is 224×224 for the ImageNet data, while the ERF of the object detection task needs more capacity to handle scale variance across the instances, e.g. the input size is 800×1333 and the sizes of objects vary from 32 to 800 for the COCO dataset. Lin et al. (2017a) allocates objects of different scales into different feature resolutions to capture the appropriate ERF in each stage. Here we conduct an experiment to study the differences between the ERFs of several FPN features. As shown in Figure 1, we notice the allocation of computation across different resolutions has a great impact on the ERF. Furthermore, appropriate computation allocation across spatial positions (Dai et al., 2017) boosts the performance of the detector by affecting the ERF.
Based on the above observation, in this paper we aim to automatically design the computation allocation of the backbone for object detectors. Different from existing detection NAS works (Ghiasi et al., 2019; Ning Wang & Shen, 2019) which achieve accuracy improvement by introducing higher computation complexity, we reallocate the engaged computation cost in a more efficient way. We propose computation reallocation NAS (CR-NAS) to search the allocation strategy directly on the detection task. A two-level reallocation space is constructed to reallocate the computation across different resolutions and spatial positions. At the stage level, we search for the best strategy to distribute the computation among different resolutions. At the operation level, we reallocate the computation by introducing a powerful search space designed specifically for object detection. The details about the search space can be found in Sec. 3.2. We propose a hierarchical search algorithm to cope with the complex search space. Typically in stage reallocation, we exploit a reusable search space to reduce the stage-level searching cost and adapt to different computational requirements.
Extensive experiments show the effectiveness of our approach. Our CR-NAS offers improvements for both fast mobile models and accurate models, such as ResNet (He et al., 2016), MobileNetV2 (Sandler et al., 2018), and ResNeXt (Xie et al., 2017). On the COCO dataset, our CR-ResNet50 and CR-MobileNetV2 can achieve 38.3% and 33.9% AP, outperforming the baseline by 1.9% and 1.7% respectively without any additional computation budget. Furthermore, we transfer our CR-ResNet and CR-MobileNetV2 into another ERF-sensitive task, instance segmentation, by using the Mask RCNN (He et al., 2017) framework. Our CR-ResNet50 and CR-MobileNetV2 yield 1.3% and 1.2% COCO segmentation AP improvements over the baseline.
To summarize, the contributions of our paper are three-fold:
• We propose computation reallocation NAS(CR-NAS) to reallocate engaged computation resources. To our knowledge, we are the first to dig inside the computation allocation across different resolution.
• We develop a two-level reallocation space and hierarchical search paradigm to cope with the complex search space. Typically in stage reallocation, we exploit a reusable model to reduce stage-level searching cost and adapt different computational requirements.
• Our CR-NAS offers significant improvements for various types of networks. The discovered models show great transferability to other detection necks/heads, e.g. NAS-FPN (Cai & Vasconcelos, 2018), other datasets, e.g. PASCAL VOC (Everingham et al., 2015), and other vision tasks, e.g. instance segmentation (He et al., 2017).
2 RELATED WORK
Neural Architecture Search (NAS) Neural architecture search focuses on automating the network architecture design, which otherwise requires great expert knowledge and tremendous trials. Early NAS approaches (Zoph & Le, 2016; Zoph et al., 2018) are computationally expensive due to the evaluation of each candidate. Recently, the weight sharing strategy (Pham et al., 2018; Liu et al., 2018; Cai et al., 2018; Guo et al., 2019) was proposed to reduce the searching cost. One-shot NAS methods (Brock et al., 2017; Bender et al., 2018; Guo et al., 2019) build a directed acyclic graph G (a.k.a. supernet) to subsume all architectures in the search space and decouple the weight training from the architecture search. These NAS works only search for the operation in a certain layer. Our work is different from them by searching for the computation allocation across different resolutions. Computation allocation across feature resolutions is an obvious issue that has not been studied by NAS. We carefully design a search space that facilitates the use of existing search methods for finding good solutions.
NAS on object detection. Several works apply NAS methods to the object detection task (Chen et al., 2019; Ning Wang & Shen, 2019; Ghiasi et al., 2019). Ghiasi et al. (2019) search for scalable feature pyramid architectures, and Ning Wang & Shen (2019) search for the feature pyramid network and the prediction heads together while fixing the architecture of the backbone CNN. These two works both introduce an additional computation budget. The search space of Chen et al. (2019) is directly inherited from the classification task, which is suboptimal for object detection. Peng et al. (2019) search for dilation rates at the channel level in the CNN backbone. These latter approaches assume a fixed number of blocks in each resolution, while we search the number of blocks in each stage, which is important for object detection and complementary to these approaches.
3 METHOD
3.1 BASIC SETTINGS
Our search method is based on Faster RCNN (Ren et al., 2015) with FPN (Lin et al., 2017a) for its excellent performance. We only reallocate the computation within the backbone, while fixing the other components for fair comparison.
For more efficient search, we adopt the idea of one-shot NAS methods (Brock et al., 2017; Bender et al., 2018; Guo et al., 2019). In one-shot NAS, a directed acyclic graph G (a.k.a. supernet) is built to subsume all architectures in the search space and is trained only once. Each architecture g is a subgraph of G and can inherit weights from the trained supernet. For a specific subgraph g ∈ G, its corresponding network can be denoted as N(g, w) with network weights w.
3.2 TWO-LEVEL ARCHITECTURE SEARCH SPACE
We propose Computation Reallocation NAS (CR-NAS) to distribute the computation resources along two dimensions: stage allocation across different resolutions, and convolution allocation across spatial positions.
3.2.1 STAGE REALLOCATION SPACE
The backbone aims to generate intermediate-level features C with increasing downsampling rates of 4×, 8×, 16×, and 32×, which can be regarded as 4 stages. The blocks in the same stage share the same spatial resolution. Note that the FLOPs of a single block in two adjacent spatial resolutions remain the same because a downsampling/pooling layer doubles the number of channels. So given the total number of blocks N of a backbone, we can reallocate the number of blocks for each stage while keeping the total FLOPs the same. Figure 2 shows our stage reallocation space. In this search space, each stage contains several branches, and each branch has a certain number of blocks. The numbers of blocks in different branches are different, corresponding to different computational budgets for the stage. For example, there are 5 branches for stage 1 in Figure 2, and the numbers of blocks for these 5 branches are, respectively, 1, 2, 3, 4, and 5. We consider the whole network as a supernet T = {T1, T2, T3, T4}, where Ti at the ith stage has Ki branches, i.e. Ti = {t_i^k | k = 1...Ki}. Then an allocation strategy can be represented as τ = [τ1, τ2, τ3, τ4], where τi denotes the number of blocks allocated to the ith stage. All blocks in the same stage have the same structure, and ∑_{i=1}^{4} τ_i = N for a network with N blocks. For example, the original ResNet101 has τ = [3, 4, 23, 3] and N = 33
residual blocks. We make the constraint that each stage has at least one convolutional block. The best allocation strategy for ResNet101 then lies among the (32 choose 3) = 4960 possible choices. Since validating a single detection architecture requires hundreds of GPU-hours, it is not realistic to find the optimal architecture by human trials.
On the other hand, we would like to learn stage reallocation strategies for different computation budgets simultaneously. Different applications require CNNs with different numbers of layers to meet different latency requirements; this is why we have ResNet18, ResNet50, ResNet101, etc. We build a search space to cover all candidate instances in a certain series, e.g. the ResNet series. After considering the trade-off between granularity and range, for the ResNet series we set the candidate numbers of blocks for T1 and T2 as {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, for T3 as {2, 3, 5, 6, 9, 11, 14, 17, 20, 23}, and for T4 as {2, 3, 4, 6, 7, 9, 11, 13, 15, 17}. The stage reallocation spaces of MobileNetV2 (Sandler et al., 2018) and ResNeXt (Xie et al., 2017) can be found in Appendix A.2.
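To make the stage-level search space concrete, the following Python sketch enumerates all FLOPs-preserving allocation strategies for a given block budget. It is only an illustrative reconstruction of the description above: the candidate sets are taken from the text, while the helper names (STAGE_CANDIDATES, enumerate_allocations) and the ResNet50 example are assumptions of ours, not part of the released method.

```python
from itertools import product

# Candidate block counts per stage for the ResNet series (values from the text).
# The variable and function names here are illustrative, not from the paper.
STAGE_CANDIDATES = [
    [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],        # T1
    [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],        # T2
    [2, 3, 5, 6, 9, 11, 14, 17, 20, 23],    # T3
    [2, 3, 4, 6, 7, 9, 11, 13, 15, 17],     # T4
]

def enumerate_allocations(total_blocks):
    """Yield every allocation tau = [t1, t2, t3, t4] whose block count matches the budget."""
    for tau in product(*STAGE_CANDIDATES):
        if sum(tau) == total_blocks:
            yield list(tau)

# Example (hypothetical): all FLOPs-preserving reallocations of a 16-block ResNet50.
allocations = list(enumerate_allocations(16))
print(len(allocations), allocations[:3])
```

Such an enumeration is what the stage reallocation search later ranks using the trained supernet, since every candidate keeps the same total block count.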
3.2.2 CONVOLUTION REALLOCATION SPACE
To reallocate the computation across spatial positions, we utilize dilated convolutions (Li et al., 2019; Li et al., 2018). Dilated convolution affects the ERF by performing convolution at sparsely sampled locations. Another useful property of dilated convolution is that dilation introduces no extra parameters or computation. We define a choice block as a basic unit that contains multiple dilation candidates and search for the best computation allocation. For the ResNet Bottleneck, we modify the center 3 × 3 convolution. For the ResNet BasicBlock, we only modify the second 3 × 3 convolution to reduce the search space and searching time. We have three candidates in our operation set O: {dilated 3 × 3 convolution with dilation rate i | i = 1, 2, 3}. Across the entire ResNet50 search space, there are therefore 3^16 ≈ 4 × 10^7 possible architectures.
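As a rough illustration of this operation-level space, the sketch below implements a choice block holding the three dilated 3 × 3 candidates in PyTorch. The class and argument names are hypothetical; the paper does not release code, so this is only a minimal sketch of the idea, assuming standard Conv2d layers with padding equal to the dilation rate so that output size, parameters and FLOPs stay constant across candidates.

```python
import torch
import torch.nn as nn

class DilatedChoiceBlock(nn.Module):
    """Choice block holding the three candidate 3x3 convolutions (dilation 1, 2, 3).

    Padding equals the dilation rate, so spatial size, parameter count and FLOPs
    are identical across candidates; only the sampling locations (and hence the
    effective receptive field) differ.
    """
    def __init__(self, channels):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d, bias=False)
            for d in (1, 2, 3)
        ])

    def forward(self, x, choice):
        # `choice` in {0, 1, 2} selects dilation rate 1, 2 or 3 for this block.
        return self.candidates[choice](x)

# Example: one choice block standing in for the middle 3x3 conv of a bottleneck.
block = DilatedChoiceBlock(channels=64)
y = block(torch.randn(1, 64, 56, 56), choice=2)   # dilation rate 3
```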
3.3 HIERARCHICAL SEARCH FOR OBJECT DETECTION
We propose a hierarchical search procedure to cope with the complex reallocation space. First, the stage space is explored to find the best computation allocation across different resolutions. Then, the operation space is explored to further improve the architecture with a better spatial allocation.
3.3.1 STAGE REALLOCATION SEARCH
To reduce the side effect of weight coupling, we adopt uniform sampling in supernet training (a.k.a. single-path one-shot) (Guo et al., 2019). After the supernet training, we can validate the allocation strategies τ ∈ T directly on the detection task. Model accuracy (COCO AP) is defined as AP_val(N(τ, w)). Given the block number constraint N, we find the best allocation strategy by the following equation:
τ* = argmax_{∑_{i=1}^{4} τ_i = N} AP_val(N(τ, w)).    (1)
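A minimal sketch of this stage reallocation search is given below. It assumes the enumerate_allocations helper from the earlier sketch and a user-supplied evaluate_ap function that assembles the sub-network for an allocation τ, lets it inherit the trained supernet weights, and returns its COCO AP on the validation split; all of these names are illustrative rather than taken from the paper.

```python
def search_best_allocation(total_blocks, supernet, val_loader, evaluate_ap):
    """Score every allocation with the given block budget and return the argmax of Eq. (1).

    `supernet`, `val_loader` and `evaluate_ap` are placeholders: evaluate_ap is
    expected to build the sub-network for `tau`, inherit the supernet weights,
    and report its validation AP without any retraining.
    """
    best_tau, best_ap = None, float("-inf")
    for tau in enumerate_allocations(total_blocks):      # helper from the earlier sketch
        ap = evaluate_ap(supernet, tau, val_loader)      # weights inherited, no retraining
        if ap > best_ap:
            best_tau, best_ap = tau, ap
    return best_tau, best_ap
```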
3.3.2 BLOCK OPERATION SEARCH
Algorithm 1: Greedy operation search algorithm
Input: Number of blocks B; possible operation sets for each block O = {O_i | i = 1, 2, ..., B}; supernet with trained weights N(O, W*); dataset for validation D_val; evaluation metric AP_val.
Output: Best architecture o*
  Initialize top-K partial architectures p = Ø
  for i = 1, 2, ..., B do
      p_extend = p × O_i            // × denotes the Cartesian product
      result = {(arch, AP) | arch ∈ p_extend, AP = evaluate(arch)}
      p = choose_topK(result)
  end
  o* = choose_top1(p)
By introducing the operation allocation space as in Sec. 3.2.2, we can reallocate the computation across spatial positions. As in the stage reallocation search, we train an operation supernet adopting random sampling in each choice block (Guo et al., 2019). For the architecture search process, previous one-shot works use random search (Brock et al., 2017; Bender et al., 2018) or evolutionary search (Guo et al., 2019). In our approach, we propose a greedy algorithm that makes sequential decisions to obtain the final result. We represent the network architecture o as a sequence of choices [o1, o2, ..., oB]. In each choice step, the top K partial architectures are maintained to shrink the search space. We evaluate each candidate operation from the first choice block to the last. The greedy operation search algorithm is shown in Algorithm 1.
The hyper-parameter K is set to 3 in our experiments. We first extend the partial architecture at the first block choice, which yields three partial architectures in p_extend. We then keep extending the top 3 partial architectures until the whole length B is reached, which means there are 3 × 3 = 9 candidate partial architectures at every subsequent block choice. For a specific partial architecture arch, we sample the operations of the unselected blocks uniformly to obtain c complete architectures, where c denotes the number of mini-batches in D_val. We validate each sampled architecture on one mini-batch and combine the results to produce evaluate(arch). We finally choose the best architecture to obtain o*.
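The following Python sketch mirrors Algorithm 1 with the K = 3 setting and the random-completion scoring described above. The function and parameter names are our own, and evaluate stands in for running one sampled architecture on a validation mini-batch; this is a schematic reconstruction, not the authors' implementation.

```python
import random

def greedy_operation_search(num_blocks, num_ops, evaluate, k=3, completions=20):
    """Greedy top-K operation search sketched after Algorithm 1.

    `evaluate` scores one complete architecture (a list of `num_blocks` operation
    indices) on a validation mini-batch; a partial architecture is scored by
    averaging over random completions of its unselected blocks.
    """
    partials = [[]]                                      # current top-K partial architectures
    for _ in range(num_blocks):
        extended = [p + [op] for p in partials for op in range(num_ops)]
        scored = []
        for arch in extended:
            remaining = num_blocks - len(arch)
            scores = [
                evaluate(arch + [random.randrange(num_ops) for _ in range(remaining)])
                for _ in range(completions)
            ]
            scored.append((sum(scores) / len(scores), arch))
        scored.sort(key=lambda item: item[0], reverse=True)
        partials = [arch for _, arch in scored[:k]]
    return partials[0]                                   # best full architecture o*
```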
4 EXPERIMENTS AND RESULTS
4.1 DATASET AND IMPLEMENTATION DETAILS
Dataset We evaluate our method on the challenging MS COCO benchmark (Lin et al., 2014). We split the 135K training images trainval135 into 130K images archtrain and 5K images archval. First, we train the supernet using archtrain and evaluate architectures using archval. After the architecture is obtained, we follow other standard detectors (Ren et al., 2015; Lin et al., 2017a) in using ImageNet (Russakovsky et al., 2015) to pre-train the weights of this architecture. The final model
is fine-tuned on the whole COCO trainval135 and validated on COCO minival. Another detection dataset, PASCAL VOC (Everingham et al., 2015), is also used. We use VOC trainval2007+trainval2012 as our training set and VOC test2007 as our validation set.
Implementation details The supernet training settings can be found in Appendix A.1. For the training of our searched models, the input images are resized to have a short side of 800 pixels or a long side of 1333 pixels. We use stochastic gradient descent (SGD) as the optimizer with 0.9 momentum and 0.0001 weight decay. For fair comparison, all our models are trained for 13 epochs, known as the 1× schedule (Girshick et al., 2018). We use multi-GPU training over 8 1080TI GPUs with a total batch size of 16. The initial learning rate is 0.00125 per image and is divided by 10 at epochs 8 and 11. Warm-up and synchronized BatchNorm (SyncBN) (Peng et al., 2018) are adopted for both the baselines and our searched models.
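For reference, the training schedule described above corresponds roughly to the following PyTorch optimizer and learning-rate setup. This is only an illustrative sketch of the stated hyper-parameters (0.00125 per image with a total batch of 16, momentum 0.9, weight decay 0.0001, decay at epochs 8 and 11); warm-up, SyncBN and the detection training loop are assumed to be handled elsewhere.

```python
import torch

def build_detection_optimizer(model, images_per_batch=16):
    """SGD and stepped learning-rate schedule for the 1x setting described above.

    The 0.00125-per-image rule gives lr = 0.02 for a total batch of 16; the rate
    is divided by 10 after epochs 8 and 11 (13 epochs in total).
    """
    lr = 0.00125 * images_per_batch
    optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                                momentum=0.9, weight_decay=0.0001)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                     milestones=[8, 11], gamma=0.1)
    return optimizer, scheduler
```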
4.2 MAIN RESULTS
4.2.1 COMPUTATION REALLOCATION PERFORMANCE
We denote an architecture using our computation reallocation by the prefix 'CR-', e.g. CR-ResNet50. Our final architectures have almost the same FLOPs as the original networks (the negligible difference in FLOPs comes from the BatchNorm and activation layers). As shown in Table 1, our CR-ResNet50 and CR-ResNet101 outperform the baselines by 1.9% and 1.6% respectively. It is worth mentioning that many milestone backbone improvements also yield only around a 1.5% gain; for example, the gain is 1.5% from ResNet50 to ResNeXt50-32x4d as indicated in Table 4. In addition, we run the baselines and searched models under the longer 2× setting (results shown in Appendix A.4), and the improvement from our approach remains consistent.
Our CR-ResNet50 and CR-ResNet101 are especially effective for large objects (3.5% and 4.8% improvement in APl). To understand these improvements, we depict the architecture sketches in Figure 4. At the stage level, our Stage CR-ResNet50 reallocates more capacity to the deep stages. This reveals that the budget in the shallow stages is redundant while the resources in the deep stages are limited, a pattern consistent with the ERF analysis in Figure 1. At the operation level, dilated convolutions with large rates tend to appear in the deep stages. We conjecture that the shallow stages need denser sampling to gather precise information, while the deep stages aim to recognize large objects through sparser sampling. The dilated convolutions in the deep stages further exploit the network's potential to detect large objects and offer an adaptive way to balance the ERF. For light backbones, our CR-ResNet18 and CR-MobileNetV2 both improve AP by 1.7% over the baselines, with all-round APs to APl improvements. For a light network, allocating the limited capacity to the deep stages is more efficient, since the discriminative features captured there can also benefit small objects in the shallow levels through the FPN top-down pathway.
4.2.2 TRANSFERABILITY VERIFICATION
Different dataset We transfer our searched models to another object detection dataset, PASCAL VOC (Everingham et al., 2015). Training details can be found in Appendix A.3. We denote the VOC metric [email protected] as AP50 for consistency. As shown in Table 2, our CR-ResNet50 and CR-ResNet101 achieve AP50 improvements of 1.0% and 0.7% compared with the already strong baselines.
Different task Segmentation is another task that is highly sensitive to the ERF (Hamaguchi et al., 2018; Wang et al., 2018). Therefore, we transfer our computation reallocation networks to the instance segmentation task using the Mask RCNN (He et al., 2017) framework. The experimental results on COCO are shown in Table 3. The instance segmentation APs of our CR-MobileNetV2, CR-ResNet50 and CR-ResNet101 outperform the corresponding baselines by 1.2%, 1.3% and 1.1% absolute AP, respectively. We also achieve bounding box AP improvements of 1.5%, 1.5% and 1.8%, respectively.
Different head/neck Our work is orthogonal to other improvements in object detection. We exploit the SOTA detector Cascade Mask RCNN (Cai & Vasconcelos, 2018) for further verification. The detector equipped with our CR-Res101 achieves 44.5% AP, better than the regular Res101 baseline of 43.3% by a significant 1.2% gain. Additionally, we evaluate replacing the original FPN with a searched NAS-FPN (Ghiasi et al., 2019) neck to strengthen our results. Res50 with a NAS-FPN neck achieves 39.6% AP, while our CR-Res50 with NAS-FPN achieves 41.0% AP using the same 1× setting. More detailed results can be found in Appendix A.4.
Table 4: COCO minival AP (%) evaluating stage reallocation performance for different networks. Res50 denotes ResNet50, similarly for Res101; ReX50 denotes ResNeXt50, similarly for ReX101.

                MobileNetV2   Res18   Res50   Res101   ReX50-32×4d   ReX101-32×4d
Baseline AP        32.2        32.1    36.4    38.6       37.9          40.6
Stage-CR AP        33.5        33.4    37.4    39.5       38.9          41.5
[Figure 5 plot omitted: detector FLOPs (G) on the x-axis versus COCO minival AP on the y-axis, comparing ResNet, ResNeXt and MobileNetV2 baselines with their stage computation reallocation (SCR-) counterparts.]
Figure 5: Detector FLOPs (G) versus AP on COCO minival. The bold lines and dotted lines are the baselines and our stage computation reallocation models (SCR-), respectively.

[Figure 6 plot omitted: ImageNet top-1 accuracy on the x-axis versus COCO minival AP on the y-axis for FLOPs-equivalent architectures at ResNet50 and ResNet101 FLOPs; the best models are highlighted at (76.5, 38.3) and (77.3, 40.2).]
Figure 6: Top1 accuracy on the ImageNet validation set versus AP on COCO minival. Each dot is a model with FLOPs equivalent to the baseline.
4.3 ANALYSIS
4.3.1 EFFECT OF STAGE REALLOCATION
Our design includes two parts: stage reallocation search and block operation search. In this section, we analyse the effectiveness of stage reallocation search alone. Table 4 shows the performance comparison between the baselines and the baselines with our stage reallocation search. From the light MobileNetV2 model to the heavy ResNeXt101, our stage reallocation brings a solid average 1.0% AP improvement. Figure 5 shows that our Stage-CR network series yields overall improvements over the baselines with a negligible difference in computation. The stage reallocation results for more models are shown in Appendix A.2. There is a clear trend of reallocating computation from shallow stages to deep stages. The intuitive explanation is that reallocating more capacity to deep stages results in a more balanced ERF, as Figure 1 shows, and enhances the ability to detect medium and large objects.
4.3.2 CORRELATIONS BETWEEN CLS. AND DET. PERFORMANCE
Often, a large AP increase can be obtained by simply replacing the backbone with a stronger network, e.g. from ResNet50 to ResNet101 and then to ResNeXt101. The underlying assumption is that a strong network performs well on both classification and detection. We further explore the performance correlation between these two tasks through extensive experiments. We plot ImageNet top-1 accuracy versus COCO AP in Figure 6 for different architectures with the same FLOPs, where each dot is a single network architecture. We find that although the correlation between the two tasks is broadly positive, better classification accuracy does not always lead to better detection accuracy. This study further highlights the gap between the two tasks.
5 CONCLUSION
In this paper, we present CR-NAS (Computation Reallocation Neural Architecture Search), which learns computation reallocation strategies across different resolutions and spatial positions. We design
a two-level reallocation space and a novel hierarchical search procedure to cope with the complex search space. Extensive experiments show the effectiveness of our approach. The discovered models have great transferability to other detection necks/heads, other datasets and other vision tasks. Our CR-NAS can be used as a plugin for other detection backbones to further boost performance under given computation resources.
A APPENDIX
A.1 SUPERNET TRAINING
Both the stage and operation supernets use exactly the same settings. The supernet training process adopts the 'pre-training and fine-tuning' paradigm. For ResNet and ResNeXt, the supernet channel distribution is [32, 64, 128, 256].
Supernet pre-training. We use ImageNet-1k for supernet pre-training. We use stochastic gradient descent (SGD) as the optimizer with 0.9 momentum and 0.0001 weight decay. The supernet is trained for 150 epochs with a batch size of 1024. To smooth the jittering in the training process, we adopt cosine learning rate decay (Loshchilov & Hutter, 2016) with an initial learning rate of 0.4. Warm-up and synchronized BN (Peng et al., 2018) are adopted to help convergence.
Supernet fine-tuning. We fine-tune the pretrained supernet on archtrain. The input images are resized to have a short side of 800 pixels or a long side of 1333 pixels. We use stochastic gradient descent (SGD) as the optimizer with 0.9 momentum and 0.0001 weight decay. The supernet is trained for 25 epochs (known as the 2× schedule (Girshick et al., 2018)). We use multi-GPU training over 8 1080TI GPUs with a total batch size of 16. The initial learning rate is 0.00125 per image and is divided by 10 at epochs 16 and 22. Warm-up and synchronized BatchNorm (SyncBN) (Peng et al., 2018) are adopted to help convergence.
A.2 REALLOCATION SETTINGS AND RESULTS
Stage allocation space For ResNeXt, the stage allocation space is exactly the same as for the ResNet series. For MobileNetV2, the original block numbers in Sandler et al. (2018) are defined by n = [1, 1, 2, 3, 4, 3, 3, 1, 1, 1]. We build our allocation space on the bottleneck operator while fixing the stem and tail components. An architecture is represented as m = [1, 1, m1, m2, m3, m4, m5, 1, 1, 1]. The allocation space is M = [M1, M2, M3, M4, M5], with M1, M2 = {1, 2, 3, 4, 5}, M3 = {3, 4, 5, 6, 7}, and M4, M5 = {2, 3, 4, 5, 6}. It is worth mentioning that the per-block computation cost differs across the stages of m because of the irregular channel widths, so we weight the block counts [m1, m2, m3, m4, m5] by [1.5, 1, 1, 0.75, 1.25].
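A small sketch of how these weights can be used to check that a MobileNetV2 reallocation stays close to the original budget is shown below. The interpretation of the weights as per-block relative costs, as well as the variable names and the candidate allocation, are our own assumptions for illustration.

```python
# Per-stage relative block costs for MobileNetV2 (the weights quoted above) and the
# searchable block counts of the original network; used here only to sanity-check
# that a candidate reallocation stays close to the original budget.
STAGE_WEIGHTS = [1.5, 1.0, 1.0, 0.75, 1.25]      # for [m1, m2, m3, m4, m5]
ORIGINAL_BLOCKS = [2, 3, 4, 3, 3]                # searchable middle of n

def weighted_cost(blocks):
    return sum(w * b for w, b in zip(STAGE_WEIGHTS, blocks))

budget = weighted_cost(ORIGINAL_BLOCKS)
candidate = [1, 2, 5, 4, 3]                      # hypothetical reallocation
print(weighted_cost(candidate), "vs budget", budget)
```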
Computation reallocation results We apply our CR-NAS in a sequential way. First, we reallocate the computation across different resolutions; the Stage-CR results are shown in Table A.2.
Then we search for the spatial allocation by adopting dilated convolutions with different rates. We encode the operation choices as follows:
[0] dilated conv with rate 1 (normal conv)
[1] dilated conv with rate 2
[2] dilated conv with rate 3
Our final model can then be represented as a series of these allocation codes.
A.3 IMPLEMENTATION DETAILS OF VOC
We use VOC trainval2007+trainval2012 to serve as our whole training set and report our results on VOC test2007. The pretrained model is adopted. The input images are resized to have a short side of 600 pixels or a long side of 1000 pixels. We use stochastic gradient descent (SGD) as the optimizer with 0.9 momentum and 0.0001 weight decay. We train all models for 18 epochs. We use multi-GPU training over 8 1080TI GPUs with a total batch size of 16. The initial learning rate is 0.00125 per image and is divided by 10 at epochs 15 and 17. Warm-up and synchronized BatchNorm (SyncBN) (Peng et al., 2018) are adopted to help convergence.
A.4 MORE EXPERIMENTS
Longer schedule The 2× schedule means training for 25 epochs in total, as indicated in Girshick et al. (2018). The initial learning rate is 0.00125 per image and is divided by 10 at epochs 16 and 22. The other training settings are exactly the same as in the 1× schedule.
Powerful detector The Cascade Mask RCNN (Cai & Vasconcelos, 2018) is a SOTA multi-stage object detector. The detector is trained for 20 epochs. The initial learning rate is 0.00125 per image and is divided by 10 at epochs 16 and 19. Warm-up and synchronized BN (Peng et al., 2018) are adopted to help convergence.
Powerful searched neck NAS-FPN (Ghiasi et al., 2019) is a powerful scalable feature pyramid architecture searched for object detection. We reimplement NAS-FPN (7 @ 384) in Faster RCNN (the original paper implements it in RetinaNet (Lin et al., 2017b)). The detector is trained under the 1× setting as described in Sec. 4.1. | 1. What is the main contribution of the paper regarding architecture search for object detection?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application to a new domain of problem?
3. Do you have any concerns about the experimental results presented in the paper?
4. How does the reviewer assess the novelty of the work compared to existing works in NAS research and object detection?
5. What are some potential improvements that could be made to the paper's writing and experimental comparisons? | Review | Review
<Strengths>
+ This paper performs architecture search for object detection, especially for the computation allocation across different resolutions. It is a new application for NAS research.
+ The proposed approach shows some marginal improvement of object detection accuracy across multiple backbones and datasets.
+ This work proposes a new formulation to apply NAS approach to object detectors, including linking between the ERF and the computation allocation of backbone and two-level hierarchical search for stages and convolution operations.
<Weakness>
1. This paper can be regarded as an engineering work for a new domain of problem with little technical novelty.
- From the perspective of NAS research, the proposed approach has little technical novelty; instead, it seems like an application of existing techniques (or even simpler ones) (e.g. to a new domain of problem - object detection.
- Given that the NAS is only applied to CNN backbones in this paper, the novelty (of proposing a new task) may be further weakened.
- The novelty of this work over existing works of “NAS on detection” (Chen et al 2019, Wang et al 2019 and Ghiasi et al 2019) is not justified.
2. Experimental results are rather weak.
- This paper only focuses on showing that the proposed method marginally improves the performance of basic backbones (MobileNet V2 and ResNet 18/50/101). However, the improvement gaps are rather marginal (about 1.0% in average as shown in Fig.5).
- This paper does not compare its performance with other SOTA detection methods but only compare with some baselines instead. However, the performances of baselines are too low; for example, in Table 1, the mAP of the ResNet101 baseline is 38.6, which is lower by about 10 than the SOTA detector with the same ResNet101+FRCNN+FPN. It may not be convincing to use the baselines that have more than 20% lower accuracy compared to SOTA and show only ~1.x improvement over it.
- Given that the proposed approach is applicable to any backbones and detectors, more experiments should be done using recent stronger baselines (including many recent tricks like RoIAlign and DCN).
- Another experimental weakness is lack of comparison with existing NAS methods on object detections such as (Chen et al 2019, Wang et al 2019 and Ghiasi et al 2019). Even though experiment settings here may be different from those of these papers, they should be compared in any reasonable ways. Moreover, the detection accuracies reported in this paper are not as good as the numbers of these papers. This paper argues that the proposed approach is complementary to these methods, so it would be good to report the results of the combined model with them.
3. The paper is written poorly.
- There are many grammatically wrong and awkward expressions. The draft should be thoroughly proofread.
<Conclusion>
Although this work shows a new application of NAS for object detection, my initial decision is ‘weak reject’ mainly due to lack of technical novelty, limited experiments and poor writing.
<Post-rebuttal comments>
Authors' responses partly resolved my concerns on the experiments. I have no objection to accepting this paper. |
1. What are the main contributions and proposed approaches in the paper on neural architecture search?
2. What are the strengths and weaknesses of the paper regarding its ideas, results, and comparisons with other works?
3. How does the reviewer assess the novelty and significance of the paper's content?
4. Are there any suggestions or recommendations for improving the paper or its research direction? | Review | Review
This paper works on neural architecture search for object detection. Two search directions are proposed: 1) searching the number of conv blocks at each resolution (or "stage"). 2) searching the dilations for each conv block. A greedy neighbor-based search algorithm is adopted. The results show healthy improvements among different network architectures. And the searched architecture also performs well on other tasks or datasets.
Overall it is a valid paper with reasonable ideas and decent results. I like the conclusion that the searched architecture also works well on other tasks. This can be a universal replacement of the regular Resnets if people are willing to switch. However, the results are not exciting enough. The baseline models are old and it is not surprising doing an architecture search can improve. It seems that the major improvements are from re-arranging the convolutional blocks (comparing Table. 4 and Table. 1), which is one of the most straightforward directions for architecture search. The improvements of adding dilation on earlier layers are not exciting. Also, the authors do not compare to any other neural architecture search methods, which makes the improvements less convincing.
I vote for a weak rejection for now, mainly based on the limited novelty. A more interesting improvement will be (manually) comparing the searched architecture for different tasks. E.g., will all tasks prefer more layers in deeper stages or does classification prefer more layers in the middle, and segmentation prefers more layers in the beginning. I will be happy to alter my rating if the authors show more exciting observations (not limited to the above direction). |
ICLR | Title
Computation Reallocation for Object Detection
Abstract
The allocation of computation resources in the backbone is a crucial issue in object detection. However, classification allocation pattern is usually adopted directly to object detector, which is proved to be sub-optimal. In order to reallocate the engaged computation resources in a more efficient way, we present CR-NAS (Computation Reallocation Neural Architecture Search) that can learn computation reallocation strategies across different feature resolution and spatial position diectly on the target detection dataset. A two-level reallocation space is proposed for both stage and spatial reallocation. A novel hierarchical search procedure is adopted to cope with the complex search space. We apply CR-NAS to multiple backbones and achieve consistent improvements. Our CR-ResNet50 and CRMobileNetV2 outperforms the baseline by 1.9% and 1.7% COCO AP respectively without any additional computation budget. The models discovered by CR-NAS can be equiped to other powerful detection neck/head and be easily transferred to other dataset, e.g. PASCAL VOC, and other vision tasks, e.g. instance segmentation. Our CR-NAS can be used as a plugin to improve the performance of various networks, which is demanding.
1 INTRODUCTION
Object detection is one of the fundamental tasks in computer vision. The backbone feature extractor is usually taken directly from classification literature (Girshick, 2015; Ren et al., 2015; Lin et al., 2017a; Lu et al., 2019). However, comparing with classification, object detection aims to know not only what but also where the object is. Directly taking the backbone of classification network for object detectors is sub-optimal, which has been observed in Li et al. (2018). To address this issue, there are many approaches either manually or automatically modify the backbone network. Chen et al. (2019) proposes a neural architecture search (NAS) framework for detection backbone to avoid expert efforts and design trails. However, previous works rely on the prior knowledge for classification task, either inheriting the backbone for classification, or designing search space similar to NAS on classification. This raises a natural question: How to design an effective backbone dedicated to detection tasks?
To answer this question, we first draw a link between the Effective Receptive Field (ERF) and the computation allocation of backbone. The ERF is only small Gaussian-like factor of theoretical receptive field (TRF), but it dominates the output (Luo et al., 2016). The ERF of image classification task can be easily fulfilled, e.g. the input size is 224×224 for the ImageNet data, while the ERF of object detection task need more capacities to handle scale variance across the instances, e.g. the input size is 800×1333 and the sizes of objects vary from 32 to 800 for the COCO dataset. Lin et al. (2017a) allocates objects of different scales into different feature resolutions to capture the appropriate ERF in each stage. Here we conduct an experiment to study the differences between the ERF of several FPN features. As shown in Figure 1, we notice the allocation of computation across different resolutions has a great impact on the ERF. Furthermore, appropriate computation allocation across spacial position (Dai et al., 2017) boost the performance of detector by affecting the ERF.
Based on the above observation, in this paper we aim to automatically design the computation allocation of the backbone for object detectors. Different from existing detection NAS works (Ghiasi et al., 2019; Ning Wang & Shen, 2019), which achieve accuracy improvements by introducing higher computational complexity, we reallocate the engaged computation cost in a more efficient way. We propose Computation Reallocation NAS (CR-NAS) to search the allocation strategy directly on the detection task. A two-level reallocation space is constructed to reallocate the computation across different resolutions and spatial positions. At the stage level, we search for the best strategy to distribute the computation among different resolutions. At the operation level, we reallocate the computation by introducing a powerful search space designed specially for object detection. The details of the search space can be found in Sec. 3.2. We propose a hierarchical search algorithm to cope with the complex search space. In particular, for stage reallocation we exploit a reusable search space to reduce the stage-level searching cost and to adapt to different computational requirements.
Extensive experiments show the effectiveness of our approach. Our CR-NAS offers improvements for both fast mobile models and accurate models, such as ResNet (He et al., 2016), MobileNetV2 (Sandler et al., 2018) and ResNeXt (Xie et al., 2017). On the COCO dataset, our CR-ResNet50 and CR-MobileNetV2 achieve 38.3% and 33.9% AP, outperforming the baselines by 1.9% and 1.7% respectively without any additional computation budget. Furthermore, we transfer our CR-ResNet and CR-MobileNetV2 to another ERF-sensitive task, instance segmentation, using the Mask RCNN (He et al., 2017) framework. Our CR-ResNet50 and CR-MobileNetV2 yield 1.3% and 1.2% COCO segmentation AP improvements over the baselines.
To summarize, the contributions of our paper are three-fold:
• We propose Computation Reallocation NAS (CR-NAS) to reallocate engaged computation resources. To our knowledge, we are the first to investigate computation allocation across different feature resolutions.
• We develop a two-level reallocation space and a hierarchical search paradigm to cope with the complex search space. In particular, for stage reallocation we exploit a reusable model to reduce the stage-level searching cost and to adapt to different computational requirements.
• Our CR-NAS offers significant improvements for various types of networks. The discovered models show great transferability to other detection necks/heads, e.g. NAS-FPN (Ghiasi et al., 2019), other datasets, e.g. PASCAL VOC (Everingham et al., 2015), and other vision tasks, e.g. instance segmentation (He et al., 2017).
2 RELATED WORK
Neural Architecture Search (NAS). Neural architecture search focuses on automating network architecture design, which otherwise requires great expert knowledge and tremendous trials. Early NAS approaches (Zoph & Le, 2016; Zoph et al., 2018) are computationally expensive due to the costly evaluation of each candidate. Recently, weight sharing strategies (Pham et al., 2018; Liu et al., 2018; Cai et al., 2018; Guo et al., 2019) have been proposed to reduce the search cost. One-shot NAS methods (Brock et al., 2017; Bender et al., 2018; Guo et al., 2019) build a directed acyclic graph G (a.k.a. supernet) to subsume all architectures in the search space and decouple weight training from architecture search. Existing NAS works only search for the operation in each layer; our work differs by searching for the computation allocation across different resolutions. Computation allocation across feature resolutions is an important issue that has not been studied by NAS. We carefully design a search space that facilitates the use of existing search methods for finding good solutions.
NAS on object detection. Several works apply NAS methods to the object detection task (Chen et al., 2019; Ning Wang & Shen, 2019; Ghiasi et al., 2019). Ghiasi et al. (2019) search for scalable feature pyramid architectures and Ning Wang & Shen (2019) search for the feature pyramid network and the prediction heads together while fixing the architecture of the backbone CNN. These two works both introduce additional computation budget. The search space of Chen et al. (2019) is directly inherited from the classification task, which is sub-optimal for object detection. Peng et al. (2019) search for the dilation rate at the channel level in the CNN backbone. These approaches assume a fixed number of blocks in each resolution, while we search the number of blocks in each stage, which is important for object detection and complementary to these approaches.
3 METHOD
3.1 BASIC SETTINGS
Our search method is based on Faster RCNN (Ren et al., 2015) with FPN (Lin et al., 2017a) for its excellent performance. We only reallocate the computation within the backbone, while fixing the other components for fair comparison.
For more efficient search, we adopt the idea of one-shot NAS methods (Brock et al., 2017; Bender et al., 2018; Guo et al., 2019). In one-shot NAS, a directed acyclic graph G (a.k.a. supernet) is built to subsume all architectures in the search space and is trained only once. Each architecture g is a subgraph of G and can inherit weights from the trained supernet. For a specific subgraph g ∈ G, its corresponding network can be denoted as N(g, w) with network weights w.
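As a minimal illustration of this weight-sharing idea (a generic sketch: the two candidate operations here are placeholders, not the search space of this paper), each supernet layer holds all candidate operations, and a sampled architecture simply indexes into them, so any subgraph can be evaluated with the already-trained shared weights:

```python
import random
import torch
import torch.nn as nn

class ChoiceLayer(nn.Module):
    """One supernet layer holding all candidate operations (placeholder choices)."""
    def __init__(self, channels):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),   # candidate 0
            nn.Conv2d(channels, channels, 5, padding=2),   # candidate 1
        ])

    def forward(self, x, choice):
        return torch.relu(self.candidates[choice](x))

class Supernet(nn.Module):
    def __init__(self, channels=8, depth=3):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.layers = nn.ModuleList(ChoiceLayer(channels) for _ in range(depth))

    def forward(self, x, arch):
        # `arch` gives one choice per layer and defines a subgraph g of G.
        x = self.stem(x)
        for layer, choice in zip(self.layers, arch):
            x = layer(x, choice)
        return x

supernet = Supernet()              # trained only once in practice
x = torch.randn(1, 3, 32, 32)
for _ in range(3):                 # different architectures reuse (inherit) the same weights
    arch = [random.randrange(2) for _ in range(3)]
    print(arch, supernet(x, arch).shape)
```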
3.2 TWO-LEVEL ARCHITECTURE SEARCH SPACE
We propose Computation Reallocation NAS (CR-NAS) to distribute the computation resources along two dimensions: stage allocation across different resolutions and convolution allocation across spatial positions.
3.2.1 STAGE REALLOCATION SPACE
The backbone aims to generate intermediate-level features C with increasing downsampling rates 4×, 8×, 16×, and 32×, which can be regarded as 4 stages. The blocks in the same stage share the same spatial resolution. Note that the FLOPs of a single block in two adjacent spatial resolutions remain the same, because a downsampling/pooling layer doubles the number of channels. So given the total number of blocks N of a backbone, we can reallocate the number of blocks in each stage while keeping the total FLOPs the same. Figure 2 shows our stage reallocation space. In this search space, each stage contains several branches, and each branch has a certain number of blocks. The numbers of blocks in different branches are different, corresponding to different computational budgets for the stage. For example, there are 5 branches for stage 1 in Figure 2, and the numbers of blocks for these 5 branches are, respectively, 1, 2, 3, 4, and 5. We consider the whole network as a supernet T = {T1, T2, T3, T4}, where Ti at the ith stage has Ki branches, i.e. Ti = {t_i^k | k = 1...Ki}. An allocation strategy can then be represented as τ = [τ1, τ2, τ3, τ4], where τi denotes the number of blocks allocated to the ith stage. All blocks in the same stage have the same structure, and ∑_{i=1}^{4} τi = N for a network with N blocks. For example, the original ResNet101 has τ = [3, 4, 23, 3] and N = 33
residual blocks. We make the constraint that each stage has at least one convolutional block. The best allocation strategy of ResNet101 is therefore among the C(32, 3) = 4960 possible choices. Since validating a single detection architecture requires hundreds of GPU-hours, it is not realistic to find the optimal architecture by human trials.
On the other hand, we would like to learn stage reallocation strategies for different computation budgets simultaneously. Different applications require CNNs with different numbers of layers to meet different latency requirements; this is why we have ResNet18, ResNet50, ResNet101, etc. We build a search space to cover all the candidate instances in a certain series, e.g. the ResNet series. After considering the trade-off between granularity and range, we set the candidate numbers of blocks for T1 and T2 to {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, for T3 to {2, 3, 5, 6, 9, 11, 14, 17, 20, 23}, and for T4 to {2, 3, 4, 6, 7, 9, 11, 13, 15, 17} for the ResNet series. The stage reallocation spaces of MobileNetV2 (Sandler et al., 2018) and ResNeXt (Xie et al., 2017) can be found in Appendix A.2.
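To make the size of this space concrete, the following sketch enumerates allocation strategies (a small illustration of the counting, not part of any released code). With no restriction other than one block per stage, ResNet101 (N = 33) admits C(32, 3) = 4960 strategies; restricting each stage to the candidate sets above shrinks the space further:

```python
from itertools import product
from math import comb

# Candidate block counts per stage for the ResNet series (values from Sec. 3.2.1).
T1 = T2 = list(range(1, 11))
T3 = [2, 3, 5, 6, 9, 11, 14, 17, 20, 23]
T4 = [2, 3, 4, 6, 7, 9, 11, 13, 15, 17]

def strategies(total_blocks):
    """All allocations tau = [t1, t2, t3, t4] drawn from the candidate sets with sum == N."""
    return [tau for tau in product(T1, T2, T3, T4) if sum(tau) == total_blocks]

# Unrestricted count: compositions of N blocks into 4 stages with >= 1 block each.
print(comb(32, 3))            # 4960 for ResNet101 (N = 33)
print(len(strategies(33)))    # candidates within the restricted sets for N = 33
print(len(strategies(16)))    # ResNet50 has N = 16 blocks ([3, 4, 6, 3])
```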
3.2.2 CONVOLUTION REALLOCATION SPACE
To reallocate the computation across spatial positions, we utilize dilated convolutions (Li et al., 2019; Li et al., 2018). Dilated convolution affects the ERF by performing convolution at sparsely sampled locations. Another desirable property of dilated convolution is that dilation introduces no extra parameters or computation. We define a choice block as a basic unit which offers multiple dilations, and we search for the best computation allocation. For the ResNet Bottleneck, we modify the center 3×3 convolution. For the ResNet BasicBlock, we only modify the second 3×3 convolution to reduce the search space and searching time. We have three candidates in our operation set O: {3×3 dilated convolution with dilation rate i | i = 1, 2, 3}. Across the entire ResNet50 search space, there are therefore 3^16 ≈ 4×10^7 possible architectures.
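A small sketch of this operation set (illustrative; the actual integration into the ResNet blocks follows the description above): the three candidates differ only in dilation, so with padding equal to the dilation rate they produce the same output size and share identical parameter counts, hence identical FLOPs:

```python
import torch
import torch.nn as nn

channels = 64
# Operation set O: 3x3 convolutions with dilation rate 1, 2, 3.
ops = [nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d, bias=False)
       for d in (1, 2, 3)]

x = torch.randn(1, channels, 56, 56)
for d, op in zip((1, 2, 3), ops):
    n_params = sum(p.numel() for p in op.parameters())
    y = op(x)
    # Same parameters and same output resolution, hence the same FLOPs, for every dilation.
    print(f"dilation={d}: params={n_params}, output={tuple(y.shape)}")
```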
3.3 HIERARCHICAL SEARCH FOR OBJECT DETECTION
We propose a hierarchical search procedure to cope with the complex reallocation space. First, the stage space is explored to find the best computation allocation across different resolutions. Then, the operation space is explored to further improve the architecture with a better spatial allocation.
3.3.1 STAGE REALLOCATION SEARCH
To reduce the side effect of weight coupling, we adopt uniform sampling in supernet training (a.k.a. single-path one-shot) (Guo et al., 2019). After the supernet training, we can validate the allocation strategies τ ∈ T directly on the target detection task. Model accuracy (COCO AP) is defined as AP_val(N(τ, w)). Given the block number constraint N, we find the best allocation strategy by the following equation:
τ* = argmax_{∑_{i=1}^{4} τ_i = N} AP_val(N(τ, w)).    (1)
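A compact sketch of this selection step follows; the evaluator below is a stub standing in for running COCO validation with weights inherited from the stage supernet, and the candidate sets are shortened for brevity:

```python
from itertools import product

# Shortened candidate sets per stage (see Sec. 3.2.1 for the full ones).
CANDIDATES = [[1, 2, 3, 4], [2, 3, 4, 5], [4, 6, 9, 11], [2, 3, 4, 6]]

def evaluate_ap(tau):
    """Stub for AP_val(N(tau, w)): in practice, build the sub-network with weights
    inherited from the trained supernet and run it on the detection validation set."""
    return -sum((t - ref) ** 2 for t, ref in zip(tau, (2, 3, 9, 4)))  # toy surrogate score

def best_allocation(n_blocks):
    feasible = [tau for tau in product(*CANDIDATES) if sum(tau) == n_blocks]  # block constraint
    return max(feasible, key=evaluate_ap)                                     # Eq. (1)

print(best_allocation(16))   # e.g. for a ResNet50-sized budget of N = 16 blocks
```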
3.3.2 BLOCK OPERATION SEARCH
Algorithm 1: Greedy operation search algorithm
Input: Number of blocks B; possible operation sets of the blocks O = {Oi | i = 1, 2, ..., B}; supernet with trained weights N(O, W*); dataset for validation Dval; evaluation metric APval.
Output: Best architecture o*
Initialize the set of top-K partial architectures p = Ø
for i = 1, 2, ..., B do
    p_extend = p × Oi                                        // × denotes the Cartesian product
    result = {(arch, AP) | arch ∈ p_extend, AP = evaluate(arch)}
    p = choose_topK(result)
end
Output: Best architecture o* = choose_top1(p).
By introducing the operation allocation space of Sec. 3.2.2, we can reallocate the computation across spatial positions. As in the stage reallocation search, we train an operation supernet by randomly sampling an operation in each choice block (Guo et al., 2019). For the architecture search process, previous one-shot works use random search (Brock et al., 2017; Bender et al., 2018) or evolutionary search (Guo et al., 2019). In our approach, we propose a greedy algorithm that makes sequential decisions to obtain the final result. We decode a network architecture o as a sequence of choices [o1, o2, ..., oB]. In each choice step, the top K partial architectures are maintained to shrink the search space. We evaluate each candidate operation from the first choice block to the last. The greedy operation search algorithm is shown in Algorithm 1.
The hyper-parameter K is set to 3 in our experiments. We first extend the partial architectures at the first choice block, which yields three partial architectures in p_extend. We then expand the top 3 partial architectures over the whole length B, which means there are 3 × 3 = 9 candidate partial architectures at each subsequent choice block. For a specific partial architecture arch, we uniformly sample the operations of the unselected blocks to form c complete architectures, where c denotes the number of mini-batches in Dval. We validate each architecture on one mini-batch and combine the results to obtain evaluate(arch). We finally choose the best architecture to obtain o*.
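The greedy procedure can be sketched as follows (a simplified, runnable version of Algorithm 1; the scoring function is a stub in place of validating a sampled completion of the partial architecture on mini-batches of Dval through the operation supernet):

```python
import random

NUM_BLOCKS = 8            # B
OPS = [0, 1, 2]           # operation indices: dilation rate 1, 2, 3
K = 3                     # beam width used in the paper
C = 20                    # number of random completions (mini-batches of Dval in practice)

def score(full_arch):
    """Stub for one mini-batch evaluation of a complete architecture on Dval."""
    return -sum((o - 1) ** 2 for o in full_arch) + random.gauss(0, 0.1)

def evaluate(partial):
    """Complete the unselected blocks uniformly at random C times and average the scores."""
    total = 0.0
    for _ in range(C):
        completion = [random.choice(OPS) for _ in range(NUM_BLOCKS - len(partial))]
        total += score(list(partial) + completion)
    return total / C

def greedy_operation_search():
    beam = [()]                                            # top-K partial architectures
    for _ in range(NUM_BLOCKS):
        extended = [p + (o,) for p in beam for o in OPS]   # Cartesian product p x O_i
        extended.sort(key=evaluate, reverse=True)
        beam = extended[:K]                                # keep the top-K
    return beam[0]                                         # o*

print(greedy_operation_search())
```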
4 EXPERIMENTS AND RESULTS
4.1 DATASET AND IMPLEMENTATION DETAILS
Dataset. We evaluate our method on the challenging MS COCO benchmark (Lin et al., 2014). We split the 135K training images trainval135 into 130K images archtrain and 5K images archval. First, we train the supernet using archtrain and evaluate architectures using archval. After the architecture is obtained, we follow other standard detectors (Ren et al., 2015; Lin et al., 2017a) in using ImageNet (Russakovsky et al., 2015) to pre-train the weights of this architecture. The final model
is fine-tuned on the whole COCO trainval135 and validated on COCO minival. Another detection dataset, PASCAL VOC (Everingham et al., 2015), is also used. We use VOC trainval2007+trainval2012 as our training set and VOC test2007 as our validation set.
Implementation details. The supernet training settings can be found in Appendix A.1. For the training of our searched models, the input images are resized to have a short side of 800 pixels or a long side of 1333 pixels. We use stochastic gradient descent (SGD) as the optimizer with 0.9 momentum and 0.0001 weight decay. For fair comparison, all our models are trained for 13 epochs, known as the 1× schedule (Girshick et al., 2018). We use multi-GPU training over 8 1080TI GPUs with a total batch size of 16. The initial learning rate is 0.00125 per image and is divided by 10 at 8 and 11 epochs. Warm-up and synchronized BatchNorm (SyncBN) (Peng et al., 2018) are adopted for both the baselines and our searched models.
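For reference, the fine-tuning recipe above can be summarized as a configuration sketch; the field names are illustrative and not tied to any particular codebase, while the values are those stated in the text:

```python
train_cfg = {
    "input": {"short_side": 800, "long_side_max": 1333},
    "optimizer": {"type": "SGD", "momentum": 0.9, "weight_decay": 1e-4},
    "schedule": {"epochs": 13,                 # the 1x schedule
                 "lr_per_image": 0.00125,      # total lr = 0.00125 * total batch size
                 "lr_decay_epochs": [8, 11],
                 "lr_decay_factor": 0.1,
                 "warmup": True},
    "hardware": {"gpus": 8, "total_batch_size": 16},
    "norm": "SyncBN",
}
```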
4.2 MAIN RESULTS
4.2.1 COMPUTATION REALLOCATION PERFORMANCE
We denote an architecture obtained by our computation reallocation with the prefix 'CR-', e.g. CR-ResNet50. Our final architectures have almost the same FLOPs as the original networks (the negligible difference in FLOPs comes from the BatchNorm and activation layers). As shown in Table 1, our CR-ResNet50 and CR-ResNet101 outperform the baselines by 1.9% and 1.6% respectively. It is worth mentioning that many milestone backbone improvements also only bring around a 1.5% gain; for example, the gain is 1.5% from ResNet50 to ResNeXt50-32x4d, as indicated in Table 4. In addition, we run the baselines and searched models under the longer 2× setting (results shown in Appendix A.4). The improvement from our approach is consistent.
Our CR-ResNet50 and CR-ResNet101 are especially effective for large objects (3.5% and 4.8% improvement in APl). To understand these improvements, we depict the architecture sketches in Figure 4. At the stage level, our Stage-CR-ResNet50 reallocates more capacity to the deep stages. This reveals that the budget in the shallow stages is redundant while the resources in the deep stages are limited, a pattern consistent with the ERF analysis in Figure 1. At the operation level, dilated convolutions with large rates tend to appear in the deep stages. We conjecture that the shallow stages need denser sampling to gather precise information, while the deep stages recognize large objects via sparser sampling. The dilated convolutions in the deep stages further exploit the network's potential to detect large objects; it is an adaptive way to balance the ERF. For light backbones, our CR-ResNet18 and CR-MobileNetV2 both improve AP by 1.7% over the baselines, with improvements across APs to APl. For light networks, allocating the limited capacity to the deep stages is more efficient, since the discriminative features captured there also benefit small objects at the shallow levels through the FPN top-down pathway.
4.2.2 TRANSFERABILITY VERIFICATION
Different dataset. We transfer our searched models to another object detection dataset, PASCAL VOC (Everingham et al., 2015). Training details can be found in Appendix A.3. We denote the VOC metric [email protected] as AP50 for consistency. As shown in Table 2, our CR-ResNet50 and CR-ResNet101 achieve AP50 improvements of 1.0% and 0.7% compared with the already strong baselines.
Different task. Segmentation is another task that is highly sensitive to the ERF (Hamaguchi et al., 2018; Wang et al., 2018). Therefore, we transfer our computation reallocation networks to the instance segmentation task using the Mask RCNN (He et al., 2017) framework. The experimental results on COCO are shown in Table 3. The instance segmentation AP of our CR-MobileNetV2, CR-ResNet50 and CR-ResNet101 outperforms the baselines by 1.2%, 1.3% and 1.1% absolute AP respectively. We also achieve bounding box AP improvements of 1.5%, 1.5% and 1.8% respectively.
Different head/neck. Our work is orthogonal to other improvements in object detection. We adopt the strong Cascade Mask RCNN (Cai & Vasconcelos, 2018) detector for further verification. The detector equipped with our CR-ResNet101 achieves 44.5% AP, better than the regular ResNet101 baseline at 43.3% by a significant 1.2% gain. Additionally, we evaluate replacing the original FPN with a searched NAS-FPN (Ghiasi et al., 2019) neck to strengthen our results. ResNet50 with the NAS-FPN neck achieves 39.6% AP, while our CR-ResNet50 with NAS-FPN achieves 41.0% AP under the same 1× setting. More detailed results can be found in Appendix A.4.
Table 4: COCO minival AP (%) evaluating stage reallocation performance for different networks. Res50 denotes ResNet50, similarly for Res101. ReX50 denotes ResNeXt50, similarly for ReX101.
              MobileNetV2   Res18   Res50   Res101   ReX50-32×4d   ReX101-32×4d
Baseline AP   32.2          32.1    36.4    38.6     37.9          40.6
Stage-CR AP   33.5          33.4    37.4    39.5     38.9          41.5
Figure 5: Detector FLOPs (G) versus AP on COCO minival. The bold lines and dotted lines are the baselines and our stage computation reallocation models (SCR-) respectively.
Figure 6: Top-1 accuracy on the ImageNet validation set versus AP on COCO minival. Each dot is a model with FLOPs equivalent to the baseline.
4.3 ANALYSIS
4.3.1 EFFECT OF STAGE REALLOCATION
Our design includes two parts: stage reallocation search and block operation search. In this section, we analyze the effectiveness of stage reallocation search alone. Table 4 shows the performance comparison between the baselines and the baselines with our stage reallocation search. From the light MobileNetV2 model to the heavy ResNeXt101, our stage reallocation brings a solid average 1.0% AP improvement. Figure 5 shows that our Stage-CR network series yields overall improvements over the baselines with negligible differences in computation. The stage reallocation results for more models are shown in Appendix A.2. There is a trend of reallocating computation from the shallow stages to the deep stages. An intuitive explanation is that reallocating more capacity to the deep stages results in a more balanced ERF, as Figure 1 shows, and enhances the ability to detect medium and large objects.
4.3.2 CORRELATIONS BETWEEN CLS. AND DET. PERFORMANCE
Often, a large AP increase can be obtained by simply replacing the backbone with a stronger network, e.g. from ResNet50 to ResNet101 and then to ResNeXt101. The underlying assumption is that a strong network performs well on both classification and detection tasks. We further explore the performance correlation between these two tasks through extensive experiments. We plot ImageNet top-1 accuracy versus COCO AP in Figure 6 for different architectures of the same FLOPs; each dot is a single network architecture. Although the performance correlation between the two tasks is broadly positive, better classification accuracy does not always lead to better detection accuracy. This study further shows the gap between the two tasks.
5 CONCLUSION
In this paper, we present CR-NAS (Computation Reallocation Neural Architecture Search), which learns computation reallocation strategies across different resolutions and spatial positions. We design
a two-level reallocation space and a novel hierarchical search procedure to cope with the complex search space. Extensive experiments show the effectiveness of our approach. The discovered models show great transferability to other detection necks/heads, other datasets and other vision tasks. Our CR-NAS can be used as a plugin for other detection backbones to further boost performance under a given computation budget.
A APPENDIX
A.1 SUPERNET TRAINING
Both the stage and operation supernets use exactly the same settings. The supernet training process adopts the 'pre-training and fine-tuning' paradigm. For ResNet and ResNeXt, the supernet channel distribution is [32, 64, 128, 256].
Supernet pre-training. We use ImageNet-1k for supernet pre-training. We use stochastic gradient descent (SGD) as the optimizer with 0.9 momentum and 0.0001 weight decay. The supernet is trained for 150 epochs with a batch size of 1024. To smooth the jittering in the training process, we adopt cosine learning rate decay (Loshchilov & Hutter, 2016) with an initial learning rate of 0.4. Warm-up and synchronized BatchNorm (Peng et al., 2018) are adopted to help convergence.
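The cosine decay referenced here follows the standard form of Loshchilov & Hutter (2016); a minimal sketch is shown below, where the warm-up handling is simplified and the warm-up length is an assumption, since it is not specified above:

```python
import math

def cosine_lr(epoch, total_epochs=150, base_lr=0.4, warmup_epochs=5):
    """Cosine learning-rate decay with a short linear warm-up (warm-up length assumed)."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

print([round(cosine_lr(e), 3) for e in (0, 5, 75, 149)])
```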
Supernet fine-tuning. We fine-tune the pre-trained supernet on archtrain. The input images are resized to have a short side of 800 pixels or a long side of 1333 pixels. We use stochastic gradient descent (SGD) as the optimizer with 0.9 momentum and 0.0001 weight decay. The supernet is trained for 25 epochs (known as the 2× schedule (Girshick et al., 2018)). We use multi-GPU training over 8 1080TI GPUs with a total batch size of 16. The initial learning rate is 0.00125 per image and is divided by 10 at 16 and 22 epochs. Warm-up and synchronized BatchNorm (SyncBN) (Peng et al., 2018) are adopted to help convergence.
A.2 REALLOCATION SETTINGS AND RESULTS
Stage allocation space. For ResNeXt, the stage allocation space is exactly the same as for the ResNet series. For MobileNetV2, the original block numbers in Sandler et al. (2018) are defined by n = [1, 1, 2, 3, 4, 3, 3, 1, 1, 1]. We build our allocation space on the bottleneck operators by fixing the stem and tail components. An architecture is represented as m = [1, 1, m1, m2, m3, m4, m5, 1, 1, 1]. The allocation space is M = [M1, M2, M3, M4, M5], with M1, M2 = {1, 2, 3, 4, 5}, M3 = {3, 4, 5, 6, 7}, and M4, M5 = {2, 3, 4, 5, 6}. It is worth mentioning that the computation cost of different stages of m is not exactly the same because of the irregular channel widths, so we weight the stages as [1.5, 1, 1, 0.75, 1.25] for [m1, m2, m3, m4, m5].
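The sketch below illustrates how these stage weights can keep candidate MobileNetV2 allocations at (approximately) the original cost. The original allocation [m1..m5] = [2, 3, 4, 3, 3] follows from n above; treating "equal weighted block count" as the FLOPs constraint is our reading of the text rather than a stated rule:

```python
from itertools import product

WEIGHTS = [1.5, 1.0, 1.0, 0.75, 1.25]           # per-stage cost weights for [m1..m5]
SPACE = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5],       # M1, M2
         [3, 4, 5, 6, 7],                        # M3
         [2, 3, 4, 5, 6], [2, 3, 4, 5, 6]]       # M4, M5

def weighted_cost(m):
    return sum(w * b for w, b in zip(WEIGHTS, m))

original = [2, 3, 4, 3, 3]                       # bottleneck block counts from n above
budget = weighted_cost(original)                 # = 16.0
candidates = [m for m in product(*SPACE)
              if abs(weighted_cost(m) - budget) < 1e-6]   # same weighted cost as the original
print(budget, len(candidates))
print(candidates[:5])
```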
Computation reallocation results. We apply our CR-NAS in a sequential way. First, we reallocate the computation across different resolutions; the Stage-CR results are shown in Table A.2.
Then we search for the spatial allocation by adopting dilated convolutions with different rates. We denote the operation codes as follows:
[0] dilated conv with rate 1 (normal conv)
[1] dilated conv with rate 2
[2] dilated conv with rate 3
Our final model can be represented as a series of such allocation codes.
A.3 IMPLEMENTATION DETAILS OF VOC
We use VOC trainval2007+trainval2012 as our whole training set and report results on VOC test2007. The pre-trained model is adopted. The input images are resized to have a short side of 600 pixels or a long side of 1000 pixels. We use stochastic gradient descent (SGD) as the optimizer with 0.9 momentum and 0.0001 weight decay. We train all models for 18 epochs. We use multi-GPU training over 8 1080TI GPUs with a total batch size of 16. The initial learning rate is 0.00125 per image and is divided by 10 at 15 and 17 epochs. Warm-up and synchronized BatchNorm (SyncBN) (Peng et al., 2018) are adopted to help convergence.
A.4 MORE EXPERIMENTS
Longer schedule. The 2× schedule means training for 25 epochs in total, as indicated in Girshick et al. (2018). The initial learning rate is 0.00125 per image and is divided by 10 at 16 and 22 epochs. The other training settings are exactly the same as in the 1× schedule.
Powerful detector. Cascade Mask RCNN (Cai & Vasconcelos, 2018) is a state-of-the-art multi-stage object detector. The detector is trained for 20 epochs. The initial learning rate is 0.00125 per image and is divided by 10 at 16 and 19 epochs. Warm-up and synchronized BatchNorm (Peng et al., 2018) are adopted to help convergence.
Powerful searched neck. NAS-FPN (Ghiasi et al., 2019) is a powerful, scalable feature pyramid architecture searched for object detection. We reimplement NAS-FPN (7 @ 384) in Faster RCNN (the original paper implements it in RetinaNet (Lin et al., 2017b)). The detector is trained under the 1× setting as described in Sec. 4.1. | 1. What is the novelty of the paper's approach to neural architecture search for object detection?
2. What are the strengths and weaknesses of the proposed method, particularly in its ability to improve detection performance and balance effective receptive fields?
3. Do you have any concerns regarding the paper's results and their significance compared to prior works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
The paper attempts to apply neural architecture search (NAS) to re-arrange, or re-allocate, the network backbone blocks and the convolution filters for object detection. The search space is two-fold: 1) the network is allowed to search over the allocation of different numbers of blocks in the backbone (e.g. ResNet, MobileNet); 2) the network is allowed to choose the dilation of each block. A one-shot NAS method is adopted for efficient search. After search, the model is shown to have 1) better AP results; and 2) a more balanced effective receptive field (ERF).
+ I am not aware of any work that performs search on backbone architectures for object detection yet. So the idea itself is novel;
+ The visualization of the ERF is interesting -- it reveals that the ERF is more balanced after searching.
- My biggest concern is in the results. It seems that for Faster R-CNN with FPN, the detection results should be higher in general (e.g. R-50-FPN should at least give ~37 AP with 1x training, and can reach 38 if it trains longer -- the same as CR-R-50-FPN in Table 1). Therefore I am not fully convinced that the searched results are obtaining meaningful gains -- maybe a result that trains longer can help here.
- Related -- I think while the idea is interesting, the limited improvement is hurting the significance of the work. In fact, to me the most important result would be Fig 5, where it compares the speed/accuracy trade-off (directly comparing accuracy is meaningless unless the paper reaches state-of-the-art -- which is around 50 now); however, again no significant gains. Here it is because many "improvements" have been proposed on top of the Faster/Mask R-CNN baseline.
- (Minor) I am not sure the computation of possible choices 33^3 is accurate for the search space, because some of these 33^3 blocks are identical and therefore redundant.
- 4.3.2 is a bit misleading. At least improving the backbone helps improve object detection performance (as far as I know); the plot also shows quite a bit of correlation between classification and detection performance -- please report at least the correlation for the points on the figure.
Question:
* From Fig 4, it seems that for baseline R-50, it would be best to allocate more computation to the later stages (e.g. the 1st one only has 3, and the last has 7). Is it true for other search results? Is it true for other heads (instead of FPN)? Are there intuitive explanations for that?
* Also from Fig 4, it seems the network tend to favor more dilated convolutions toward the end? Does it have something to do with the ERF balancing?
Despite the concerns, I am still in favor of accepting the paper, as the paper considers both accuracy and speed for detection, and applying NAS to this kind of search reveals some interesting patterns (please answer the questions above). |
1. How does the proposed method reallocate computation resources, and how effective is it compared to other methods?
2. How does the approach handle neural network training instability and differences in output network architectures?
3. Can the authors provide more analysis and intuition behind the final best network architecture and its improvements over other networks?
4. Why do different NAS algorithms produce different networks on different datasets, and what factors contribute to this variation?
5. Can the authors explain the choice of modifying only certain conv layers in ResNest BasicBlock and ResNet Bottleneck?
6. Does the value of hyperparameter K have a significant impact on performance, and if so, how should it be chosen?
7. Can the authors elaborate on their transfer learning experiments and any observed differences between training NAS on VOC versus using a fixed network architecture on VOC? | Review | Review
This paper describes a neural architecture search method for computation resource allocation across feature resolutions in object detection. A two-level reallocation space is proposed for both stage and spatial reallocation. The experimental results show quite nice improvements on several standard datasets.
This is a great, well written paper overall. The design and experiment settings are well described with details. In short, this is a perfect paper that I enjoy reading.
I only have very small questions and suggestions to this paper.
The paper claims the approach is able to reallocate the engaged computation resources in a more efficient way. If I did not miss anything, the paper only shows related experiments in Figure 5 and Figure 6 with corresponding descriptions in 4.3.2. I hope the authors could provide more details on these two figures with more analysis. Personally, I think more analysis of computational effectiveness would make the paper more attractive.
We all know that neural network training may not be very stable in some settings. One thing I am curious about in this paper is whether the output network architectures from different training runs are always the same. If they are not, can you compare the differences?
I am also curious whether the authors could give some intuition about the architecture of the final best network. In other words, we want to know why the final network is better than the other networks. I read Figure 4, Table 5 and Table 6, but I really cannot understand why those networks are that 'good'. Maybe we can find some clues by answering the last paragraph.
Following the last question, we also find out that the same NAS algorithm produces different networks on different data sets. Is it because of the data set settings, or because of the content of the data sets or because of network randomness? What are your intuitions?
A detailed question on 3.2.2: why do you only modify the second 3x3 conv in the ResNet BasicBlock and only the center 3x3 conv in the ResNet Bottleneck?
Does the hyperparameter K in 3.3.2 matter a lot (e.g., 4 vs. 5)?
The 4.2.2 "transfer-ability verification" is a very nice section. Do you train NAS on VOC or only a fixed network architecture on VOC? If you did both, what is the performance difference? |
ICLR | Title
MeshMVS: Multi-view Stereo Guided Mesh Reconstruction
Abstract
Deep learning based 3D shape generation methods generally utilize latent features extracted from color images to encode the objects’ semantics and guide the shape generation process. These color image semantics only implicitly encode 3D information, potentially limiting the accuracy of the generated shapes. In this paper we propose a multi-view mesh generation method which incorporates geometry information in the color images explicitly by using the features from intermediate 2.5D depth representations of the input images and regularizing the 3D shapes against these depth images. Our system first predicts a coarse 3D volume from the color images by probabilistically merging voxel occupancy grids from individual views. Depth images corresponding to the multi-view color images are predicted which along with the rendered depth images of the coarse shape are used as a contrastive input whose features guide the refinement of the coarse shape through a series of graph convolution networks. Attention-based multi-view feature pooling is proposed to fuse the contrastive depth features from different viewpoints which are fed to the graph convolution networks. We validate the proposed multi-view mesh generation method on ShapeNet, where we obtain a significant improvement with 34% decrease in chamfer distance to ground truth and 14% increase in the F1-score compared with the state-of-the-art multi-view shape generation method.
1 INTRODUCTION
3D shape generation is a long-standing research problem in computer vision and computer graphics with applications in autonomous driving, augmented reality, etc. Conventional approaches mainly leverage multi-view geometry based on stereo correspondences between images but are restricted by the coverage provided by the input views. With the availability of large-scale 3D shape datasets and the success of deep learning in several computer vision tasks, 3D representations such as voxel grid Choy et al. (2016); Tulsiani et al. (2017); Yan et al. (2016) and point cloud Yang et al. (2018); Fan et al. (2017) have been explored for single-view 3D reconstruction. Among them, triangle mesh representation has received the most attention as it has various desirable properties for a wide range of applications and is capable of modeling detailed geometry without high memory requirement. Single-view 3D reconstruction methods Wang et al. (2018); Huang et al. (2015); Kar et al. (2015); Su et al. (2014) generate the 3D shape from merely a single color image but suffer from occlusion and limited visibility which leads to low quality reconstructions in the unseen areas. Multi-view methods Wen et al. (2019); Choy et al. (2016); Kar et al. (2017); Gwak et al. (2017) extend the input to images from different viewpoints which provides more visual information and improves the accuracy of the generated shapes. Recent work in multi-view mesh reconstruction Wen et al. (2019) introduces a multi-view deformation network using perceptual feature from each color image for refining the meshes generated by Pixel2Mesh Wang et al. (2018). Although promising results were obtained, this method relies on perceptual features from color images which do not explicitly encode the objects’ geometry and could restrict the accuracy of the 3D models.
In this work, we present a novel multi-view mesh generation method where we start by predicting coarse volumetric occupancy grid representations for the color images of each input viewpoint independently using a shared fully convolutional network which are merged into a single voxel grid in a probabilistic fashion followed by cubify Gkioxari et al. (2019) operation to convert it to a triangle
mesh. We then use a Graph Convolutional Network (GCN) Scarselli et al. (2008); Wang et al. (2018) to fine-tune the cubified voxel grid in a coarse-to-fine manner. The GCN refines the coarse mesh by using the feature vector of each graph node (mesh vertex) obtained by projecting the vertices onto the 2D contrastive depth features. The contrastive depth features are extracted from the rendered depth maps of the current mesh and the depth maps predicted by a multi-view stereo network. We also propose an attention-based method to fuse features from multiple views that can learn the importance of different views for each of the mesh vertices. Constraints between the intermediate refined meshes from the GCN and the predicted depth maps of different viewpoints further improve the final mesh quality. By employing multi-view voxel grid generation and refining it using geometry information from both the current mesh (through the rendered depth maps) and the predicted depth maps, we are able to generate high-quality meshes. We validate our method on the ShapeNet Chang et al. (2015) benchmark, where it achieves the best performance among all previous multi-view and single-view mesh generation methods.
2 RELATED WORK
2.1 TRADITIONAL SHAPE GENERATION METHODS
3D model generation has traditionally been tackled using multi-view geometry principles. Among them, structure-from-motion (SfM) Schonberger & Frahm (2016); Agarwal et al. (2011); Cui & Tan (2015); Cui et al. (2017) and simultaneous localization and mapping (SLAM) Cadena et al. (2016); Mur-Artal et al. (2015); Engel et al. (2014); Whelan et al. (2015) are popular techniques that perform 3D reconstruction and camera pose estimation at the same time. These methods extract local image features, match them across images and use the matches to estimate camera poses and 3D geometry. Closer to our problem setup, multi-view stereo methods infer 3D geometry from images with known camera parameters. Volumetric methods Kar et al. (2017); Kutulakos & Seitz (2000); Seitz & Dyer (1999) predict voxel grid representations of objects by estimating the relationship between each voxel and the object surfaces. Point cloud based methods Furukawa & Ponce (2009); Lhuillier & Quan (2005) start with a sparse point cloud and gradually increase the density of points to obtain a final dense point cloud of the object. Durou et al. (2008); Zhang et al. (1999); Favaro & Soatto (2005) use shading, texture and defocus cues to reason about visible parts of the object and infer its 3D geometry. While the results of these works are impressive in terms of quality and completeness of reconstruction,
they still struggle with poorly textured and reflective surfaces and require carefully selected input views.
2.2 DEEP SHAPE GENERATION METHODS
Deep learning based approaches can learn to infer 3D structure from training data and can be robust against poorly textured and reflective surfaces as well as limited and arbitrarily selected input views. These methods can be categorized into single view and multi-view methods. Huang et al. (2015); Su et al. (2014) use shape component retrieval and deformation from a large dataset for single-view 3D shape generation. Kurenkov et al. (2018) extend this idea by introducing free-form deformation networks on retrieved object templates from a database. Some work learn shape deformation from ground truth foreground masks of 2D images Kar et al. (2015); Yan et al. (2016); Tulsiani et al. (2017). Recurrent Neural Networks (RNN) based methods Choy et al. (2016); Kar et al. (2017); Gwak et al. (2017) are another popular solution to solve this problem. Gwak et al. (2017); Lin et al. (2019) introduce image silhouettes along with adversarial multi-view constraints and optimize object mesh models using multi-view photometric constraints. Predicting mesh directly from color images was proposed in Wang et al. (2018); Wickramasinghe et al. (2019); Pan et al. (2019); Wen et al. (2019); Gkioxari et al. (2019); Tang et al. (2019). DR-KFS Jin et al. (2019) introduces a differentiable visual similarity metric while SeqXY2SeqZ Han et al. (2020) represents 3D shapes using a set of 2D voxel tubes for shape reconstruction. Front2Back Yao et al. (2020) generates 3D shapes by fusing predicted depth and normal images and DV-Net Jia et al. (2020) predicts dense object point clouds using dual-view RGB images with a gated control network to fuse point clouds from the two views. FoldingNet Yang et al. (2018) learns to reconstruct arbitrary point clouds from a single 2D grid. AtlasNet Groueix et al. (2018) use learned parametric representation while Mescheder et al. (2019); Park et al. (2019); Liu et al. (2019b;a); Murez et al. (2020) employ implicit surface representation to reconstruct 3D shapes.
2.3 DEPTH ESTIMATION
Compared to 3D shape generation, depth prediction is an easier problem formulation since it simplifies the task to per-view depth map estimation. Traditional methods Campbell et al. (2008); Galliani et al. (2015); Schönberger et al. (2016) use multi-view stereo principles for depth prediction. Deep learning based multi-view stereo depth estimation was first introduced in Hartmann et al. (2017) where a learned cost metric is used to estimate patch similarities. DeepMVS Huang et al. (2018) warps multi-view images to 3D space and then applies deep networks for regularization and aggregation to estimate depth images. Learned 3D cost volume based depth prediction was proposed in MVSNet Yao et al. (2018) where a 3 dimensional cost volume is built using homographically warped 2D features from multi-view images and 3D CNNs are used for cost regularization and depth regression. This idea was further extended by Chen et al. (2019); Luo et al. (2019); Gu et al. (2019); Yao et al. (2019).
3 METHODOLOGY
Figure 1 shows the architecture of the proposed system which takes as input multi-view color images of an object with known poses and outputs a triangle mesh representing the surface of the object.
3.1 MULTI-VIEW VOXEL GRID PREDICTION
Single-view Voxel Grid Prediction The single-view voxel branch consists of a ResNet feature extractor and a fully convolutional voxel grid prediction network. It generates the coarse initial shape of an object from one viewpoint as a voxel occupancy grid using a color image. Here, we set the resolution of the generated voxel occupancy grid to 32 × 32 × 32. The voxel prediction networks for all viewpoints share the same weights.
Probabilistic Occupancy Grid Merging A voxel occupancy grid predicted from a single viewpoint suffers from occlusion and limited visibility. In order to fuse voxel grids from different viewpoints, we propose a probabilistic occupancy grid merging method which merges the voxel grids from each input viewpoint probabilistically to obtain the final voxel grid output. This allows occluded regions in one view to be estimated from other views where those regions are visible, as well as increasing the confidence of prediction in overlapping regions. The occupancy probability of each voxel is represented by p(x), which is converted to log-odds (logit):
l(x) = log( p(x) / (1 − p(x)) ) (1)
A Bayesian update on the probabilities reduces to a simple summation of log-odds Konolige (1997). Hence, the multi-view log-odds of a voxel is given by:
l(x) = l1(x) + l2(x) + ...+ ln(x) (2)
where li is the voxel’s log-odds in view i and n is the number of input views. The final occupancy probability p(x) is obtained by applying the inverse of Equation (1), which is the sigmoid function.
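For illustration, the merge described by Equations (1)-(2) can be written in a few lines of PyTorch; the grid resolution and the 0.5 occupancy threshold below are illustrative assumptions rather than details taken from our implementation.

```python
import torch

def merge_voxel_grids(per_view_probs, eps=1e-6):
    """Probabilistically merge per-view occupancy grids.
    per_view_probs: (n_views, D, H, W) occupancy probabilities in (0, 1).
    Returns merged occupancy probabilities of shape (D, H, W)."""
    p = per_view_probs.clamp(eps, 1.0 - eps)
    logits = torch.log(p / (1.0 - p))      # Eq. (1): per-view log-odds
    merged_logits = logits.sum(dim=0)      # Eq. (2): Bayesian update = sum of log-odds
    return torch.sigmoid(merged_logits)    # inverse of Eq. (1)

# Example with three views on a 32x32x32 grid.
merged = merge_voxel_grids(torch.rand(3, 32, 32, 32))
occupied = merged > 0.5                    # thresholding before cubify (assumed threshold)
```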
3.2 MESH REFINEMENT
The cubified mesh from the voxel branch only provides a coarse reconstruction of the object’s surface. We apply graph convolutional networks which represent each mesh vertex as one graph node and deform the vertices to more accurate positions.
GCN-based Mesh Deformation The features pooled from multi-view images along with the 3D coordinates of the vertices in the world frame are used as features of the graph nodes. A series of graph-based convolutional network (GCN) blocks is applied to deform the mesh at the current stage to the next stage, starting with the cubified voxel grid. A graph convolution deforms mesh vertices by propagating features from neighboring vertices, applying f'_i = ReLU(W_0 f_i + Σ_{j∈N(i)} W_1 f_j), where N(i) is the set of neighboring vertices of the i-th vertex in the mesh, f_i represents the feature vector of vertex i, and W_0 and W_1 are learnable parameters of the model. Each GCN block utilizes several graph convolutions to transform the vertex features along with a final vertex refinement operation, where the features and vertex coordinates are further transformed as v'_i = v_i + tanh(W_vert [f_i; v_i]), with the matrix W_vert being another learnable parameter, to obtain the deformed mesh.
Contrastive Depth Feature Extraction Yao et al. (2020) demonstrate that using intermediate, image-centric 2.5D representations instead of directly generating 3D shapes in the global frame from raw 2D images can improve 3D reconstruction quality. We therefore propose to formulate the features for graph nodes using 2.5D depth maps as additional inputs alongside the RGB features. Specifically, we render the meshes at different GCN stages to depth images at all the input views using Kato et al. (2018) and use them along with the predicted depths for depth feature extraction. We call this form of depth input contrastive depth as it contrasts the rendered depths of the current mesh against the predicted depths and allows the network to reason about the deformation better than when using predicted depths or color images alone. Given the 2D features, the corresponding feature vectors of individual vertices can be found by projecting the 3D vertex coordinates onto the feature planes using the known camera parameters. We use VGG-16 Simonyan & Zisserman (2014) as our contrastive depth feature extraction network.
Multi-View Depth Estimation We extend MVSNet Yao et al. (2018) to predict the depth maps of all views, since the original implementation predicts the depth of only one reference view. This is achieved by transforming the feature volumes to each view’s coordinate frame using homography warping and applying identical cost volume regularization and depth regression on each view. A detailed network architecture diagram of this module is provided in the appendix.
Attention-based Multi-View Feature Pooling In order to fuse multi-view contrastive depth features, we formulate an attention module by adapting the multi-head attention mechanism originally designed for sequence-to-sequence machine translation using the transformer (encoder-decoder) architecture Vaswani et al. (2017). In a transformer architecture the encoder hidden state is mapped to lower-dimensional key-value pairs (K, V) while the decoder hidden state is mapped to a query vector Q using independent fully connected layers. The encoder hidden state in our case is the multi-view features while the decoder hidden state is the mean of the multi-view features. The attention weights are computed using the scaled dot product:
Attention(Q, K, V) = softmax( QK^T / √N ) V (3)
where N is the number of input views.
Multiple attention heads are used which are concatenated and transformed to obtain the final output
head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V) (4)
MultiHead(Q, K, V) = [head_1; ...; head_h] W^O (5)
where the W matrices are parameters to be learned, h is the number of attention heads and i ∈ [1, h]. We choose multi-head attention as our feature pooling method since it allows the model to attend to information from different representation subspaces of the features by training multiple attention heads in parallel. This method is also invariant to the order and number of input views. We visualize the learned attention weights (averaged over the attention heads) in Figure 2, where we can observe that the attention weights roughly take into account the visibility/occlusion information from each view.
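A simplified sketch of the pooling in Equations (3)-(5) is given below; the head dimension, the output projection, and the vertex count in the example are assumptions, and a production implementation would differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewAttentionPool(nn.Module):
    """Sketch of attention-based multi-view feature pooling (Eqs. 3-5).
    Queries come from the mean of the per-view features; keys and values come from the
    per-view features themselves. 5 heads and a 480-D output follow Section 4.1."""
    def __init__(self, feat_dim=4800, head_dim=96, n_heads=5, out_dim=480):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, head_dim
        self.q_proj = nn.Linear(feat_dim, n_heads * head_dim)
        self.k_proj = nn.Linear(feat_dim, n_heads * head_dim)
        self.v_proj = nn.Linear(feat_dim, n_heads * head_dim)
        self.out_proj = nn.Linear(n_heads * head_dim, out_dim)

    def forward(self, view_feats):
        # view_feats: (V, P, feat_dim) = per-view features for P mesh vertices
        V, P, _ = view_feats.shape
        q = self.q_proj(view_feats.mean(dim=0)).view(P, self.n_heads, self.head_dim)
        k = self.k_proj(view_feats).view(V, P, self.n_heads, self.head_dim)
        v = self.v_proj(view_feats).view(V, P, self.n_heads, self.head_dim)
        # Scaled dot product over the view axis, scaled by sqrt(N) with N = number of views.
        scores = torch.einsum("phd,vphd->pvh", q, k) / V ** 0.5
        attn = F.softmax(scores, dim=1)                       # per-view attention weights
        pooled = torch.einsum("pvh,vphd->phd", attn, v)       # weighted sum of the values
        return self.out_proj(pooled.reshape(P, -1))           # (P, out_dim)

# Example: 3 views, 2000 mesh vertices (arbitrary), 4800-D contrastive depth features.
fused = MultiViewAttentionPool()(torch.randn(3, 2000, 4800))  # -> (2000, 480)
```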
3.3 LOSS FUNCTIONS
Mesh losses The losses which are derived from Wang et al. (2018) to constrain the mesh predicted by each GCN block (P) to resemble the ground truth (Q) include the Chamfer distance L_chamfer(P, Q) = |P|^{-1} Σ_{(p,q)∈Λ_{P,Q}} ||p − q||^2 + |Q|^{-1} Σ_{(q,p)∈Λ_{Q,P}} ||q − p||^2 and the surface normal loss L_normal(P, Q) = −|P|^{-1} Σ_{(p,q)∈Λ_{P,Q}} |u_p · u_q| − |Q|^{-1} Σ_{(q,p)∈Λ_{Q,P}} |u_q · u_p|, with additional regularization in the form of the edge length loss L_edge(V, E) = (1/|E|) Σ_{(v,v′)∈E} ||v − v′||^2 for visually appealing results.
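A brute-force version of the Chamfer term above, written for point sets sampled from the predicted and ground-truth surfaces, looks as follows; the sampling step itself is omitted and the point counts are arbitrary.

```python
import torch

def chamfer_distance(p, q):
    """Symmetric squared Chamfer distance between point sets p (N, 3) and q (M, 3).
    Brute-force pairwise distances; an efficient implementation would use a KNN kernel."""
    d2 = torch.cdist(p, q) ** 2
    return d2.min(dim=1).values.mean() + d2.min(dim=0).values.mean()

loss_chamfer = chamfer_distance(torch.rand(1000, 3), torch.rand(1000, 3))
```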
Depth loss Our depth prediction network is supervised using the adaptive reversed Huber loss (also known as the BerHu criterion) Lambert-Lacroix & Zwald (2016): L_depth = |x| if |x| ≤ c, and (x^2 + c^2)/(2c) otherwise, where x is the depth error of a pixel and c is a constant set to 0.2. Note that the original MVSNet uses L1-loss, but we used BerHu loss since it gave slightly higher accuracy. Intuitively, this is because BerHu provides a good balance between L1 and L2 loss and has shown similar improvement in Laina et al. (2016).
Contrastive depth loss BerHu loss is also applied between the rendered depth images at different GCN stages and the predicted depth images: L_contrastive = |x| if |x| ≤ c, and (x^2 + c^2)/(2c) otherwise.
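The BerHu criterion used for both the depth and the contrastive depth terms can be written as a short function; masking out background pixels is an assumption rather than a detail stated in the text.

```python
import torch

def berhu_loss(pred, target, c=0.2, mask=None):
    """Adaptive reversed Huber (BerHu) loss: L1 below the threshold c, scaled L2 above it."""
    x = (pred - target).abs()
    loss = torch.where(x <= c, x, (x ** 2 + c ** 2) / (2 * c))
    if mask is not None:               # e.g. ignore background pixels (assumed behaviour)
        loss = loss[mask]
    return loss.mean()

# Example with 56x56 predicted and rendered depth maps.
loss_depth = berhu_loss(torch.rand(56, 56), torch.rand(56, 56))
```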
Voxel loss The binary cross-entropy loss between the predicted voxel occupancy probabilities p̂(x) and the ground truth occupancies p(x) is used to supervise the voxel predictions: L_voxel = −( p(x) log p̂(x) + (1 − p(x)) log(1 − p̂(x)) ).
Final loss We use the weighted sum of the individual losses discussed above as the final loss to train our model in an end-to-end fashion: L = λ_chamfer L_chamfer + λ_normal L_normal + λ_edge L_edge + λ_depth L_depth + λ_contrastive L_contrastive + λ_voxel L_voxel, where L is the final loss term.
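Putting the terms together, a sketch of the voxel term and the weighted total loss is shown below; the loss weights follow Section 4.1, while the individual loss values are placeholders standing in for the terms defined above.

```python
import torch
import torch.nn.functional as F

# Voxel term: BCE between predicted occupancy probabilities and ground-truth occupancies.
pred_occ = torch.rand(32, 32, 32).clamp(1e-6, 1 - 1e-6)
gt_occ = (torch.rand(32, 32, 32) > 0.5).float()
loss_voxel = F.binary_cross_entropy(pred_occ, gt_occ)

# Weighted sum of the individual terms (weights from Section 4.1; other values are dummies).
weights = dict(chamfer=1.0, normal=1.6e-4, edge=0.2, depth=0.1, contrastive=1e-3, voxel=1.0)
losses = dict(chamfer=torch.tensor(0.5), normal=torch.tensor(0.3), edge=torch.tensor(0.1),
              depth=torch.tensor(0.2), contrastive=torch.tensor(0.2), voxel=loss_voxel)
total_loss = sum(weights[k] * losses[k] for k in weights)
```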
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Comparisons We evaluate the proposed method against various multi-view shape generation methods. The state-of-the-art method is Pixel2Mesh++ Wen et al. (2019) (referred to as P2M++). Wen et al. (2019) also provide a baseline by directly extending Pixel2Mesh Wang et al. (2018) to operate on multi-view images (referred to as MVP2M) using their statistical feature pooling method to aggregate features from multiple color images. Results from additional multi-view shape generation baselines 3D-R2N2 Choy et al. (2016) and LSM Kar et al. (2017) are also reported.
Dataset We evaluate our method against the state-of-the-art methods on the dataset from Choy et al. (2016), which is a subset of ShapeNet Chang et al. (2015) and has been widely used by recent 3D shape generation methods. It contains 50K 3D CAD models from 13 categories. Each model is rendered with a transparent background from 24 randomly chosen camera viewpoints to obtain color images. The corresponding camera intrinsics and extrinsics are provided in the dataset. Since the dataset does not contain depth images, we render them using a custom depth renderer at the same viewpoints as the color images and with the same camera intrinsics. We follow the training/testing/validation split of Gkioxari et al. (2019).
Implementation For the depth prediction module, we follow the original MVSNet Yao et al. (2018) implementation. The output depth resolution is reduced by a factor of 4 to 56×56 from the 224×224 input image. The number of depth hypotheses is chosen as 48, which offers a balance between accuracy and running/training time efficiency. These depth hypotheses represent values from 0.1 m to 1.3 m at an interval of 25 mm. These values were chosen based on the range of depths present in the dataset.
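For reference, the 48 depth hypotheses described above can be generated directly; whether the last plane sits at 1.275 m or the spacing is stretched to end exactly at 1.3 m is an implementation detail not fixed by the text.

```python
import torch

# 48 hypotheses starting at 0.1 m with a 25 mm interval, covering the stated 0.1-1.3 m range.
depth_values = 0.1 + 0.025 * torch.arange(48)   # 0.100, 0.125, ..., 1.275 m
assert len(depth_values) == 48
```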
The hierarchical features obtained from the contrastive depth feature extractor have a total of 4800 dimensions for each view. The aggregated multi-view features are compressed to 480 dimensions after applying attentive feature pooling. 5 attention heads are used for merging multi-view features. The loss function weights are set as λ_chamfer = 1, λ_normal = 1.6 × 10^−4, λ_depth = 0.1, λ_contrastive = 0.001 and λ_voxel = 1. Two settings of λ_edge were used: λ_edge = 0 (referred to as Best), which gives better quantitative results, and λ_edge = 0.2 (referred to as Pretty), which gives better qualitative results. Training and Runtime The network is optimized using the Adam optimizer with a learning rate of 10^−4. The training is done on 5 Nvidia RTX-2080 GPUs with an effective batch size of 5. The depth prediction network (MVSNet) is trained independently for 30 epochs. Then the whole system is
trained for another 40 epochs with the weights of the MVSNet frozen. Our system is implemented in PyTorch deep learning framework and it takes around 60 hours for training. Evaluation Metric Following Wang et al. (2018); Wen et al. (2019), we use F1-score as our evaluation metric. The F1-score is the harmonic mean of precision and recall where the precision/recall are calculated by finding the percentage of points in the predicted/ground truth that can find a nearest neighbor from the other within a threshold. We provide evaluations with two threshold values: τ and 2τ where τ = 10−4 m2.
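A sketch of this metric with brute-force nearest-neighbour search is shown below; sampling points from the meshes is omitted, and treating τ as a squared-distance threshold follows its m^2 unit.

```python
import torch

def f1_score(pred_pts, gt_pts, tau=1e-4):
    """F1 at threshold tau: harmonic mean of precision and recall, where a point is matched
    if its nearest neighbour in the other set lies within tau (squared distance, m^2)."""
    d2 = torch.cdist(pred_pts, gt_pts) ** 2
    precision = (d2.min(dim=1).values < tau).float().mean()
    recall = (d2.min(dim=0).values < tau).float().mean()
    return 2 * precision * recall / (precision + recall + 1e-8)

score_tau = f1_score(torch.rand(1000, 3), torch.rand(1000, 3), tau=1e-4)
score_2tau = f1_score(torch.rand(1000, 3), torch.rand(1000, 3), tau=2e-4)
```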
4.2 COMPARISON WITH PREVIOUS MULTI-VIEW SHAPE GENERATION METHODS
We quantitatively compare our method against previous works for multi-view shape generation in Table 1 and show the effectiveness of our methods in improving the shape quality. Our method outperforms the state-of-the-art method Pixel2Mesh++ Wen et al. (2019) with a decrease in chamfer distance to ground truth by 34% and 15% increase in F1-score at threshold τ . Note that in Table 1 the same model is trained for all the categories but accuracy on individual categories as well as average over the categories are evaluated. We provide the chamfer distances in the appendix.
Category | F-score (τ) ↑ (3D-R2N2 / LSM / MVP2M / P2M++ / Ours pretty / Ours best) | F-score (2τ) ↑ (3D-R2N2 / LSM / MVP2M / P2M++ / Ours pretty / Ours best)
Couch | 45.47 43.02 53.17 57.56 71.63 73.63 | 59.97 55.49 73.24 75.33 85.28 88.24
Cabinet | 54.08 50.80 56.85 65.72 75.91 76.39 | 64.42 60.72 76.58 81.57 87.61 88.84
Bench | 44.56 49.33 60.37 66.24 81.11 83.76 | 62.47 65.92 75.69 79.67 90.56 92.57
Chair | 37.62 48.55 54.19 62.05 77.63 78.69 | 54.26 64.95 72.36 77.68 88.24 90.02
Monitor | 36.33 43.65 53.41 60.00 74.14 76.64 | 48.65 56.33 70.63 75.42 86.04 88.89
Firearm | 55.72 56.14 79.67 80.74 92.92 94.32 | 76.79 73.89 89.08 89.29 96.81 97.67
Speaker | 41.48 45.21 48.90 54.88 66.02 67.83 | 52.29 56.65 68.29 71.46 79.76 82.34
Lamp | 32.25 45.58 50.82 62.56 72.47 75.93 | 49.38 64.76 65.72 74.00 82.00 85.33
Cellphone | 58.09 60.11 66.07 74.36 85.57 86.45 | 69.66 71.39 82.31 86.16 93.40 94.28
Plane | 47.81 55.60 75.16 76.79 89.23 92.13 | 70.49 76.39 86.38 86.62 94.65 96.57
Table | 48.78 48.61 65.95 71.89 82.37 83.68 | 62.67 62.22 79.96 84.19 90.24 91.97
Car | 59.86 51.91 67.27 68.45 77.01 80.43 | 78.31 68.20 84.64 85.19 88.99 92.33
Watercraft | 40.72 47.96 61.85 62.99 75.52 80.48 | 63.59 66.95 77.49 77.32 86.77 90.35
Mean | 46.37 49.73 61.05 66.48 78.58 80.80 | 62.53 64.91 77.10 80.30 88.49 90.72
Table 1: Quantitative comparison against state-of-the-art multi-view shape generation methods. We report the F-score on each semantic category along with the mean over all categories, using two thresholds τ and 2τ for the nearest neighbor match, where τ = 10^−4 m^2.
We also provide visual results for qualitative assessment of the generated shapes by our Pretty model in Figure 3 which shows that it is able to more accurately predict topologically diverse shapes.
4.3 ABLATION STUDIES
Contrastive Depth Feature Extraction We evaluate several methods for contrastive feature extraction (Sub-section 3.2). These methods are 1) Input Concatenation: using the concatenated rendered and predicted depth maps as input to the VGG feature extractor, 2) Input Difference: using the difference of the two depth maps as input to VGG, 3) Feature Concatenation: concatenating features from rendered and predicted depths extracted by a shared VGG, 4) Feature Difference: using the difference of the features from the two depth maps extracted by a shared VGG, 5) Predicted depth only: using the VGG features from the predicted depths only, and 6) Rendered depth only: using the VGG features from the rendered depths only. The quantitative results are summarized in Table 2 and show that the Input Concatenation method produces better results than the other formulations.
Accuracy with different settings Table 3 shows the contribution of different components towards the final accuracy. Naively extending the single-view Mesh R-CNN Gkioxari et al. (2019) to multiple views using statistical feature pooling Wen et al. (2019) for mesh refinement (row 1) gives an F1-score of 72.74% for threshold τ, which is a 6.26% improvement over Pixel2Mesh++. We further extend the above method with our probabilistic multi-view voxel grid prediction in row 2 and get a 4.23% improvement.
In row 3 of Table 3 we use our contrastive depth features instead of RGB features for mesh refinement and get a 2.7% improvement. We then replace the statistical feature pooling with the proposed attention method and get a 0.19% improvement. The improvement is not significant on our final architecture, but we found the multi-head attention to perform better on more light-weight architectures. We also evaluate the effect of using additional regularization from the contrastive depth loss (rendered depth vs. predicted depth) in the 5th row, which improves the score by 0.98%. In row 6 we use ground truth
instead of predicted depths on our final model which gives the upper bound on our mesh prediction accuracy in relation to the depth prediction accuracy as 84.58%.
Number of Views We test the performance of our framework with respect to the number of views. Table 4 shows that the accuracy of our method increases as we increase the number of input views for training. These experiments also validate that the attention-based feature pooling can efficiently encode features from different views to take advantage of a larger number of views.
Table 5 shows the results when using different number of views during testing on our model trained with 3 views which indicates that increasing the number of views during testing does not improve the accuracy while decreasing the number of views can cause a significant drop in accuracy.
Metric 2 3 4 5 6 F1-τ 73.60 80.80 82.61 83.76 84.25 F1-2τ 85.80 90.72 91.78 92.73 93.14
Table 4: Accuracy w.r.t the number of views during training. The evaluation was performed on the same number of views as training.
Metric 2 3 4 5 6 F1-τ 72.46 80.80 80.98 80.94 80.85 F1-2τ 84.49 90.72 91.03 91.16 91.20
Table 5: Accuracy w.r.t the number of views during testing. The same model trained with 3 views was used in all of the cases.
5 CONCLUSION
We propose a neural network based solution to predict 3D triangle mesh models of objects from images taken from multiple views. First, we propose a multi-view voxel grid prediction module which probabilistically merges voxel grids predicted from individual input views. We then cubify the merged voxel grid to a triangle mesh and apply graph convolutional networks to further refine the mesh. The features for the mesh vertices are extracted from a contrastive depth input consisting of the rendered depths at each refinement stage along with the predicted depths. The proposed mesh reconstruction method outperforms existing methods by a large margin and is capable of reconstructing objects with more complex topologies.
A APPENDIX
NETWORK ARCHITECTURE
MVSNET ARCHITECTURE
Our depth prediction module is based on MVSNet Yao et al. (2018), which constructs a regularized 3D cost volume to estimate the depth map of the reference view. Here, we extend MVSNet to predict the depth maps of all views instead of only the reference view. This is achieved by transforming the feature volumes to each view’s coordinate frame using homography warping and applying identical cost volume regularization and depth regression on each view. This allows the reuse of pre-regularization feature volumes for efficient multi-view depth prediction that is invariant to the order of input images. Figure 4 shows the architecture of our depth estimation module.
PROBABILISTIC OCCUPANCY GRID MERGING
We use the single-view voxel prediction network from Gkioxari et al. (2019) to predict voxel grids for each of the input images in their respective local coordinate frames. The occupancy grids are transformed to the global frame (which is set to the coordinate frame of the first image) by finding the equivalent global grid values in the local grids after applying bilinear interpolation on the closest matches. The voxel grids in global coordinates are then probabilistically merged according to Sub-section 3.1 of the main submission.
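A possible way to realize this resampling with trilinear interpolation is sketched below; the cubic grid extent, the alignment conventions, and the use of grid_sample are assumptions made for illustration rather than details of our implementation.

```python
import torch
import torch.nn.functional as F

def resample_to_global(local_grid, T_local_from_global, extent=1.0):
    """Resample a (D, H, W) occupancy grid from a view's local frame onto the global grid.
    T_local_from_global: 4x4 transform taking global-frame points into the local frame."""
    D, H, W = local_grid.shape
    zs, ys, xs = torch.meshgrid(*[torch.linspace(-1, 1, s) for s in (D, H, W)], indexing="ij")
    xyz = torch.stack([xs, ys, zs], dim=-1).reshape(-1, 3) * extent      # global voxel centres
    ones = torch.ones(xyz.shape[0], 1)
    local_pts = torch.cat([xyz, ones], dim=1) @ T_local_from_global.T    # move to local frame
    grid = (local_pts[:, :3] / extent).reshape(1, D, H, W, 3)            # normalized sampling grid
    out = F.grid_sample(local_grid[None, None], grid, mode="bilinear",
                        padding_mode="zeros", align_corners=True)        # trilinear for 5-D input
    return out[0, 0]

global_grid = resample_to_global(torch.rand(32, 32, 32), torch.eye(4))
```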
EXPERIMENTS
We quantitatively compare our method against previous works for multi-view shape generation in Table 6 and show the effectiveness of our proposed shape generation method in improving shape quality. Our method outperforms the state-of-the-art method Pixel2Mesh++ Wen et al. (2019) with a 34% decrease in chamfer distance to the ground truth. Note that in Table 6 the same model is trained for all the categories, but accuracy on individual categories as well as the average over all categories is evaluated.
Category | Chamfer Distance (CD) ↓ (3D-R2N2 / LSM / MVP2M / P2M++ / Ours)
Couch | 0.806 0.730 0.534 0.439 0.220
Cabinet | 0.613 0.634 0.488 0.337 0.230
Bench | 1.362 0.572 0.591 0.549 0.159
Chair | 1.534 0.495 0.583 0.461 0.201
Monitor | 1.465 0.592 0.658 0.566 0.217
Firearm | 0.432 0.385 0.305 0.305 0.123
Speaker | 1.443 0.767 0.745 0.635 0.402
Lamp | 6.780 1.768 0.980 1.135 0.755
Cellphone | 1.161 0.362 0.445 0.325 0.138
Plane | 0.854 0.496 0.403 0.422 0.084
Table | 1.243 0.994 0.511 0.388 0.181
Car | 0.358 0.326 0.321 0.249 0.165
Watercraft | 0.869 0.509 0.463 0.508 0.175
Mean | 1.455 0.664 0.541 0.486 0.211
Table 6: Quantitative comparison against state-of-the-art multi-view shape generation methods. Following Wen et al. (2019), we report the Chamfer Distance in m^2 × 1000 from the ground truth for the different methods. Note that the same model is trained for all the categories, but accuracy on individual categories as well as the average over all categories is evaluated.
ABLATION STUDIES
Coarse Shape Generation We compare the voxel grids obtained from our proposed probabilistic merging against the single-view method Gkioxari et al. (2019). As shown in Table 7, the accuracy of the initial shape generated from the probabilistically merged voxel grid is higher than that from individual views.
Accuracy at Different GCN Stages We analyze the accuracy of meshes at different GCN stages in Table 8. The results validate that our method produces the meshes in a coarse-to-fine manner and multiple GCN refinements improve the mesh quality.
Resolution of Depth Prediction We conduct experiments using different numbers of depth hypotheses in our depth prediction network (Sub-section A), producing depth values at different resolutions. A higher number of depth hypotheses means a finer resolution of the predicted depths. The quantitative results with different hypothesis numbers are summarized in Table 9. We set the number of depth hypotheses to 48 for our final architecture, which is equivalent to a resolution of 25 mm. We observe that the mesh accuracy remains relatively unchanged if we predict depths at finer resolutions.
Metric Single-view Multi-view F1-τ 25.19 31.27 F1-2τ 36.75 44.46
Table 7: Accuracy of predicted voxel grids from single-view prediction compared against the proposed probabilistically merged multi-view voxel grids. The voxel branch was trained separately without the mesh refinement and evaluation was performed on the cubified voxel grids. We use three views for probabilistic grid merging.
Generalization Capability We conduct experiments to evaluate the generalization capability of our system across the semantic categories. We train our model with only 12 out of the 13 categories and test on the category that was left out. Table 10 shows that the accuracy generally does not decrease significantly when compared with the model that was trained on all 13 categories when using 2τ threshold for the F-score.
Category | F-score (τ) ↑ (Excluding / Including) | F-score (2τ) ↑ (Excluding / Including)
Couch | 63.29 73.63 | 80.79 88.24
Cabinet | 68.26 76.39 | 83.10 88.84
Bench | 76.08 83.76 | 87.42 92.57
Chair | 60.60 78.69 | 75.93 90.02
Monitor | 67.26 76.64 | 81.57 88.89
Firearm | 78.59 94.32 | 86.28 97.67
Speaker | 62.39 67.83 | 77.77 82.34
Lamp | 63.50 75.93 | 74.66 85.33
Cellphone | 67.24 86.45 | 80.54 94.28
Plane | 57.48 92.13 | 67.27 96.57
Table | 76.41 83.68 | 86.86 91.97
Car | 59.08 80.43 | 75.58 92.33
Watercraft | 64.97 80.48 | 78.95 90.35
Table 10: Accuracy when a category is excluded during training and evaluation is performed on the category to verify how well training on other categories generalizes to the excluded category.
B APPENDIX
BEST VS PRETTY MODELS
We provide a qualitative comparison between our models trained with the best and pretty configurations in Figure 5. The best configuration refers to our model trained without edge regularization, while pretty refers to the model trained with the regularization (Sub-section 4.1). We observe that without the regularization we get higher scores on our evaluation metrics but obtain degenerate meshes with self-intersections and irregularly sized faces.
FAILURE CASES
Some failure cases of our model (with the pretty setting) are shown in Figure 6. We notice that the rough topology of the mesh is recovered, while we fail to reconstruct the fine topology. We regard recovery from a wrong initial topology as promising future work.
2. What are the strengths and weaknesses of the proposed approach compared to prior works like P2M++?
3. Do you have any concerns about the complexity of the pipeline and the minor improvements from certain components?
4. How does the reviewer assess the significance of the ablation studies and their relation to the final improvement over pixel2Mesh?
5. What are some missing discussions in the related work section that the reviewer would like to see addressed?
6. Is there a suggestion to simplify the approach while maintaining or improving the performance? | Review | Review
The paper proposes to first predict a coarse (32^3) voxel grid by aggregating independent predictions from individual views. Then, it translates it into a mesh and refines it using deepMVS predictions (using each view in turn as a reference view) and a GCN architecture on the mesh.
On the positive side:
I like the idea of using MVS-Net, but why not use it from the start (before the single-view voxel prediction)?
I think this paper is going toward a render-and-compare approach for 3D shape prediction, which I think is a good idea.
the boost in the results seems impressive compared to P2M++
There are however several things I don't like or that worry me about this paper:
the pipeline presented in this paper is extremely complicated, and has many different parts. After reading it, I have no idea what really makes the improvement compared to P2M++. It uses voxels, mesh and depth maps, Graph convolution networks, attention-based architecture, SVR and deepMVS, the training loss has 5 balancing hyperparameters, between things as different as cross-entropy and chamfer distance.
To me, the ablation studies (Tables 2 and 3) show clearly that the most complex parts of the pipeline (3.2, contrastive depth and attention-based aggregation) only provide very minor improvements (~1%). Given their complexity and number of hyperparameters, I do not think these can be considered significant. Given these results, it is completely unclear to me how the proposed approach can lead to a ~14% improvement over pixel2Mesh. I thus think the approach should be strongly simplified (maybe losing 1% in final performance), but the paper should provide a clear ablation that actually explains why their framework is so much better than P2M++, and this is interesting. Right now, I believe it could be for a bad reason (for example DeepMVS could give excellent results on synthetic data because it is too simple - note I realize that Table 3 shows it is not perfect since there is a further 3.5% boost using GT depth, but it could still be unrealistically good for synthetic data)
Related work is lacking discussion of important references, namely all classical references for point-based SfM in 2.1 , foldingNet and AtlasNet for mesh generation in 2.2, all implicit volumetric works also in 2,2 (deepSDF, OccupancyNetworks…), the most classical deep depth prediction works in 2.3 (Eigen and Fergus…)
To summarise, despite its impressive numbers, I think this paper cannot be accepted as is, mainly because of its complexity, the lack of a clear explanation for its huge performance boost, and the only marginal/not significant boosts given by the most complex parts of the pipeline.
Some additional notes on presentation:
I am not sure “contrastive depth” is a good choice of name since contrastive feature learning is a popular but unrelated research direction.
I found 3.2 very hard to parse/re-order. I could only do it with the help of fig. 1 which is itself hard to parse and does not represent e.g. how the attention-based pooling happens |
ICLR | Title
MeshMVS: Multi-view Stereo Guided Mesh Reconstruction
Abstract
Deep learning based 3D shape generation methods generally utilize latent features extracted from color images to encode the objects’ semantics and guide the shape generation process. These color image semantics only implicitly encode 3D information, potentially limiting the accuracy of the generated shapes. In this paper we propose a multi-view mesh generation method which incorporates geometry information in the color images explicitly by using the features from intermediate 2.5D depth representations of the input images and regularizing the 3D shapes against these depth images. Our system first predicts a coarse 3D volume from the color images by probabilistically merging voxel occupancy grids from individual views. Depth images corresponding to the multi-view color images are predicted which along with the rendered depth images of the coarse shape are used as a contrastive input whose features guide the refinement of the coarse shape through a series of graph convolution networks. Attention-based multi-view feature pooling is proposed to fuse the contrastive depth features from different viewpoints which are fed to the graph convolution networks. We validate the proposed multi-view mesh generation method on ShapeNet, where we obtain a significant improvement with 34% decrease in chamfer distance to ground truth and 14% increase in the F1-score compared with the state-of-the-art multi-view shape generation method.
1 INTRODUCTION
3D shape generation is a long-standing research problem in computer vision and computer graphics with applications in autonomous driving, augmented reality, etc. Conventional approaches mainly leverage multi-view geometry based on stereo correspondences between images but are restricted by the coverage provided by the input views. With the availability of large-scale 3D shape datasets and the success of deep learning in several computer vision tasks, 3D representations such as voxel grid Choy et al. (2016); Tulsiani et al. (2017); Yan et al. (2016) and point cloud Yang et al. (2018); Fan et al. (2017) have been explored for single-view 3D reconstruction. Among them, triangle mesh representation has received the most attention as it has various desirable properties for a wide range of applications and is capable of modeling detailed geometry without high memory requirement. Single-view 3D reconstruction methods Wang et al. (2018); Huang et al. (2015); Kar et al. (2015); Su et al. (2014) generate the 3D shape from merely a single color image but suffer from occlusion and limited visibility which leads to low quality reconstructions in the unseen areas. Multi-view methods Wen et al. (2019); Choy et al. (2016); Kar et al. (2017); Gwak et al. (2017) extend the input to images from different viewpoints which provides more visual information and improves the accuracy of the generated shapes. Recent work in multi-view mesh reconstruction Wen et al. (2019) introduces a multi-view deformation network using perceptual feature from each color image for refining the meshes generated by Pixel2Mesh Wang et al. (2018). Although promising results were obtained, this method relies on perceptual features from color images which do not explicitly encode the objects’ geometry and could restrict the accuracy of the 3D models.
In this work, we present a novel multi-view mesh generation method where we start by predicting coarse volumetric occupancy grid representations for the color images of each input viewpoint independently using a shared fully convolutional network which are merged into a single voxel grid in a probabilistic fashion followed by cubify Gkioxari et al. (2019) operation to convert it to a triangle
mesh. We then use Graph Convolutional Network (GCN) Scarselli et al. (2008); Wang et al. (2018) to fine-tune the cubified voxel grid in a coarse-to-fine manner. The GCN refines the coarse mesh by using the feature vector of each graph node (mesh vertices) obtained by projecting the vertices on the 2D contrastive depth features. The contrastive depth features are extracted from the rendered depth maps of the current mesh and predicted depth maps from a multi-view stereo network. We also propose an attention-based method to fuse feature from multiple views that can learn the importance of different views for each of the mesh vertices. Constrains between the intermediate refined mesh from GCN with predicted depth maps of different viewpoints further improve the final mesh quality. By employing multi-view voxel grid generation and refining it using geometry information from both the current mesh (through the rendered depth maps) and predicted depth maps, we are able to generate high-quality meshes. We validate our method on the ShapeNet Chang et al. (2015) benchmark and our method achieves the best performance among all previous multi-view and single-view mesh generation methods.
2 RELATED WORK
2.1 TRADITIONAL SHAPE GENERATION METHODS
3D model generation has traditionally been tackled using multi-view geometry principles. Among them, structure-from-motion (SfM) Schonberger & Frahm (2016); Agarwal et al. (2011); Cui & Tan (2015); Cui et al. (2017) and simultaneous localization and mapping (SLAM) Cadena et al. (2016); Mur-Artal et al. (2015); Engel et al. (2014); Whelan et al. (2015) are popular techniques that perform 3D reconstruction and camera pose estimation at the same time. These methods extract local image features, match them across images and use the matches to estimate camera poses and 3D geometry. Closer to our problem setup, multi-view stereo methods infer 3D geometry from images with known camera parameters. Volumetric methods Kar et al. (2017); Kutulakos & Seitz (2000); Seitz & Dyer (1999) predict voxel grid representation of objects by estimating the relationship between each voxel and object surfaces. Point cloud based methods Furukawa & Ponce (2009); Lhuillier & Quan (2005) start with a sparse point cloud and gradually increase the density of points to obtain a final dense point cloud of the object. Durou et al. (2008); Zhang et al. (1999); Favaro & Soatto (2005) reason about shading, texture and defocus to reason about visible parts of the object and infer its 3D geometry. While the results of these works are impressive in terms of quality and completeness of reconstruction,
they still struggle with poorly textured and reflective surfaces and require carefully selected input views.
2.2 DEEP SHAPE GENERATION METHODS
Deep learning based approaches can learn to infer 3D structure from training data and can be robust against poorly textured and reflective surfaces as well as limited and arbitrarily selected input views. These methods can be categorized into single view and multi-view methods. Huang et al. (2015); Su et al. (2014) use shape component retrieval and deformation from a large dataset for single-view 3D shape generation. Kurenkov et al. (2018) extend this idea by introducing free-form deformation networks on retrieved object templates from a database. Some work learn shape deformation from ground truth foreground masks of 2D images Kar et al. (2015); Yan et al. (2016); Tulsiani et al. (2017). Recurrent Neural Networks (RNN) based methods Choy et al. (2016); Kar et al. (2017); Gwak et al. (2017) are another popular solution to solve this problem. Gwak et al. (2017); Lin et al. (2019) introduce image silhouettes along with adversarial multi-view constraints and optimize object mesh models using multi-view photometric constraints. Predicting mesh directly from color images was proposed in Wang et al. (2018); Wickramasinghe et al. (2019); Pan et al. (2019); Wen et al. (2019); Gkioxari et al. (2019); Tang et al. (2019). DR-KFS Jin et al. (2019) introduces a differentiable visual similarity metric while SeqXY2SeqZ Han et al. (2020) represents 3D shapes using a set of 2D voxel tubes for shape reconstruction. Front2Back Yao et al. (2020) generates 3D shapes by fusing predicted depth and normal images and DV-Net Jia et al. (2020) predicts dense object point clouds using dual-view RGB images with a gated control network to fuse point clouds from the two views. FoldingNet Yang et al. (2018) learns to reconstruct arbitrary point clouds from a single 2D grid. AtlasNet Groueix et al. (2018) use learned parametric representation while Mescheder et al. (2019); Park et al. (2019); Liu et al. (2019b;a); Murez et al. (2020) employ implicit surface representation to reconstruct 3D shapes.
2.3 DEPTH ESTIMATION
Compared to 3D shape generation, depth prediction is an easier problem formulation since it simplifies the task to per-view depth map estimation. Traditional methods Campbell et al. (2008); Galliani et al. (2015); Schönberger et al. (2016) use multi-view stereo principles for depth prediction. Deep learning based multi-view stereo depth estimation was first introduced in Hartmann et al. (2017) where a learned cost metric is used to estimate patch similarities. DeepMVS Huang et al. (2018) warps multi-view images to 3D space and then applies deep networks for regularization and aggregation to estimate depth images. Learned 3D cost volume based depth prediction was proposed in MVSNet Yao et al. (2018) where a 3 dimensional cost volume is built using homographically warped 2D features from multi-view images and 3D CNNs are used for cost regularization and depth regression. This idea was further extended by Chen et al. (2019); Luo et al. (2019); Gu et al. (2019); Yao et al. (2019).
3 METHODOLOGY
Figure 1 shows the architecture of the proposed system which takes as input multi-view color images of an object with known poses and outputs a triangle mesh representing the surface of the object.
3.1 MULTI-VIEW VOXEL GRID PREDICTION
Single-view Voxel Grid Prediction The single-view voxel branch consists of a ResNet feature extractor and a fully convolutional voxel grid prediction network. It generates the coarse initial shape of an object from one viewpoint as voxel occupancy grid using a color image. Here, we set the resolution of the generated voxel occupancy grid as 32 × 32 × 32. The voxel prediction networks for all viewpoints share the same weights. Probabilistic Occupancy Grid Merging Voxel occupancy grid predicted from a single viewpoint suffers from occlusion and limited visibility. In order to fuse voxel grids from different viewpoints, we propose a probabilistic occupancy grid merging method which merges the voxel grids from each input viewpoint probabilistically to obtain the final voxel grid output. This allows occluded regions in one view to be estimated from other views where those regions are visible as well as increase the
confidence of prediction in overlapping regions. Occupancy probability of each voxel is represented by p(x) which is converted to log-odds (logit):
l(x) = log p(x)
1− p(x) (1)
Bayesian update on the probabilities reduce to simple summation of log likelihoods Konolige (1997). Hence, the multi-view log-odds of a voxel is given by:
l(x) = l1(x) + l2(x) + ...+ ln(x) (2)
where li is the voxel’s log-odds in view i and n is the number of input views. The final voxel probability x is obtained by applying the inverse function of Equation (1) which is a sigmoid function.
3.2 MESH REFINEMENT
The cubified mesh from the voxel branch only provides a coarse reconstruction of the object’s surface. We apply graph convolutional networks which represent each mesh vertex as one graph node and deforms them to more accurate positions. GCN-based Mesh Deformation The features pooled from multi-view images along with 3D coordinates of the vertices in world frame are used as features of the graph nodes. Series of Graphbased Convolutional Network (GCN) blocks are applied to deform a mesh at the current stage to the next stage, starting with the cubified voxel grids. A graph convolution deforms mesh vertices by propagating features from neighboring vertices by applying f ′ i = ReLU(W0fi + ∑ j∈N (i)W1fj) where N (i) is the set of neighboring vertices of the i-th vertex in the mesh, f{} represents the feature vector of a vertex, and W0 and W1 are learnable parameters of the model. Each GCN block utilizes several graph convolutions to transform the vertex features along with a final vertex refinement operation where the features along with vertex coordinates are further transformed as v ′
i = vi + tanh(Wvert[fi; vi]) where the matrix Wvert is another learnable parameter to obtain the deformed mesh. Contrastive Depth Feature Extraction Yao et al. (2020) demonstrate that using intermediate, image-centric 2.5D representations instead of directly generating 3D shapes in global frame from raw 2D images can improve 3D reconstruction quality. We therefore propose to formulate the features for graph nodes using 2.5D depth maps as input additional inputs alongside the RGB features. Specifically, we render the meshes at different GCN stages to depth image at all the input views using Kato et al. (2018) and use them along with predicted depths for depth feature extraction. We call this form of depth input contrastive depth as it contrasts the rendered depths of the current mesh against the predicted depths and allows the network to reason about the deformation better than when using predicted depth or color images alone. Given the 2D features, corresponding feature vectors of individual vertices can be found by projecting the 3D vertex coordinates to the feature planes using known camera parameters. We use VGG-16 Simonyan & Zisserman (2014) as our contrastive depth feature extraction network. Multi-View Depth Estimation We extend MVSNet Yao et al. (2018) and predict the depth maps of all views since the original implementation predicts depth of only one reference view. This is achieved by transforming the feature volumes to each view’s coordinate frame using homography warping and applying identical cost volume regularization and depth regression on each view. Detailed network architecture diagram of this module is provided in the appendix. Attention-based Multi-View Feature Pooling In order to fuse multi-view contrastive depth features, we formulate an attention module by adapting multi-head attention mechanism originally designed for sequence to sequence machine translation using transformer (encoder-decoder) architecture Vaswani et al. (2017). In a transformer architecture the encoder hidden state is mapped to lower dimension key-value pairs (K, V) while the decoder hidden state is mapped to a query vector Q using independent fully connected layers. The encoder hidden state in our case is the multi-view features while the decoder hidden state is the mean of the multi-view features. The attention weights are computed using scaled-dot product:
Attention(Q,K,V) = softmax( QKT√ N )V (3)
where N is the number of input views.
Multiple attention heads are used which are concatenated and transformed to obtain the final output
headi = Attention(QW Q i ,KW K i ,VW V i ) (4)
MultiHead(Q,K,V) = [head1; ...;headh]W 0 (5)
where multiple W are parameters to be learned, h is the number of attention heads and i ∈ [1, h]. We choose multi-head attention as our feature pooling method since it allows the model to attend information from different representation subspaces of the features by training multiple attentions in parallel. This method is also invariant to the order and number of input views. We visualize the learned attention weights (average of each attention heads) in Figure 2 where we can observe that the attention weights roughly takes into account the visibility/occlusion information from each view.
3.3 LOSS FUNCTIONS
Mesh losses The losses which are derived from Wang et al. (2018) to constrain the mesh predicted by each GCN block (P) to resemble the ground truth (Q) include Chamfer distance Lchamfer(P,Q) = |P|−1 ∑ (p,q)∈ΛP,Q ||p− q|| 2 + |Q|−1 ∑ (q,p)∈ΛQ,P ||q − p|| 2 and surface normal loss Lnormal(P,Q) =
−|P|−1 ∑ (p,q)∈ΛP,Q |up · uq| − |Q| −1∑
(q,p)∈ΛQ,P |uq · up| with additional regularization in the form of edge length loss Ledge(V,E) = 1|E| ∑ (v,v′)∈E ||v − v′||2 for visually appealing results.
Depth loss Our depth prediction network is supervised using adaptive reversed Huber loss (also known as BerHu criterion) Lambert-Lacroix & Zwald (2016). Ldepth = |x|, if |x| ≤ c, otherwise x 2+c2
2c where x is the depth error of a pixel and c is a constant set to 0.2. Note that the original MVSNet uses L1-loss, but we used BerHu loss since it gave slightly higher accuracy. Intuitively, this is because BerHu provides a good balance between L1 and L2 loss and has shown similar improvement in Laina et al. (2016). Contrastive depth loss BerHu loss is also applied between the rendered depth images at different GCN stages and the predicted depth images. Lcontrastive = |x|, if |x| ≤ c, otherwise x 2+c2
2c
Voxel loss Binary cross-entropy loss between the predicted voxel occupancy probabilities and the ground truth occupancies is used as voxel loss to supervise the voxel predictions Lvoxel = − ( p(x)log ( p(x) ) + ( 1− p(x) ) log ( 1− p(x) )) Final loss We use the weighted sum of the individual losses discussed above as the final loss to train our model in an end-to-end fashion. L = λchamferLchamfer+λnormalLnormal+λedgeLedge+λdepthLdepth+ λcontrastiveLcontrastive + λvoxelLvoxel , where L is the final loss term.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Comparisons We evaluate the proposed method against various multi-view shape generation methods. The state-of-the-art method is Pixel2Mesh++ Wen et al. (2019) (referred as P2M++). Wen et al. (2019) also provide a baseline by directly extending Pixel2Mesh Wang et al. (2018) to operate on multi-view images (referred as MVP2M) using their statistical feature pooling method to aggregate features from multiple color images. Results from additional multi-view shape generation baselines 3D-R2N2 Choy et al. (2016) and LSM Kar et al. (2017) are also reported. Dataset We evaluate our method against the state-of-the-art methods on the dataset from Choy et al. (2016) which is a subset of ShapeNet Chang et al. (2015) and has been widely used by recent 3D shape generation methods. It contains 50K 3D CAD models from 13 categories. Each model is rendered with a transparent background from 24 randomly chosen camera viewpoints to obtain color images. The corresponding camera intrinsics and extrinsics are provided in the dataset. Since the dataset does not contain depth images, we render them using a custom depth renderer at the same viewpoints as the color images and with the same camera intrinsics. We follow the training/testing/validation split of Gkioxari et al. (2019). Implementation For the depth prediction module, we follow the original MVSNet Yao et al. (2018) implementation. The output depth dimensions reduces by a factor of 4 to 56×56 from the 224×224 input image. The number of depth hypotheses is chosen as 48 which offers a balance between accuracy and running/training time efficiency. These depth hypotheses represent values from 0.1 m to 1.3 m at an interval of 25 mm. These values were chosen based on the range of depths present in the dataset.
The hierarchical features obtained from the "Contrastive Depth Features Extractor" are of 4800 dimensions in total for each view. The aggregated multi-view features are compressed to 480 dimensions after applying attentive feature pooling. Five attention heads are used for merging the multi-view features. The loss function weights are set as λ_chamfer = 1, λ_normal = 1.6 × 10^{-4}, λ_depth = 0.1, λ_contrastive = 0.001 and λ_voxel = 1. Two settings of λ_edge were used: λ_edge = 0 (referred to as Best), which gives better quantitative results, and λ_edge = 0.2 (referred to as Pretty), which gives better qualitative results.
Training and Runtime The network is optimized using the Adam optimizer with a learning rate of 10^{-4}. The training is done on 5 Nvidia RTX-2080 GPUs with an effective batch size of 5. The depth prediction network (MVSNet) is trained independently for 30 epochs. Then the whole system is
trained for another 40 epochs with the weights of the MVSNet frozen. Our system is implemented in PyTorch deep learning framework and it takes around 60 hours for training. Evaluation Metric Following Wang et al. (2018); Wen et al. (2019), we use F1-score as our evaluation metric. The F1-score is the harmonic mean of precision and recall where the precision/recall are calculated by finding the percentage of points in the predicted/ground truth that can find a nearest neighbor from the other within a threshold. We provide evaluations with two threshold values: τ and 2τ where τ = 10−4 m2.
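A small sketch of this metric is given below (assuming point clouds sampled from the predicted and ground-truth meshes, and treating τ as a threshold on squared distances in m², consistent with τ = 10⁻⁴ m²):

```python
import torch

def f_score(pred_pts, gt_pts, tau=1e-4):
    d2 = torch.cdist(pred_pts, gt_pts) ** 2            # squared nearest-neighbour distances
    precision = (d2.min(dim=1).values < tau).float().mean()
    recall = (d2.min(dim=0).values < tau).float().mean()
    return 2 * precision * recall / (precision + recall + 1e-8)
```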
4.2 COMPARISON WITH PREVIOUS MULTI-VIEW SHAPE GENERATION METHODS
We quantitatively compare our method against previous works for multi-view shape generation in Table 1 and show the effectiveness of our methods in improving the shape quality. Our method outperforms the state-of-the-art method Pixel2Mesh++ Wen et al. (2019) with a decrease in chamfer distance to ground truth by 34% and 15% increase in F1-score at threshold τ . Note that in Table 1 the same model is trained for all the categories but accuracy on individual categories as well as average over the categories are evaluated. We provide the chamfer distances in the appendix.
Category      F-score (τ) ↑                                              F-score (2τ) ↑
              3D-R2N2  LSM    MVP2M  P2M++  Ours(pretty)  Ours(best)     3D-R2N2  LSM    MVP2M  P2M++  Ours(pretty)  Ours(best)
Couch         45.47    43.02  53.17  57.56  71.63         73.63          59.97    55.49  73.24  75.33  85.28         88.24
Cabinet       54.08    50.80  56.85  65.72  75.91         76.39          64.42    60.72  76.58  81.57  87.61         88.84
Bench         44.56    49.33  60.37  66.24  81.11         83.76          62.47    65.92  75.69  79.67  90.56         92.57
Chair         37.62    48.55  54.19  62.05  77.63         78.69          54.26    64.95  72.36  77.68  88.24         90.02
Monitor       36.33    43.65  53.41  60.00  74.14         76.64          48.65    56.33  70.63  75.42  86.04         88.89
Firearm       55.72    56.14  79.67  80.74  92.92         94.32          76.79    73.89  89.08  89.29  96.81         97.67
Speaker       41.48    45.21  48.90  54.88  66.02         67.83          52.29    56.65  68.29  71.46  79.76         82.34
Lamp          32.25    45.58  50.82  62.56  72.47         75.93          49.38    64.76  65.72  74.00  82.00         85.33
Cellphone     58.09    60.11  66.07  74.36  85.57         86.45          69.66    71.39  82.31  86.16  93.40         94.28
Plane         47.81    55.60  75.16  76.79  89.23         92.13          70.49    76.39  86.38  86.62  94.65         96.57
Table         48.78    48.61  65.95  71.89  82.37         83.68          62.67    62.22  79.96  84.19  90.24         91.97
Car           59.86    51.91  67.27  68.45  77.01         80.43          78.31    68.20  84.64  85.19  88.99         92.33
Watercraft    40.72    47.96  61.85  62.99  75.52         80.48          63.59    66.95  77.49  77.32  86.77         90.35
Mean          46.37    49.73  61.05  66.48  78.58         80.80          62.53    64.91  77.10  80.30  88.49         90.72
Table 1: Quantitative comparison against state-of-the-art multi-view shape generation methods. We report the F-score on each semantic category along with the mean over all categories using two thresholds, τ and 2τ, for the nearest neighbor match, where τ = 10⁻⁴ m².
We also provide visual results for qualitative assessment of the generated shapes by our Pretty model in Figure 3 which shows that it is able to more accurately predict topologically diverse shapes.
4.3 ABLATION STUDIES
Contrastive Depth Feature Extraction We evaluate several methods for contrastive feature extraction (Sub-section 3.2). These methods are 1) Input Concatenation: using the concatenated rendered and predicted depth maps as input to the VGG feature extractor, 2) Input Difference: using the difference of the two depth maps as input to VGG, 3) Feature Concatenation: concatenating the features from the rendered and predicted depths extracted by a shared VGG, 4) Feature Difference: using the difference of the features from the two depth maps extracted by a shared VGG, 5) Predicted depth only: using the VGG features from the predicted depths only, and 6) Rendered depth only: using the VGG features from the rendered depths only. The quantitative results are summarized in Table 2 and show that the Input Concatenation method produces better results than the other formulations.
Accuracy with different settings Table 3 shows the contribution of different components towards the final accuracy. Naively extending the single-view Mesh R-CNN Gkioxari et al. (2019) to multiple views using statistical feature pooling Wen et al. (2019) for mesh refinement (row 1) gives an F1-score of 72.74% at threshold τ, which is a 6.26% improvement over Pixel2Mesh++. We further extend the above method with our probabilistic multi-view voxel grid prediction in row 2 and get a 4.23% improvement.
In row 3 of Table 3 we use our contrastive depth features instead of RGB features for mesh refinement and get a 2.7% improvement. We then replace the statistical feature pooling with the proposed attention method and get a 0.19% improvement. The improvement is not significant on our final architecture, but we found the multi-head attention to perform better on more light-weight architectures. We also evaluate the effect of the additional regularization from the contrastive depth loss (rendered depth vs. predicted depth) in the 5th row, which improves the score by 0.98%. In row 6 we use ground truth instead of predicted depths on our final model, which gives 84.58% as the upper bound on our mesh prediction accuracy in relation to the depth prediction accuracy.
Number of Views We test the performance of our framework with respect to the number of views. Table 4 shows that the accuracy of our method increases as we increase the number of input views for training. These experiments also validate that the attention-based feature pooling can efficiently encode features from different views to take advantage of a larger number of views.
Table 5 shows the results when using different numbers of views during testing on our model trained with 3 views. It indicates that increasing the number of views during testing does not improve the accuracy, while decreasing the number of views causes a significant drop in accuracy.
Metric   2      3      4      5      6
F1-τ     73.60  80.80  82.61  83.76  84.25
F1-2τ    85.80  90.72  91.78  92.73  93.14
Table 4: Accuracy w.r.t the number of views during training. The evaluation was performed on the same number of views as training.
Metric   2      3      4      5      6
F1-τ     72.46  80.80  80.98  80.94  80.85
F1-2τ    84.49  90.72  91.03  91.16  91.20
Table 5: Accuracy w.r.t the number of views during testing. The same model trained with 3 views was used in all of the cases.
5 CONCLUSION
We propose a neural network based solution to predict 3D triangle mesh models of objects from images taken from multiple views. First, we propose a multi-view voxel grid prediction module which probabilistically merges voxel grids predicted from individual input views. We then cubify the merged voxel grid into a triangle mesh and apply graph convolutional networks to further refine the mesh. The features for the mesh vertices are extracted from a contrastive depth input consisting of the rendered depths at each refinement stage along with the predicted depths. The proposed mesh reconstruction method outperforms existing methods by a large margin and is capable of reconstructing objects with more complex topologies.
A APPENDIX
NETWORK ARCHITECTURE
MVSNET ARCHITECTURE
Our depth prediction module is based on MVSNet Yao et al. (2018), which constructs a regularized 3D cost volume to estimate the depth map of the reference view. Here, we extend MVSNet to predict the depth maps of all views instead of only the reference view. This is achieved by transforming the feature volumes to each view's coordinate frame using homography warping and applying identical cost volume regularization and depth regression on each view. This allows the reuse of pre-regularization feature volumes for efficient multi-view depth prediction invariant to the order of input images. Figure 4 shows the architecture of our depth estimation module.
PROBABILISTIC OCCUPANCY GRID MERGING
We use the single-view voxel prediction network from Gkioxari et al. (2019) to predict voxel grids for each of the input images in their respective local coordinate frames. The occupancy grids are transformed to the global frame (which is set to the coordinate frame of the first image) by finding the equivalent global grid values in the local grids after applying bilinear interpolation on the closest matches. The voxel grids in global coordinates are then probabilistically merged according to Sub-section 3.1 of the main submission.
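A hedged sketch of this resampling step is shown below; it assumes each view's grid stores occupancy logits in a 32³ tensor and that `theta` is an already-normalized affine transform mapping global grid coordinates into the view's local frame, which is a simplification of the actual bookkeeping:

```python
import torch
import torch.nn.functional as F

def local_to_global(local_logits, theta):
    # local_logits: (1, 1, 32, 32, 32) per-view occupancy logits
    # theta: (1, 3, 4) affine mapping normalized global coordinates to local ones
    grid = F.affine_grid(theta, size=local_logits.shape, align_corners=False)
    # trilinear interpolation picks the closest local cells for every global cell
    return F.grid_sample(local_logits, grid, mode='bilinear', align_corners=False)
```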
EXPERIMENTS
We quantitatively compare our method against previous works for multi-view shape generation in Table 6 and show the effectiveness of our proposed shape generation method in improving shape quality. Our method outperforms the state-of-the-art method Pixel2Mesh++ Wen et al. (2019) with a 34% decrease in chamfer distance to ground truth. Note that in Table 6 the same model is trained for all the categories, but accuracy on individual categories as well as the average over all the categories is evaluated.
Category      Chamfer Distance (CD) ↓
              3D-R2N2  LSM    MVP2M  P2M++  Ours
Couch         0.806    0.730  0.534  0.439  0.220
Cabinet       0.613    0.634  0.488  0.337  0.230
Bench         1.362    0.572  0.591  0.549  0.159
Chair         1.534    0.495  0.583  0.461  0.201
Monitor       1.465    0.592  0.658  0.566  0.217
Firearm       0.432    0.385  0.305  0.305  0.123
Speaker       1.443    0.767  0.745  0.635  0.402
Lamp          6.780    1.768  0.980  1.135  0.755
Cellphone     1.161    0.362  0.445  0.325  0.138
Plane         0.854    0.496  0.403  0.422  0.084
Table         1.243    0.994  0.511  0.388  0.181
Car           0.358    0.326  0.321  0.249  0.165
Watercraft    0.869    0.509  0.463  0.508  0.175
Mean          1.455    0.664  0.541  0.486  0.211
Table 6: Quantitative comparison against state-of-the-art multi-view shape generation methods. Following Wen et al. (2019), we report the Chamfer Distance in m² × 1000 from ground truth for different methods. Note that the same model is trained for all the categories, but accuracy on individual categories as well as the average over all the categories is evaluated.
ABLATION STUDIES
Coarse Shape Generation We compare the voxel grids produced by our proposed probabilistic multi-view merging against the single-view method of Gkioxari et al. (2019). As shown in Table 7, the accuracy of the initial shape generated from the probabilistically merged voxel grid is higher than that from individual views.
Accuracy at Different GCN Stages We analyze the accuracy of meshes at different GCN stages in Table 8. The results validate that our method produces the meshes in a coarse-to-fine manner and multiple GCN refinements improve the mesh quality.
Resolution of Depth Prediction We conduct experiments using different numbers of depth hypotheses in our depth prediction network (Sub-section A), producing depth values at different resolutions. A higher number of depth hypotheses means a finer resolution of the predicted depths. The quantitative results with different numbers of hypotheses are summarized in Table 9. We set the number of depth hypotheses to 48 for our final architecture, which is equivalent to a resolution of 25 mm. We observe that the mesh accuracy remains relatively unchanged if we predict depths at finer resolutions.
Metric   Single-view   Multi-view
F1-τ     25.19         31.27
F1-2τ    36.75         44.46
Table 7: Accuracy of predicted voxel grids from single-view prediction compared against the proposed probabilistically merged multi-view voxel grids. The voxel branch was trained separately without the mesh refinement and evaluation was performed on the cubified voxel grids. We use three views for probabilistic grid merging.
Generalization Capability We conduct experiments to evaluate the generalization capability of our system across the semantic categories. We train our model with only 12 out of the 13 categories and test on the category that was left out. Table 10 shows that the accuracy generally does not decrease significantly when compared with the model that was trained on all 13 categories when using 2τ threshold for the F-score.
Category      F-score (τ) ↑            F-score (2τ) ↑
              Excluding  Including     Excluding  Including
Couch         63.29      73.63         80.79      88.24
Cabinet       68.26      76.39         83.10      88.84
Bench         76.08      83.76         87.42      92.57
Chair         60.60      78.69         75.93      90.02
Monitor       67.26      76.64         81.57      88.89
Firearm       78.59      94.32         86.28      97.67
Speaker       62.39      67.83         77.77      82.34
Lamp          63.50      75.93         74.66      85.33
Cellphone     67.24      86.45         80.54      94.28
Plane         57.48      92.13         67.27      96.57
Table         76.41      83.68         86.86      91.97
Car           59.08      80.43         75.58      92.33
Watercraft    64.97      80.48         78.95      90.35
Table 10: Accuracy when a category is excluded during training and evaluation is performed on the category to verify how well training on other categories generalizes to the excluded category.
B APPENDIX
BEST VS PRETTY MODELS
We provide a qualitative comparison between our models trained with the best and pretty configurations in Figure 5. The best configuration refers to our model trained without edge regularization, while pretty refers to the model trained with the regularization (Sub-section 4.1). We observe that without the regularization we get a higher score on our evaluation metrics but obtain degenerate meshes with self-intersections and irregularly sized faces.
FAILURE CASES
Some failure cases of our model (with pretty setting) are shown in Figure 6. We notice that the rough topology of the mesh is recovered while we failed to reconstruct the fine topology. We can regard the recovery from wrong initial topology as a promising future work. | 1. What is the focus of the paper regarding 3D object reconstruction?
2. What is the novelty of the proposed approach, particularly in the refinement stage?
3. What are the strengths and weaknesses of the paper regarding its results and contributions?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. What are some concerns regarding the use of GCN and depth features in the proposed approach? | Review | Review
Overview
This paper proposes a system of reconstructing 3D objects from multi-view images. The system consists of a single-view voxel generation network, a multi-view voxel fusion mechanism, a multi-view depth estimation network, and a refinement network aggregating multi-view depth features. The major contribution is in the refinement stage upon the coarse reconstruction obtained from voxel predictions, typically for the introduction of the Attention-based Multi-View Feature Pooling.
Method Novelty According to the paper and the attached code, it seems like the authors mostly utilized existing networks to build a system. The authors introduce their Attention-based Multi-View Feature Pooling mechanism, which is new. Despite the results, the system is rather bulky and ad hoc. For the use of GCN in refinement, see Question 2.
Results The paper achieves plausible state-of-the-art quantitative results on standard evaluation sets and metrics. The visual quality is reasonable; however, from Figure 3 it seems like the reconstructed local surface suffers from noise. Their results struggle to produce clean surfaces, especially when compared to implicit-based methods such as DeepSDF. The authors did not provide more qualitative results in the supplementary material.
Clarity This paper is well written and easy to understand. The attached code is well documented and can be deployed.
Conclusion
Overall, this is a well-written paper with plausible outcomes. The reviewer believes this paper offers reasonable effort and insight into this topic. The reviewer is marginally positive towards its acceptance due to the pleasing results, but holds a conservative attitude towards the significance of its contributions. The reviewer would like to see the questions addressed in the rebuttal period, and will also refer to the other reviews.
Questions:
For each single-view voxel prediction, the paper did not clarify which coordinate system those voxels are in. When aggregating the multi-view voxel grids, how is the coordinate transformation handled between different viewpoints? If voxels from different coordinate systems need to be transformed, how is interpolation handled when merging into a single 32x32x32 grid?
Use of GCN. As the GCN only optimizes the current mesh, it cannot correct topology errors remaining after the coarse reconstruction. How would this method overcome this, especially when the cubified mesh has the wrong topology?
Use of depth. From the multi-view predicted depths, one can simply reconstruct the shape from the depths, or run a differentiable renderer to optimize the mesh geometry directly. Why would we need contrastive depth feature extraction?
ICLR | Title
MeshMVS: Multi-view Stereo Guided Mesh Reconstruction
Abstract
Deep learning based 3D shape generation methods generally utilize latent features extracted from color images to encode the objects’ semantics and guide the shape generation process. These color image semantics only implicitly encode 3D information, potentially limiting the accuracy of the generated shapes. In this paper we propose a multi-view mesh generation method which incorporates geometry information in the color images explicitly by using the features from intermediate 2.5D depth representations of the input images and regularizing the 3D shapes against these depth images. Our system first predicts a coarse 3D volume from the color images by probabilistically merging voxel occupancy grids from individual views. Depth images corresponding to the multi-view color images are predicted which along with the rendered depth images of the coarse shape are used as a contrastive input whose features guide the refinement of the coarse shape through a series of graph convolution networks. Attention-based multi-view feature pooling is proposed to fuse the contrastive depth features from different viewpoints which are fed to the graph convolution networks. We validate the proposed multi-view mesh generation method on ShapeNet, where we obtain a significant improvement with 34% decrease in chamfer distance to ground truth and 14% increase in the F1-score compared with the state-of-the-art multi-view shape generation method.
1 INTRODUCTION
3D shape generation is a long-standing research problem in computer vision and computer graphics with applications in autonomous driving, augmented reality, etc. Conventional approaches mainly leverage multi-view geometry based on stereo correspondences between images but are restricted by the coverage provided by the input views. With the availability of large-scale 3D shape datasets and the success of deep learning in several computer vision tasks, 3D representations such as voxel grid Choy et al. (2016); Tulsiani et al. (2017); Yan et al. (2016) and point cloud Yang et al. (2018); Fan et al. (2017) have been explored for single-view 3D reconstruction. Among them, triangle mesh representation has received the most attention as it has various desirable properties for a wide range of applications and is capable of modeling detailed geometry without high memory requirement. Single-view 3D reconstruction methods Wang et al. (2018); Huang et al. (2015); Kar et al. (2015); Su et al. (2014) generate the 3D shape from merely a single color image but suffer from occlusion and limited visibility which leads to low quality reconstructions in the unseen areas. Multi-view methods Wen et al. (2019); Choy et al. (2016); Kar et al. (2017); Gwak et al. (2017) extend the input to images from different viewpoints which provides more visual information and improves the accuracy of the generated shapes. Recent work in multi-view mesh reconstruction Wen et al. (2019) introduces a multi-view deformation network using perceptual feature from each color image for refining the meshes generated by Pixel2Mesh Wang et al. (2018). Although promising results were obtained, this method relies on perceptual features from color images which do not explicitly encode the objects’ geometry and could restrict the accuracy of the 3D models.
In this work, we present a novel multi-view mesh generation method where we start by predicting coarse volumetric occupancy grid representations for the color images of each input viewpoint independently using a shared fully convolutional network which are merged into a single voxel grid in a probabilistic fashion followed by cubify Gkioxari et al. (2019) operation to convert it to a triangle
mesh. We then use a Graph Convolutional Network (GCN) Scarselli et al. (2008); Wang et al. (2018) to fine-tune the cubified voxel grid in a coarse-to-fine manner. The GCN refines the coarse mesh by using the feature vector of each graph node (mesh vertex) obtained by projecting the vertices onto the 2D contrastive depth features. The contrastive depth features are extracted from the rendered depth maps of the current mesh and the predicted depth maps from a multi-view stereo network. We also propose an attention-based method to fuse features from multiple views that can learn the importance of different views for each of the mesh vertices. Constraints between the intermediate refined meshes from the GCN and the predicted depth maps of different viewpoints further improve the final mesh quality. By employing multi-view voxel grid generation and refining it using geometry information from both the current mesh (through the rendered depth maps) and the predicted depth maps, we are able to generate high-quality meshes. We validate our method on the ShapeNet Chang et al. (2015) benchmark, and our method achieves the best performance among all previous multi-view and single-view mesh generation methods.
2 RELATED WORK
2.1 TRADITIONAL SHAPE GENERATION METHODS
3D model generation has traditionally been tackled using multi-view geometry principles. Among them, structure-from-motion (SfM) Schonberger & Frahm (2016); Agarwal et al. (2011); Cui & Tan (2015); Cui et al. (2017) and simultaneous localization and mapping (SLAM) Cadena et al. (2016); Mur-Artal et al. (2015); Engel et al. (2014); Whelan et al. (2015) are popular techniques that perform 3D reconstruction and camera pose estimation at the same time. These methods extract local image features, match them across images and use the matches to estimate camera poses and 3D geometry. Closer to our problem setup, multi-view stereo methods infer 3D geometry from images with known camera parameters. Volumetric methods Kar et al. (2017); Kutulakos & Seitz (2000); Seitz & Dyer (1999) predict voxel grid representation of objects by estimating the relationship between each voxel and object surfaces. Point cloud based methods Furukawa & Ponce (2009); Lhuillier & Quan (2005) start with a sparse point cloud and gradually increase the density of points to obtain a final dense point cloud of the object. Durou et al. (2008); Zhang et al. (1999); Favaro & Soatto (2005) reason about shading, texture and defocus to reason about visible parts of the object and infer its 3D geometry. While the results of these works are impressive in terms of quality and completeness of reconstruction,
they still struggle with poorly textured and reflective surfaces and require carefully selected input views.
2.2 DEEP SHAPE GENERATION METHODS
Deep learning based approaches can learn to infer 3D structure from training data and can be robust against poorly textured and reflective surfaces as well as limited and arbitrarily selected input views. These methods can be categorized into single view and multi-view methods. Huang et al. (2015); Su et al. (2014) use shape component retrieval and deformation from a large dataset for single-view 3D shape generation. Kurenkov et al. (2018) extend this idea by introducing free-form deformation networks on retrieved object templates from a database. Some work learn shape deformation from ground truth foreground masks of 2D images Kar et al. (2015); Yan et al. (2016); Tulsiani et al. (2017). Recurrent Neural Networks (RNN) based methods Choy et al. (2016); Kar et al. (2017); Gwak et al. (2017) are another popular solution to solve this problem. Gwak et al. (2017); Lin et al. (2019) introduce image silhouettes along with adversarial multi-view constraints and optimize object mesh models using multi-view photometric constraints. Predicting mesh directly from color images was proposed in Wang et al. (2018); Wickramasinghe et al. (2019); Pan et al. (2019); Wen et al. (2019); Gkioxari et al. (2019); Tang et al. (2019). DR-KFS Jin et al. (2019) introduces a differentiable visual similarity metric while SeqXY2SeqZ Han et al. (2020) represents 3D shapes using a set of 2D voxel tubes for shape reconstruction. Front2Back Yao et al. (2020) generates 3D shapes by fusing predicted depth and normal images and DV-Net Jia et al. (2020) predicts dense object point clouds using dual-view RGB images with a gated control network to fuse point clouds from the two views. FoldingNet Yang et al. (2018) learns to reconstruct arbitrary point clouds from a single 2D grid. AtlasNet Groueix et al. (2018) use learned parametric representation while Mescheder et al. (2019); Park et al. (2019); Liu et al. (2019b;a); Murez et al. (2020) employ implicit surface representation to reconstruct 3D shapes.
2.3 DEPTH ESTIMATION
Compared to 3D shape generation, depth prediction is an easier problem formulation since it simplifies the task to per-view depth map estimation. Traditional methods Campbell et al. (2008); Galliani et al. (2015); Schönberger et al. (2016) use multi-view stereo principles for depth prediction. Deep learning based multi-view stereo depth estimation was first introduced in Hartmann et al. (2017) where a learned cost metric is used to estimate patch similarities. DeepMVS Huang et al. (2018) warps multi-view images to 3D space and then applies deep networks for regularization and aggregation to estimate depth images. Learned 3D cost volume based depth prediction was proposed in MVSNet Yao et al. (2018) where a 3 dimensional cost volume is built using homographically warped 2D features from multi-view images and 3D CNNs are used for cost regularization and depth regression. This idea was further extended by Chen et al. (2019); Luo et al. (2019); Gu et al. (2019); Yao et al. (2019).
3 METHODOLOGY
Figure 1 shows the architecture of the proposed system which takes as input multi-view color images of an object with known poses and outputs a triangle mesh representing the surface of the object.
3.1 MULTI-VIEW VOXEL GRID PREDICTION
Single-view Voxel Grid Prediction The single-view voxel branch consists of a ResNet feature extractor and a fully convolutional voxel grid prediction network. It generates the coarse initial shape of an object from one viewpoint as a voxel occupancy grid using a color image. Here, we set the resolution of the generated voxel occupancy grid to 32 × 32 × 32. The voxel prediction networks for all viewpoints share the same weights.
Probabilistic Occupancy Grid Merging A voxel occupancy grid predicted from a single viewpoint suffers from occlusion and limited visibility. In order to fuse voxel grids from different viewpoints, we propose a probabilistic occupancy grid merging method which merges the voxel grids from each input viewpoint probabilistically to obtain the final voxel grid output. This allows occluded regions in one view to be estimated from other views where those regions are visible, as well as increasing the confidence of prediction in overlapping regions. The occupancy probability of each voxel is represented by p(x), which is converted to log-odds (logit):
l(x) = log( p(x) / (1 − p(x)) )    (1)
A Bayesian update on the probabilities reduces to a simple summation of log-odds Konolige (1997). Hence, the multi-view log-odds of a voxel is given by:
l(x) = l1(x) + l2(x) + ...+ ln(x) (2)
where li is the voxel’s log-odds in view i and n is the number of input views. The final voxel probability x is obtained by applying the inverse function of Equation (1) which is a sigmoid function.
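A minimal sketch of Equations (1)–(2) is shown below; it is illustrative only and assumes the per-view probabilities have already been resampled into a common frame.

```python
import torch

def merge_occupancy(view_probs, eps=1e-6):
    # view_probs: (num_views, 32, 32, 32) per-view occupancy probabilities p_i(x)
    p = view_probs.clamp(eps, 1 - eps)
    log_odds = torch.log(p / (1 - p))          # Eq. (1): l_i(x) = log(p / (1 - p))
    return torch.sigmoid(log_odds.sum(dim=0))  # Eq. (2) summed, mapped back with a sigmoid
```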
3.2 MESH REFINEMENT
The cubified mesh from the voxel branch only provides a coarse reconstruction of the object's surface. We apply graph convolutional networks, which represent each mesh vertex as one graph node and deform the vertices to more accurate positions.
GCN-based Mesh Deformation The features pooled from the multi-view images, along with the 3D coordinates of the vertices in the world frame, are used as the features of the graph nodes. A series of Graph-based Convolutional Network (GCN) blocks is applied to deform the mesh at the current stage to the next stage, starting with the cubified voxel grid. A graph convolution deforms mesh vertices by propagating features from neighboring vertices, applying f′_i = ReLU(W_0 f_i + Σ_{j∈N(i)} W_1 f_j), where N(i) is the set of neighboring vertices of the i-th vertex in the mesh, f_i represents the feature vector of a vertex, and W_0 and W_1 are learnable parameters of the model. Each GCN block utilizes several graph convolutions to transform the vertex features, along with a final vertex refinement operation where the features and vertex coordinates are further transformed as v′_i = v_i + tanh(W_vert [f_i; v_i]), where the matrix W_vert is another learnable parameter, to obtain the deformed mesh (a minimal code sketch of these operations is given at the end of this subsection).
Contrastive Depth Feature Extraction Yao et al. (2020) demonstrate that using intermediate, image-centric 2.5D representations instead of directly generating 3D shapes in the global frame from raw 2D images can improve 3D reconstruction quality. We therefore propose to formulate the features for the graph nodes using 2.5D depth maps as additional inputs alongside the RGB features. Specifically, we render the meshes at different GCN stages to depth images at all the input views using Kato et al. (2018) and use them along with the predicted depths for depth feature extraction. We call this form of depth input contrastive depth, as it contrasts the rendered depths of the current mesh against the predicted depths and allows the network to reason about the deformation better than when using predicted depth or color images alone. Given the 2D features, the corresponding feature vectors of individual vertices can be found by projecting the 3D vertex coordinates to the feature planes using known camera parameters. We use VGG-16 Simonyan & Zisserman (2014) as our contrastive depth feature extraction network.
Multi-View Depth Estimation We extend MVSNet Yao et al. (2018) and predict the depth maps of all views, since the original implementation predicts the depth of only one reference view. This is achieved by transforming the feature volumes to each view's coordinate frame using homography warping and applying identical cost volume regularization and depth regression on each view. A detailed network architecture diagram of this module is provided in the appendix.
Attention-based Multi-View Feature Pooling In order to fuse the multi-view contrastive depth features, we formulate an attention module by adapting the multi-head attention mechanism originally designed for sequence-to-sequence machine translation using the transformer (encoder-decoder) architecture Vaswani et al. (2017). In a transformer architecture the encoder hidden state is mapped to lower-dimensional key-value pairs (K, V), while the decoder hidden state is mapped to a query vector Q using independent fully connected layers. The encoder hidden state in our case is the multi-view features, while the decoder hidden state is the mean of the multi-view features. The attention weights are computed using the scaled dot-product:
Attention(Q, K, V) = softmax(QKᵀ / √N) V    (3)
where N is the number of input views.
Multiple attention heads are used which are concatenated and transformed to obtain the final output
head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)    (4)
MultiHead(Q, K, V) = [head_1; ...; head_h] W^0    (5)
where the W matrices are parameters to be learned, h is the number of attention heads, and i ∈ [1, h]. We choose multi-head attention as our feature pooling method since it allows the model to attend to information from different representation subspaces of the features by training multiple attention heads in parallel. This method is also invariant to the order and number of input views. We visualize the learned attention weights (averaged over the attention heads) in Figure 2, where we can observe that the attention weights roughly take into account the visibility/occlusion information from each view.
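Before moving to the losses, the sketch below illustrates the graph convolution and vertex refinement operations defined at the start of this subsection; it is an assumed minimal implementation (padded neighbour indices with a validity mask), not the authors' code.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    # f'_i = ReLU(W_0 f_i + sum_{j in N(i)} W_1 f_j)
    def __init__(self, dim):
        super().__init__()
        self.w0 = nn.Linear(dim, dim)
        self.w1 = nn.Linear(dim, dim)

    def forward(self, f, neighbors, mask):
        # f: (V, dim) vertex features; neighbors: (V, K) padded indices; mask: (V, K) validity
        agg = (self.w1(f)[neighbors] * mask.unsqueeze(-1)).sum(dim=1)
        return torch.relu(self.w0(f) + agg)

class VertexRefine(nn.Module):
    # v'_i = v_i + tanh(W_vert [f_i; v_i])
    def __init__(self, dim):
        super().__init__()
        self.w_vert = nn.Linear(dim + 3, 3)

    def forward(self, f, v):
        return v + torch.tanh(self.w_vert(torch.cat([f, v], dim=-1)))
```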
3.3 LOSS FUNCTIONS
Mesh losses The losses derived from Wang et al. (2018) to constrain the mesh predicted by each GCN block (P) to resemble the ground truth (Q) include the Chamfer distance
L_chamfer(P, Q) = |P|^{-1} Σ_{(p,q)∈Λ_{P,Q}} ||p − q||² + |Q|^{-1} Σ_{(q,p)∈Λ_{Q,P}} ||q − p||²
and the surface normal loss
L_normal(P, Q) = −|P|^{-1} Σ_{(p,q)∈Λ_{P,Q}} |u_p · u_q| − |Q|^{-1} Σ_{(q,p)∈Λ_{Q,P}} |u_q · u_p|,
with additional regularization in the form of the edge length loss
L_edge(V, E) = (1/|E|) Σ_{(v,v′)∈E} ||v − v′||²
for visually appealing results.
Depth loss Our depth prediction network is supervised using the adaptive reversed Huber loss (also known as the BerHu criterion) Lambert-Lacroix & Zwald (2016):
L_depth = |x| if |x| ≤ c, and (x² + c²) / (2c) otherwise,
where x is the depth error of a pixel and c is a constant set to 0.2. Note that the original MVSNet uses an L1 loss, but we used the BerHu loss since it gave slightly higher accuracy. Intuitively, this is because BerHu provides a good balance between the L1 and L2 losses and has shown similar improvements in Laina et al. (2016).
Contrastive depth loss The BerHu loss is also applied between the rendered depth images at different GCN stages and the predicted depth images:
L_contrastive = |x| if |x| ≤ c, and (x² + c²) / (2c) otherwise.
Voxel loss A binary cross-entropy loss between the predicted voxel occupancy probabilities and the ground truth occupancies is used to supervise the voxel predictions: L_voxel = −( y(x) log p(x) + (1 − y(x)) log(1 − p(x)) ), where p(x) is the predicted occupancy probability of voxel x and y(x) is its ground truth occupancy.
Final loss We use the weighted sum of the individual losses discussed above as the final loss to train our model in an end-to-end fashion: L = λ_chamfer L_chamfer + λ_normal L_normal + λ_edge L_edge + λ_depth L_depth + λ_contrastive L_contrastive + λ_voxel L_voxel.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Comparisons We evaluate the proposed method against various multi-view shape generation methods. The state-of-the-art method is Pixel2Mesh++ Wen et al. (2019) (referred to as P2M++). Wen et al. (2019) also provide a baseline by directly extending Pixel2Mesh Wang et al. (2018) to operate on multi-view images (referred to as MVP2M) using their statistical feature pooling method to aggregate features from multiple color images. Results from additional multi-view shape generation baselines 3D-R2N2 Choy et al. (2016) and LSM Kar et al. (2017) are also reported.
Dataset We evaluate our method against the state-of-the-art methods on the dataset from Choy et al. (2016), which is a subset of ShapeNet Chang et al. (2015) and has been widely used by recent 3D shape generation methods. It contains 50K 3D CAD models from 13 categories. Each model is rendered with a transparent background from 24 randomly chosen camera viewpoints to obtain color images. The corresponding camera intrinsics and extrinsics are provided in the dataset. Since the dataset does not contain depth images, we render them using a custom depth renderer at the same viewpoints as the color images and with the same camera intrinsics. We follow the training/testing/validation split of Gkioxari et al. (2019).
Implementation For the depth prediction module, we follow the original MVSNet Yao et al. (2018) implementation. The output depth resolution is reduced by a factor of 4 to 56×56 from the 224×224 input image. The number of depth hypotheses is chosen as 48, which offers a balance between accuracy and running/training time efficiency. These depth hypotheses represent values from 0.1 m to 1.3 m at an interval of 25 mm. These values were chosen based on the range of depths present in the dataset.
The hierarchical features obtained from the "Contrastive Depth Features Extractor" are of 4800 dimensions in total for each view. The aggregated multi-view features are compressed to 480 dimensions after applying attentive feature pooling. Five attention heads are used for merging the multi-view features. The loss function weights are set as λ_chamfer = 1, λ_normal = 1.6 × 10^{-4}, λ_depth = 0.1, λ_contrastive = 0.001 and λ_voxel = 1. Two settings of λ_edge were used: λ_edge = 0 (referred to as Best), which gives better quantitative results, and λ_edge = 0.2 (referred to as Pretty), which gives better qualitative results.
Training and Runtime The network is optimized using the Adam optimizer with a learning rate of 10^{-4}. The training is done on 5 Nvidia RTX-2080 GPUs with an effective batch size of 5. The depth prediction network (MVSNet) is trained independently for 30 epochs. Then the whole system is
trained for another 40 epochs with the weights of the MVSNet frozen. Our system is implemented in PyTorch deep learning framework and it takes around 60 hours for training. Evaluation Metric Following Wang et al. (2018); Wen et al. (2019), we use F1-score as our evaluation metric. The F1-score is the harmonic mean of precision and recall where the precision/recall are calculated by finding the percentage of points in the predicted/ground truth that can find a nearest neighbor from the other within a threshold. We provide evaluations with two threshold values: τ and 2τ where τ = 10−4 m2.
4.2 COMPARISON WITH PREVIOUS MULTI-VIEW SHAPE GENERATION METHODS
We quantitatively compare our method against previous works for multi-view shape generation in Table 1 and show the effectiveness of our methods in improving the shape quality. Our method outperforms the state-of-the-art method Pixel2Mesh++ Wen et al. (2019) with a decrease in chamfer distance to ground truth by 34% and 15% increase in F1-score at threshold τ . Note that in Table 1 the same model is trained for all the categories but accuracy on individual categories as well as average over the categories are evaluated. We provide the chamfer distances in the appendix.
Category      F-score (τ) ↑                                              F-score (2τ) ↑
              3D-R2N2  LSM    MVP2M  P2M++  Ours(pretty)  Ours(best)     3D-R2N2  LSM    MVP2M  P2M++  Ours(pretty)  Ours(best)
Couch         45.47    43.02  53.17  57.56  71.63         73.63          59.97    55.49  73.24  75.33  85.28         88.24
Cabinet       54.08    50.80  56.85  65.72  75.91         76.39          64.42    60.72  76.58  81.57  87.61         88.84
Bench         44.56    49.33  60.37  66.24  81.11         83.76          62.47    65.92  75.69  79.67  90.56         92.57
Chair         37.62    48.55  54.19  62.05  77.63         78.69          54.26    64.95  72.36  77.68  88.24         90.02
Monitor       36.33    43.65  53.41  60.00  74.14         76.64          48.65    56.33  70.63  75.42  86.04         88.89
Firearm       55.72    56.14  79.67  80.74  92.92         94.32          76.79    73.89  89.08  89.29  96.81         97.67
Speaker       41.48    45.21  48.90  54.88  66.02         67.83          52.29    56.65  68.29  71.46  79.76         82.34
Lamp          32.25    45.58  50.82  62.56  72.47         75.93          49.38    64.76  65.72  74.00  82.00         85.33
Cellphone     58.09    60.11  66.07  74.36  85.57         86.45          69.66    71.39  82.31  86.16  93.40         94.28
Plane         47.81    55.60  75.16  76.79  89.23         92.13          70.49    76.39  86.38  86.62  94.65         96.57
Table         48.78    48.61  65.95  71.89  82.37         83.68          62.67    62.22  79.96  84.19  90.24         91.97
Car           59.86    51.91  67.27  68.45  77.01         80.43          78.31    68.20  84.64  85.19  88.99         92.33
Watercraft    40.72    47.96  61.85  62.99  75.52         80.48          63.59    66.95  77.49  77.32  86.77         90.35
Mean          46.37    49.73  61.05  66.48  78.58         80.80          62.53    64.91  77.10  80.30  88.49         90.72
Table 1: Quantitative comparison against state-of-the-art multi-view shape generation methods. We report the F-score on each semantic category along with the mean over all categories using two thresholds, τ and 2τ, for the nearest neighbor match, where τ = 10⁻⁴ m².
We also provide visual results for qualitative assessment of the generated shapes by our Pretty model in Figure 3 which shows that it is able to more accurately predict topologically diverse shapes.
4.3 ABLATION STUDIES
Contrastive Depth Feature Extraction We evaluate several methods for contrastive feature extraction (Sub-section 3.2). These methods are 1) Input Concatenation: using the concatenated rendered and predicted depth maps as input to the VGG feature extractor, 2) Input Difference: using the difference of the two depth maps as input to VGG, 3) Feature Concatenation: concatenating the features from the rendered and predicted depths extracted by a shared VGG, 4) Feature Difference: using the difference of the features from the two depth maps extracted by a shared VGG, 5) Predicted depth only: using the VGG features from the predicted depths only, and 6) Rendered depth only: using the VGG features from the rendered depths only. The quantitative results are summarized in Table 2 and show that the Input Concatenation method produces better results than the other formulations.
Accuracy with different settings Table 3 shows the contribution of different components towards the final accuracy. Naively extending the single-view Mesh R-CNN Gkioxari et al. (2019) to multiple views using statistical feature pooling Wen et al. (2019) for mesh refinement (row 1) gives an F1-score of 72.74% at threshold τ, which is a 6.26% improvement over Pixel2Mesh++. We further extend the above method with our probabilistic multi-view voxel grid prediction in row 2 and get a 4.23% improvement.
In row 3 of Table 3 we use our contrastive depth features instead of RGB features for mesh refinement and get a 2.7% improvement. We then replace the statistical feature pooling with the proposed attention method and get a 0.19% improvement. The improvement is not significant on our final architecture, but we found the multi-head attention to perform better on more light-weight architectures. We also evaluate the effect of the additional regularization from the contrastive depth loss (rendered depth vs. predicted depth) in the 5th row, which improves the score by 0.98%. In row 6 we use ground truth instead of predicted depths on our final model, which gives 84.58% as the upper bound on our mesh prediction accuracy in relation to the depth prediction accuracy.
Number of Views We test the performance of our framework with respect to the number of views. Table 4 shows that the accuracy of our method increases as we increase the number of input views for training. These experiments also validate that the attention-based feature pooling can efficiently encode features from different views to take advantage of a larger number of views.
Table 5 shows the results when using different numbers of views during testing on our model trained with 3 views. It indicates that increasing the number of views during testing does not improve the accuracy, while decreasing the number of views causes a significant drop in accuracy.
Metric   2      3      4      5      6
F1-τ     73.60  80.80  82.61  83.76  84.25
F1-2τ    85.80  90.72  91.78  92.73  93.14
Table 4: Accuracy w.r.t the number of views during training. The evaluation was performed on the same number of views as training.
Metric   2      3      4      5      6
F1-τ     72.46  80.80  80.98  80.94  80.85
F1-2τ    84.49  90.72  91.03  91.16  91.20
Table 5: Accuracy w.r.t the number of views during testing. The same model trained with 3 views was used in all of the cases.
5 CONCLUSION
We propose a neural network based solution to predict 3D triangle mesh models of objects from images taken from multiple views. First, we propose a multi-view voxel grid prediction module which probabilistically merges voxel grids predicted from individual input views. We then cubify the merged voxel grid into a triangle mesh and apply graph convolutional networks to further refine the mesh. The features for the mesh vertices are extracted from a contrastive depth input consisting of the rendered depths at each refinement stage along with the predicted depths. The proposed mesh reconstruction method outperforms existing methods by a large margin and is capable of reconstructing objects with more complex topologies.
A APPENDIX
NETWORK ARCHITECTURE
MVSNET ARCHITECTURE
Our depth prediction module is based on MVSNet Yao et al. (2018), which constructs a regularized 3D cost volume to estimate the depth map of the reference view. Here, we extend MVSNet to predict the depth maps of all views instead of only the reference view. This is achieved by transforming the feature volumes to each view's coordinate frame using homography warping and applying identical cost volume regularization and depth regression on each view. This allows the reuse of pre-regularization feature volumes for efficient multi-view depth prediction invariant to the order of input images. Figure 4 shows the architecture of our depth estimation module.
PROBABILISTIC OCCUPANCY GRID MERGING
We use the single-view voxel prediction network from Gkioxari et al. (2019) to predict voxel grids for each of the input images in their respective local coordinate frames. The occupancy grids are transformed to the global frame (which is set to the coordinate frame of the first image) by finding the equivalent global grid values in the local grids after applying bilinear interpolation on the closest matches. The voxel grids in global coordinates are then probabilistically merged according to Sub-section 3.1 of the main submission.
EXPERIMENTS
We quantitatively compare our method against previous works for multi-view shape generation in Table 6 and show the effectiveness of our proposed shape generation method in improving shape quality. Our method outperforms the state-of-the-art method Pixel2Mesh++ Wen et al. (2019) with a 34% decrease in chamfer distance to ground truth. Note that in Table 6 the same model is trained for all the categories, but accuracy on individual categories as well as the average over all the categories is evaluated.
Category      Chamfer Distance (CD) ↓
              3D-R2N2  LSM    MVP2M  P2M++  Ours
Couch         0.806    0.730  0.534  0.439  0.220
Cabinet       0.613    0.634  0.488  0.337  0.230
Bench         1.362    0.572  0.591  0.549  0.159
Chair         1.534    0.495  0.583  0.461  0.201
Monitor       1.465    0.592  0.658  0.566  0.217
Firearm       0.432    0.385  0.305  0.305  0.123
Speaker       1.443    0.767  0.745  0.635  0.402
Lamp          6.780    1.768  0.980  1.135  0.755
Cellphone     1.161    0.362  0.445  0.325  0.138
Plane         0.854    0.496  0.403  0.422  0.084
Table         1.243    0.994  0.511  0.388  0.181
Car           0.358    0.326  0.321  0.249  0.165
Watercraft    0.869    0.509  0.463  0.508  0.175
Mean          1.455    0.664  0.541  0.486  0.211
Table 6: Quantitative comparison against state-of-the-art multi-view shape generation methods. Following Wen et al. (2019), we report the Chamfer Distance in m² × 1000 from ground truth for different methods. Note that the same model is trained for all the categories, but accuracy on individual categories as well as the average over all the categories is evaluated.
ABLATION STUDIES
Coarse Shape Generation We compare the voxel grids produced by our proposed probabilistic multi-view merging against the single-view method of Gkioxari et al. (2019). As shown in Table 7, the accuracy of the initial shape generated from the probabilistically merged voxel grid is higher than that from individual views.
Accuracy at Different GCN Stages We analyze the accuracy of meshes at different GCN stages in Table 8. The results validate that our method produces the meshes in a coarse-to-fine manner and multiple GCN refinements improve the mesh quality.
Resolution of Depth Prediction We conduct experiments using different numbers of depth hypotheses in our depth prediction network (Sub-section A), producing depth values at different resolutions. A higher number of depth hypotheses means a finer resolution of the predicted depths. The quantitative results with different numbers of hypotheses are summarized in Table 9. We set the number of depth hypotheses to 48 for our final architecture, which is equivalent to a resolution of 25 mm. We observe that the mesh accuracy remains relatively unchanged if we predict depths at finer resolutions.
Metric   Single-view   Multi-view
F1-τ     25.19         31.27
F1-2τ    36.75         44.46
Table 7: Accuracy of predicted voxel grids from single-view prediction compared against the proposed probabilistically merged multi-view voxel grids. The voxel branch was trained separately without the mesh refinement and evaluation was performed on the cubified voxel grids. We use three views for probabilistic grid merging.
Generalization Capability We conduct experiments to evaluate the generalization capability of our system across the semantic categories. We train our model with only 12 out of the 13 categories and test on the category that was left out. Table 10 shows that the accuracy generally does not decrease significantly when compared with the model that was trained on all 13 categories when using 2τ threshold for the F-score.
Category      F-score (τ) ↑            F-score (2τ) ↑
              Excluding  Including     Excluding  Including
Couch         63.29      73.63         80.79      88.24
Cabinet       68.26      76.39         83.10      88.84
Bench         76.08      83.76         87.42      92.57
Chair         60.60      78.69         75.93      90.02
Monitor       67.26      76.64         81.57      88.89
Firearm       78.59      94.32         86.28      97.67
Speaker       62.39      67.83         77.77      82.34
Lamp          63.50      75.93         74.66      85.33
Cellphone     67.24      86.45         80.54      94.28
Plane         57.48      92.13         67.27      96.57
Table         76.41      83.68         86.86      91.97
Car           59.08      80.43         75.58      92.33
Watercraft    64.97      80.48         78.95      90.35
Table 10: Accuracy when a category is excluded during training and evaluation is performed on the category to verify how well training on other categories generalizes to the excluded category.
B APPENDIX
BEST VS PRETTY MODELS
We provide a qualitative comparison between our models trained with the best and pretty configurations in Figure 5. The best configuration refers to our model trained without edge regularization, while pretty refers to the model trained with the regularization (Sub-section 4.1). We observe that without the regularization we get a higher score on our evaluation metrics but obtain degenerate meshes with self-intersections and irregularly sized faces.
FAILURE CASES
Some failure cases of our model (with pretty setting) are shown in Figure 6. We notice that the rough topology of the mesh is recovered while we failed to reconstruct the fine topology. We can regard the recovery from wrong initial topology as a promising future work. | 1. What are the strengths and weaknesses of the proposed approach in the paper?
2. What are the novel aspects introduced by the paper in the field of computer vision?
3. What are the questions raised regarding the choice of certain components in the model and their significance?
4. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content? | Review | Review
Quality: Overall the quality of this work is high. The quantitative and qualitative results are impressive relative to the SoA. I would like to see the qualitative results for the Best model as opposed to just the Pretty model, and I'm curious why the best qualitative model was not the same as the best quantitative model. I would think analyzing this difference could give the authors insight into how to improve the model.
Clarity: Overall the paper is written clearly, explaining and justifying the different components of the model clearly. There are a few issues/questions I have:
Page 2: change "non-reflective" -> "reflective"
For depth estimation, I'm wondering why you changed the MVSNet loss function to use BerHu instead of L1 used in the original paper?
Could you define the terms in the BerHu criterion? What are x and c? It would also be good to shed some intuition on why this criterion is the right one.
The mixing constants in your loss function (λ) vary across several orders of magnitude. How were those selected?
On page 6 you state that two values of τ are used, but elsewhere in the paper τ is defined as 10⁻⁴ and you use τ and 2τ.
Originality: The paper generally uses a mix of SoA techniques creatively woven together in a fairly sophisticated model. Other novel aspects, such as using the neural renderer to create the contrastive depth module, were interesting.
Significance: This work is significant based on the importance of the problem - this is one of the harder and most important problems in computer vision today, in the quality of its results and in the creative way it combines SoA methods to provide multiple semi-supervised losses. |
ICLR | Title
ES-MAML: Simple Hessian-Free Meta Learning
Abstract
We introduce ES-MAML, a new framework for solving the model agnostic meta learning (MAML) problem based on Evolution Strategies (ES). Existing algorithms for MAML are based on policy gradients, and incur significant difficulties when attempting to estimate second derivatives using backpropagation on stochastic policies. We show how ES can be applied to MAML to obtain an algorithm which avoids the problem of estimating second derivatives, and is also conceptually simple and easy to implement. Moreover, ES-MAML can handle new types of non-smooth adaptation operators, and other techniques for improving performance and estimation of ES methods become applicable. We show empirically that ES-MAML is competitive with existing methods and often yields better adaptation with fewer queries.
1 INTRODUCTION
Meta-learning is a paradigm in machine learning that aims to develop models and training algorithms which can quickly adapt to new tasks and data. Our focus in this paper is on meta-learning in reinforcement learning (RL), where data efficiency is of paramount importance because gathering new samples often requires costly simulations or interactions with the real world. A popular technique for RL meta-learning is Model Agnostic Meta Learning (MAML) (Finn et al., 2017; 2018), a model for training an agent which can quickly adapt to new and unknown tasks by performing one (or a few) gradient updates in the new environment. We provide a formal description of MAML in Section 2.
MAML has proven to be successful for many applications. However, implementing and running MAML continues to be challenging. One major complication is that the standard version of MAML requires estimating second derivatives of the RL reward function, which is difficult when using backpropagation on stochastic policies; indeed, the original implementation of MAML (Finn et al., 2017) did so incorrectly, which spurred the development of unbiased higher-order estimators (DiCE, (Foerster et al., 2018)) and further analysis of the credit assignment mechanism in MAML (Rothfuss et al., 2019). Another challenge arises from the high variance inherent in policy gradient methods, which can be ameliorated through control variates such as in T-MAML (Liu et al., 2019), through careful adaptive hyperparameter tuning (Behl et al., 2019; Antoniou et al., 2019) and learning rate annealing (Loshchilov & Hutter, 2017).
To avoid these issues, we propose an alternative approach to MAML based on Evolution Strategies (ES), as opposed to the policy gradient underlying previous MAML algorithms. We provide a detailed discussion of ES in Section 3.1. ES has several advantages:
∗Equal contribution. †Work performed during Google internship. ‡Work performed during the Google AI Residency Program. http://g.co/airesidency
1. Our zero-order formulation of ES-MAML (Section 3.2, Algorithm 3) does not require estimating any second derivatives. This dodges the many issues caused by estimating second derivatives with backpropagation on stochastic policies (see Section 2 for details).
2. ES is conceptually much simpler than policy gradients, which also translates to ease of implementation. It does not use backpropagation, so it can be run on CPUs only.
3. ES is highly flexible with different adaptation operators (Section 3.3).
4. ES allows us to use deterministic policies, which can be safer when doing adaptation (Section 4.3). ES is also capable of learning linear and other compact policies (Section 4.2).
On the point (4), a feature of ES algorithms is that exploration takes place in the parameter space. Whereas policy gradient methods are primarily motivated by interactions with the environment through randomized actions, ES is driven by optimization in high-dimensional parameter spaces with an expensive querying model. In the context of MAML, the notions of “exploration” and “task identification” have thus been shifted to the parameter space instead of the action space. This distinction plays a key role in the stability of the algorithm. One immediate implication is that we can use deterministic policies, unlike policy gradients which is based on stochastic policies. Another difference is that ES uses only the total reward and not the individual state-action pairs within each episode. While this may appear to be a weakness, since less information is being used, we find in practice that it seems to lead to more stable training profiles.
This paper is organized as follows. In Section 2, we give a formal definition of MAML, and discuss related works. In Section 3, we introduce Evolutionary Strategies and show how ES can be applied to create a new framework for MAML. In Section 4, we present numerical experiments, highlighting the topics of exploration (Section 4.1), the utility of compact architectures (Section 4.2), the stability of deterministic policies (Section 4.3), and comparisons against existing MAML algorithms in the few-shot regime (Section 4.4). Additional material can be found in the Appendix.
2 MODEL AGNOSTIC META LEARNING IN RL
We first discuss the original formulation of MAML (Finn et al., 2017). Let T be a set of reinforcement learning tasks with common state and action spaces S,A, and P(T ) a distribution over T . In the standard MAML setting, each task Ti ∈ T has an associated Markov Decision Process (MDP) with transition distribution qi(st+1|st, at), an episode length H , and a reward function RTi which maps a trajectory τ = (s0, a1, ..., aH−1, sH) to the total reward R(τ). A stochastic policy is a function π : S → P(A) which maps states to probability distributions over the action space. A deterministic policy is a function π : S → A. Policies are typically encoded by a neural network with parameters θ, and we often refer to the policy πθ simply by θ.
The MAML problem is to find the so-called MAML point (called also a meta-policy), which is a policy θ∗ that can be ‘adapted’ quickly to solve an unknown task T ∈ T by taking a (few)1 policy gradient steps with respect to T . The optimization problem to be solved in training (in its one-shot version) is thus of the form:
max θ J(θ) := ET∼P(T )[Eτ ′∼PT (τ ′|θ′)[RT (τ
′)]], (1)
where: θ′ = U(θ, T ) = θ + α∇θEτ∼PT (τ |θ)[RT (τ)] is called the adapted policy for a step size α > 0 and PT (·|η) is a distribution over trajectories given task T ∈ T and conditioned on the policy parameterized by η.
Standard MAML approaches are based on the following expression for the gradient of the MAML objective function (1) to conduct training:
∇θJ(θ) = ET∼P(T)[Eτ′∼PT(τ′|θ′)[∇θ′ logPT(τ′|θ′)RT(τ′)∇θU(θ, T)]]. (2)
We collectively refer to algorithms based on computing (2) using policy gradients as PG-MAML.
1We adopt the common convention of defining the adaptation operator with a single gradient step, to simplify notation. It can be extended to multiple steps.
Since the adaptation operator U(θ, T) contains the policy gradient ∇θEτ∼PT(τ|θ)[R(τ)], its own gradient ∇θU(θ, T) is second-order in θ:
∇θU = I + α∫ PT(τ|θ)∇2θ log πθ(τ)RT(τ)dτ + α∫ PT(τ|θ)∇θ log πθ(τ)∇θ log πθ(τ)ᵀRT(τ)dτ.  (3)
Correctly computing the gradient (2) with the term (3) using automatic differentiation is known to be tricky. Multiple authors (Foerster et al., 2018; Rothfuss et al., 2019; Liu et al., 2019) have pointed out that the original implementation of MAML incorrectly estimates the term (3), which inadvertently causes the training to lose ‘pre-adaptation credit assignment’. Moreover, even when correctly implemented, the variance when estimating (3) can be extremely high, which impedes training. To improve on this, extensions to the original MAML include ProMP (Rothfuss et al., 2019), which introduces a new low-variance curvature (LVC) estimator for the Hessian, and T-MAML (Liu et al., 2019), which adds control variates to reduce the variance of the unbiased DiCE estimator (Foerster et al., 2018). However, these are not without their drawbacks: the proposed solutions are complicated, the variance of the Hessian estimate remains problematic, and LVC introduces unknown estimator bias.
Another issue that arises in PG-MAML is that policies are necessarily stochastic. However, randomized actions can lead to risky exploration behavior when computing the adaptation, especially for robotics applications where the collection of tasks may involve differing system dynamics as opposed to only differing rewards (Yang et al., 2019). We explore this further in Section 4.3.
These issues, namely the difficulty of estimating the Hessian term (3), the typically high variance of ∇θJ(θ) for policy gradient algorithms in general, and the unsuitability of stochastic policies in some domains, lead us to the proposed method, ES-MAML, presented in Section 3.
Aside from policy gradients, there have also been biologically-inspired algorithms for MAML, based on concepts such as the Baldwin effect (Fernando et al., 2018). However, we note that despite the similar naming, methods such as ‘Evolvability ES’ (Gajewski et al., 2019) bear little resemblance to our proposed ES-MAML. The problem solved by our algorithm is the standard MAML, whereas (Gajewski et al., 2019) aims to maximize loosely related notions of the diversity of behavioral characteristics. Moreover, ES-MAML and the extensions we consider are all derived using notions such as smoothings and approximations, with rigorous mathematical definitions as stated below.
3 ES-MAML ALGORITHMS
Formulating MAML with ES allows us to bring numerous techniques originally developed for enhancing ES to the MAML setting. We aim to improve both phases of the MAML algorithm: the meta-learning training algorithm, and the efficiency of the adaptation operator.
3.1 EVOLUTION STRATEGIES METHODS (ES)
Evolution Strategies (ES) (Wierstra et al., 2008; 2014), which recently became popular for RL (Salimans et al., 2017), rely on optimizing the smoothing of the blackbox function f : Rd → R, which takes as input parameters θ ∈ Rd of the policy and outputs total discounted (expected) reward obtained by an agent applying that policy in the given environment. Instead of optimizing the function f directly, we optimize a smoothed objective. We define the Gaussian smoothing of f as f̃σ(θ) = Eg∼N(0,Id)[f(θ + σg)]. The gradient of this smoothed objective, sometimes called an ES-gradient, is given as (see (Nesterov & Spokoiny, 2017)):
∇θf̃σ(θ) = (1/σ) Eg∼N(0,Id)[f(θ + σg)g].  (4)
Note that the gradient can be approximated via Monte Carlo (MC) samples:
1 ESGrad(f, θ, n, σ)   inputs: function f, policy θ, number of perturbations n, precision σ
2   Sample n i.i.d. N(0, I) vectors g1, . . . , gn;
3   return (1/(nσ)) ∑_{i=1}^{n} f(θ + σgi)gi;
Algorithm 1: Monte Carlo ES Gradient
In ES literature the above algorithm is often modified by adding control variates to equation 4 to obtain other unbiased estimators with reduced variance. The forward finite difference (Forward-FD) estimator (Choromanski et al., 2018) is given by subtracting the current policy value f(θ), yielding ∇θf̃σ(θ) = (1/σ)Eg∼N(0,Id)[(f(θ + σg) − f(θ))g]. The antithetic estimator (Nesterov & Spokoiny, 2017; Mania et al., 2018) is given by the symmetric difference ∇θf̃σ(θ) = (1/(2σ))Eg∼N(0,Id)[(f(θ + σg) − f(θ − σg))g]. Notice that the variance of the Forward-FD and antithetic estimators is translation-invariant with respect to f. In practice, the Forward-FD or antithetic estimator is usually preferred over the basic version expressed in equation 4.
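To make the estimator concrete, below is a minimal numpy sketch of Algorithm 1 with the Forward-FD control variate; the function and parameter names are our own illustration, not the authors' implementation.

import numpy as np

def es_grad(f, theta, n, sigma, rng=None, forward_fd=True):
    # Monte Carlo estimate of the gradient of the Gaussian smoothing of f (Algorithm 1).
    # With forward_fd=True, the current value f(theta) is subtracted as a control variate.
    rng = rng or np.random.default_rng(0)
    gs = rng.standard_normal((n, theta.size))
    baseline = f(theta) if forward_fd else 0.0
    vals = np.array([f(theta + sigma * g) - baseline for g in gs])
    return (vals[:, None] * gs).sum(axis=0) / (n * sigma)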
In the next sections we will refer to Algorithm 1 for computing the gradient though we emphasize that there are several other recently developed variants of computing ES-gradients as well as applying them for optimization. We describe some of these variants in Section 3.3 and appendix A.3. A key feature of ES-MAML is that we can directly make use of new enhancements of ES.
3.2 META-TRAINING MAML WITH ES
To formulate MAML in the ES framework, we take a more abstract viewpoint. For each task T ∈ T, let fT(θ) be the (expected) cumulative reward of the policy θ. We treat fT as a blackbox, and make no assumptions on its structure (so the task need not even be an MDP, and fT may be nonsmooth). The MAML problem is then
maxθ J(θ) := ET∼P(T) fT(U(θ, T)).  (5)
As argued in (Liu et al., 2019; Rothfuss et al., 2019) (see also Section 2), a major challenge for policy gradient MAML is estimating the Hessian, which is both conceptually subtle and difficult to correctly implement using automatic differentiation. The algorithm we propose obviates the need to calculate any second derivatives, and thus avoids this issue.
Suppose that we can evaluate (or approximate) fT(θ) and U(θ, T), but fT and U(·, T) may be nonsmooth or their gradients may be intractable. We consider the Gaussian smoothing J̃σ of the MAML reward (5), and optimize J̃σ using ES methods. The gradient ∇J̃σ(θ) is given by
∇J̃σ(θ) = ET∼P(T), g∼N(0,I)[(1/σ) fT(U(θ + σg, T))g]  (6)
and can be estimated by jointly sampling over (T, g) and evaluating fT(U(θ + σg, T)). This algorithm is specified in the Algorithm 2 box, and we refer to it as (zero-order) ES-MAML.
Data: initial policy θ0, meta step size β
1 for t = 0, 1, . . . do
2   Sample n tasks T1, . . . , Tn and i.i.d. vectors g1, . . . , gn ∼ N(0, I);
3   foreach (Ti, gi) do
4     vi ← fTi(U(θt + σgi, Ti))
5   end
6   θt+1 ← θt + (β/(σn)) ∑_{i=1}^{n} vigi
7 end
Algorithm 2: Zero-Order ES-MAML (general adaptation operator U(·, T))
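As a minimal sketch of one meta-update of Algorithm 2, the adaptation operator can be passed in as a callable; sample_task, adapt, and the default hyperparameter values below are placeholders for illustration, not the settings used in the paper.

import numpy as np

def es_maml_step(theta, sample_task, adapt, n=20, sigma=0.1, beta=0.01, rng=None):
    # One zero-order ES-MAML meta-update (Algorithm 2).
    # sample_task() returns a task reward function f_T; adapt(theta, f_T) plays the role of U(theta, T).
    rng = rng or np.random.default_rng(0)
    update = np.zeros_like(theta)
    for _ in range(n):
        f_T = sample_task()
        g = rng.standard_normal(theta.size)
        v = f_T(adapt(theta + sigma * g, f_T))  # reward of the adapted, perturbed policy
        update += v * g
    return theta + (beta / (sigma * n)) * update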
Data: initial policy θ0, adaptation step size α, meta step size β, number of queries K
1 for t = 0, 1, . . . do
2   Sample n tasks T1, . . . , Tn and i.i.d. vectors g1, . . . , gn ∼ N(0, I);
3   foreach (Ti, gi) do
4     d(i) ← ESGRAD(fTi, θt + σgi, K, σ);
5     θ(i)t ← θt + σgi + αd(i);
6     vi ← fTi(θ(i)t);
7   end
8   θt+1 ← θt + (β/(σn)) ∑_{i=1}^{n} vigi;
9 end
Algorithm 3: Zero-Order ES-MAML with ES-Gradient Adaptation
The standard adaptation operator U(·, T ) is the one-step task gradient. Since fT is permitted to be nonsmooth in our setting, we use the adaptation operator U(θ, T ) = θ + α∇f̃Tσ (θ) acting on its smoothing. Expanding the definition of J̃σ , the gradient of the smoothed MAML is then given by
∇J̃σ(θ) = (1/σ) ET∼P(T), g∼N(0,I)[ fT( θ + σg + (α/σ) Eh∼N(0,I)[fT(θ + σg + σh)h] ) g ].  (7)
This leads to the algorithm that we specify in Algorithm 3, where the adaptation operator U(·, T ) is itself estimated using the ES gradient in the inner loop.
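As a sketch of this inner loop, the ES-gradient adaptation operator of Algorithm 3 can be written as a thin wrapper around the es_grad sketch from Section 3.1 (names and default values are illustrative):

def es_grad_adapt(theta, f_T, K=20, sigma=0.1, alpha=0.05, rng=None):
    # U(theta, T) = theta + alpha * (ES-gradient of the smoothed task reward),
    # estimated from K perturbations; can be passed as the `adapt` argument of es_maml_step.
    return theta + alpha * es_grad(f_T, theta, K, sigma, rng=rng)

Passing this operator to the es_maml_step sketch above recovers an approximation of Algorithm 3.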
We can also derive an algorithm analogous to PG-MAML by applying a first-order method to the MAML reward ET∼P(T )f̃T (θ + α∇f̃T (θ)) directly, without smoothing. The gradient is given by
∇J(θ) = ET∼P(T) ∇f̃T(θ + α∇f̃T(θ)) (I + α∇2f̃T(θ)),  (8)
which corresponds to equation (3) in (Liu et al., 2019) when expressed in terms of policy gradients. Every term in this expression has a simple Monte Carlo estimator (see Algorithm 4 in the appendix for the MC Hessian estimator). We discuss this algorithm in greater detail in Appendix A.1. This formulation can be viewed as the “MAML of the smoothing”, compared to the “smoothing of the MAML” which is the basis for Algorithm 3. It is the additional smoothing present in equation 6 which eliminates the gradient of U(·, T ) (and hence, the Hessian of fT ). Just as with the Hessian estimation in the original PG-MAML, we find empirically that the MC estimator of the Hessian (Algorithm 4) has high variance, making it often harmful in training. We present some comparisons between Algorithm 3 and Algorithm 5, with and without the Hessian term, in Appendix A.1.2.
Note that when U(·, T ) is estimated, such as in Algorithm 3, the resulting estimator for∇J̃σ will in general be biased. This is similar to the estimator bias which occurs in PG-MAML because we do not have access to the true adapted trajectory distribution. We discuss this further in Appendix A.2.
3.3 IMPROVING THE ADAPTATION OPERATOR WITH ES
Algorithm 2 allows for great flexibility in choosing new adaptation operators. The simplest extension is to modify the ES gradient step: we can draw on general techniques for improving the ES gradient estimator, some of which are described in Appendix A.3. Some other methods are explored below.
3.3.1 IMPROVED EXPLORATION
Instead of using i.i.d Gaussian vectors to estimate the ES gradient in U(·, T ), we consider samples constructed according to Determinantal Point Processes (DPP). DPP sampling (Kulesza & Taskar, 2012; Wachinger & Golland, 2015) is a method of selecting a subset of samples so as to maximize the ‘diversity’ of the subset. It has been applied to ES to select perturbations gi so that the gradient estimator has lower variance (Choromanski et al., 2019a). The sampling matrix determining DPP sampling can also be data-dependent and use information from the meta-training stage to construct a learned kernel with better properties for the adaptation phase. In the experimental section we show that DPP-ES can help in improving adaptation in MAML.
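As a rough illustration of the idea (not the exact DPP sampler of (Choromanski et al., 2019a)), one can greedily pick a diverse subset of perturbations from a larger candidate pool by maximizing the determinant of their Gram matrix; the names and sizes below are our own choices.

import numpy as np

def diverse_perturbations(num_select, dim, num_candidates=200, sigma=1.0, rng=None):
    # Greedy surrogate for DPP sampling: repeatedly add the candidate direction that
    # maximizes the determinant of the kernel matrix of the selected set.
    rng = rng or np.random.default_rng(0)
    pool = rng.standard_normal((num_candidates, dim))
    K = pool @ pool.T  # linear kernel; an RBF or learned, data-dependent kernel could be used instead
    selected = []
    for _ in range(num_select):
        best_j, best_det = None, -np.inf
        for j in range(num_candidates):
            if j in selected:
                continue
            idx = selected + [j]
            det = np.linalg.det(K[np.ix_(idx, idx)])
            if det > best_det:
                best_det, best_j = det, j
        selected.append(best_j)
    return sigma * pool[selected]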
3.3.2 HILL CLIMBING AND POPULATION SEARCH
Nondifferentiable operators U(·, T ) can be also used in Algorithm 2. One particularly interesting example is the local search operator given by U(θ, T ) = argmax{fT (θ′) : ‖θ′ − θ‖ ≤ R}, where R > 0 is the search radius. That is, U(θ, T ) selects the best policy for task T which is in a ‘neighborhood’ of θ. For simplicity, we took the search neighborhood to be the ball B(θ,R) here, but we may also use more general neighborhoods of θ. In general, exactly solving for the maximizer of fT over B(θ,R) is intractable, but local search can often be well approximated by a hill climbing algorithm. Hill climbing creates a population of candidate policies by perturbing the best observed policy (which is initialized to θ), evaluates the reward fT for each candidate, and then updates the best observed policy. This is repeated for several iterations. A key property of this search method is that the progress is monotonic, so the reward of the returned policy U(θ, T ) will always improve over θ. This does not hold for the stochastic gradient operator, and appears to be beneficial on some difficult problems (see Section 4.1). It has been claimed that hill climbing and other genetic algorithms (Moriarty et al., 1999) are competitive with gradient-based methods for solving difficult RL tasks (Such et al., 2017; Risi & Stanley, 2019). Another stochastic algorithm approximating local search is CMA-ES (Hansen et al., 2003; Igel, 2003; Krause et al., 2016), which performs more sophisticated search by adapting the covariance matrix of the perturbations.
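A minimal sketch of such a hill-climbing adaptation operator is shown below; the population size, search radius and iteration count are illustrative values, not the paper's settings.

import numpy as np

def hill_climb_adapt(theta, f_T, iters=5, pop=10, radius=0.05, rng=None):
    # Local-search adaptation operator: perturb the best observed policy, keep the best
    # candidate, and repeat; the returned policy never scores worse than theta on task T.
    rng = rng or np.random.default_rng(0)
    best, best_val = theta, f_T(theta)
    for _ in range(iters):
        cands = best + radius * rng.standard_normal((pop, theta.size))
        vals = np.array([f_T(c) for c in cands])
        if vals.max() > best_val:
            best_val = float(vals.max())
            best = cands[int(vals.argmax())]
    return best

Like the ES-gradient operator, this can be passed as the adapt argument of the es_maml_step sketch above.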
4 EXPERIMENTS
The performance of MAML algorithms can be evaluated in several ways. One important measure is the performance of the final meta-policy: whether the algorithm can consistently produce meta-policies with better adaptation. In the RL setting, the adaptation of the meta-policy is also a function of the number K of queries used: that is, the number of rollouts used by the adaptation operator U(·, T). The meta-learning goal of data efficiency corresponds to adapting with low K. The speed of the meta-training is also important, and can be measured in several ways: the number of meta-policy updates, wall-clock time, and the number of rollouts used for meta-training. In this section, we present experiments which evaluate various aspects of ES-MAML and PG-MAML in terms of data efficiency (K) and meta-training time. Further details of the environments and hyperparameters are given in Appendix A.7.
In the RL setting, the amount of information used drastically decreases if ES methods are applied in comparison to the PG setting. To be precise, ES uses only the cumulative reward over an episode, whereas policy gradients use every state-action pair. Intuitively, we may thus expect that ES should have worse sampling complexity because it uses less information for the same number of rollouts. However, it seems that in practice ES often matches or even exceeds policy gradient approaches (Salimans et al., 2017; Mania et al., 2018). Several explanations have been proposed: in the PG case, especially with algorithms such as PPO, the network must optimize multiple additional surrogate objectives such as entropy bonuses and value functions as well as hyperparameters such as the TD-step number. Furthermore, it has been argued that ES is more robust against delayed rewards, action infrequency, and long time horizons (Salimans et al., 2017). These advantages of ES in traditional RL also transfer to MAML, as we show empirically in this section. ES may lead to additional advantages (even if the number of rollouts needed in training is comparable to that of PG) in terms of wall-clock time, because it does not require backpropagation, and can be parallelized over CPUs.
4.1 EXPLORATION: TARGET ENVIRONMENTS
In this section, we present two experiments on environments with very sparse rewards where the meta-policy must exhibit exploratory behavior to determine the correct adaptation.
The four corners benchmark was introduced in (Rothfuss et al., 2019) to demonstrate the weaknesses of exploration in PG-MAML. An agent on a 2D square receives reward for moving towards a selected corner of the square, but only observes rewards once it is sufficiently close to the target corner, making the reward sparse. An effective exploration strategy for this set of tasks is for the meta-policy θ∗ to travel in circular trajectories to observe which corner produces rewards; however, for a single policy to produce this exploration behavior is difficult. In Figure 1, we demonstrate the behavior of ES-MAML on the four corners problem. When K = 20, the same number of rollouts for adaptation as used in (Rothfuss et al., 2019), the basic version of Algorithm 3 is able to correctly explore and adapt to the task by finding the target corner. Moreover, it does not require any modifications to encourage exploration, unlike PG-MAML. We further used K = 10, 5, which caused the performance to drop. For better performance in this low-information environment, we experimented with two different adaptation operators U(·, T ) in Algorithm 2, which are HC (hill climbing) and DPP-ES. The standard ES gradient is denoted MC.
Furthermore, ES-MAML is not limited to “single goal” exploration. We created a more difficult task, six circles, where the agent continuously accrues negative rewards until it reaches six target points to “deactivate” them. Solving this task requires the agent to explore in circular trajectories, similar to the trajectory used by PG-MAML on the four corners task. We visualize the behavior in Figure 2. Observe that ES-MAML with the HC operator is able to develop a strategy to explore the target locations.
Additional examples on the classic Navigation-2D task are presented in Appendix A.4, highlighting the differences in exploration behavior between PG-MAML and ES-MAML.
4.2 GOOD ADAPTATION WITH COMPACT ARCHITECTURES
One of the main benefits of ES is its ability to train compact linear policies, which can outperform hidden-layer policies. We demonstrate this on several benchmark MAML problems in the HalfCheetah and Ant environments in Figure 3. In contrast, (Finn & Levine, 2018) suggested, both empirically and theoretically, that for PG-MAML training with deeper networks under SGD increases performance. We demonstrate that on the Forward-Backward and Goal-Velocity MAML benchmarks, ES-MAML is consistently able to train successful linear policies faster than deep networks. We also show that, for the Forward-Backward Ant problem, ES-MAML with the new HC operator is the most performant. Using more compact policies also directly speeds up ES-MAML, since fewer perturbations are needed for gradient estimation.
4.3 DETERMINISTIC POLICIES
We find that deterministic policies often produce more stable behaviors than the stochastic ones that are required for PG, where randomized actions in unstable environments can lead to catastrophic outcomes. In PG, this is often mitigated by reducing the entropy bonus, but this has an undesirable side effect of reducing exploration. In contrast, ES-MAML explores in parameter space, which mitigates this issue. To demonstrate this, we use the “Biased-Sensor CartPole” environment from (Yang et al., 2019). This environment has unstable dynamics and sparse rewards, so it requires exploration but is also risky. We see in Figure 4 that ES-MAML is able to stably maintain the maximum reward (500).
We also include results in Figure 4 from two other environments, Swimmer and Walker2d, for which it is known that PG is surprisingly unstable, and ES yields better training (Mania et al., 2018). Notice that we again find linear policies (L) outperforming policies with one (H) or two (HH) hidden layers.
4.4 LOW-K BENCHMARKS
For real-world applications, we may be constrained to use fewer queries K than has typically been demonstrated in previous MAML works. Hence, it is of interest to see how ES-MAML compares to PG-MAML when adapting with very low K.
One possible concern is that low K might harm ES in particular because it uses only the cumulative rewards; if for example K = 5, then the ES adaptation gradient can make use of only 5 values. In comparison, PG-MAML uses K · H state-action pairs, so for K = 5, H = 200, PG-MAML still has 1000 pieces of information available.
However, we find experimentally that the standard ES-MAML (Algorithm 3) remains competitive with PG-MAML even in the low-K setting. In Figure 5, we compare ES-MAML and PG-MAML on the Forward-Backward and Goal-Velocity tasks across four environments (HalfCheetah, Swimmer, Walker2d, Ant) and two model architectures. While PG-MAML can generally outperform ESMAML on the Goal-Velocity task, ES-MAML is similar or better on the Forward-Backward task. Moreover, we observed that for low K, PG-MAML can be highly unstable (note the wide error bars), with some trajectories failing catastrophically, whereas ES-MAML is relatively stable. This is an important consideration in real applications, where the risk of catastrophic failure is undesirable.
5 CONCLUSION
We have presented a new framework for MAML based on ES algorithms. The ES-MAML approach avoids the problems of Hessian estimation which necessitated complicated alterations in PG-MAML and is straightforward to implement. ES-MAML is flexible in the choice of adaptation operators, and can be augmented with general improvements to ES, along with more exotic adaptation operators. In particular, ES-MAML can be paired with nonsmooth adaptation operators such as hill climbing, which we found empirically to yield better exploratory behavior and better performance on sparse-reward environments. ES-MAML performs well with linear or compact deterministic policies, which is an advantage when adapting if the state dynamics are possibly unstable.
A.1 FIRST-ORDER ES-MAML
A.1.1 ALGORITHM
Suppose that we first apply Gaussian smoothing to the task rewards and then form the MAML problem, so we have J(θ) = ET∼P(T )f̃T (U(θ, T )). The function J is then itself differentiable, and we can directly apply first-order methods to it. The classical case where U(θ, T ) = θ + α∇f̃T (θ) yields the gradient
∇J(θ) = ET∼P(T) ∇f̃T(θ + α∇f̃T(θ)) (I + α∇2f̃T(θ)).  (9)
This is analogous to formulas obtained in e.g. (Liu et al., 2019) for the policy gradient MAML. We can then approximate this gradient as an input to stochastic first-order methods. An example with standard SGD is shown in Algorithm 5.
1 ESHess(f, θ, n, σ)   inputs: function f, policy θ, number of perturbations n, precision σ
2   Sample n i.i.d. N(0, I) vectors g1, . . . , gn;
3   v ← (1/n) ∑_{i=1}^{n} f(θ + σgi);
4   H0 ← (1/n) ∑_{i=1}^{n} f(θ + σgi)gigiᵀ;
5   return (1/σ2)(H0 − v · I);
Algorithm 4: Monte Carlo ES Hessian
Data: initial policy θ0, adaptation step size α, meta step size β, number of queries K
1 for t = 0, 1, . . . do
2   Sample n tasks T1, . . . , Tn;
3   foreach Ti do
4     d(i)1 ← ESGRAD(fTi, θt, K, σ);
5     H(i) ← ESHESS(fTi, θt, K, σ);
6     θ(i)t ← θt + α · d(i)1;
7     d(i)2 ← ESGRAD(fTi, θ(i)t, K, σ);
8   end
9   θt+1 ← θt + (β/n) ∑_{i=1}^{n} (I + αH(i)) d(i)2;
10 end
Algorithm 5: First-Order ES-MAML
A central problem, as discussed in (Rothfuss et al., 2019; Liu et al., 2019) is the estimation of ∇2f̃T (θ). However, a simple expression exists for this object in the ES setting; it can be shown that
∇2f̃T(θ) = (1/σ2)(Eh∼N(0,I)[fT(θ + σh)hhᵀ] − f̃T(θ)I).  (10)
Note that for the vector h, hᵀ is the transpose (and unrelated to tasks T). A basic MC estimator is shown in Algorithm 4. Given an independent estimator for ∇f̃T(θ + α∇f̃T(θ)), we can then take the product to obtain an estimator for ∇J.
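As a concrete illustration, minimal numpy sketches of the MC Hessian estimator (Algorithm 4) and of one meta-update of Algorithm 5 follow, reusing the es_grad sketch from Section 3.1; the names and default values are our own, not the paper's implementation.

import numpy as np

def es_hess(f, theta, n, sigma, rng=None):
    # Monte Carlo estimate of the Hessian of the Gaussian smoothing of f (Algorithm 4).
    rng = rng or np.random.default_rng(0)
    d = theta.size
    gs = rng.standard_normal((n, d))
    vals = np.array([f(theta + sigma * g) for g in gs])
    v = vals.mean()
    H0 = np.einsum('i,ij,ik->jk', vals, gs, gs) / n
    return (H0 - v * np.eye(d)) / sigma**2

def fo_es_maml_step(theta, tasks, K=20, sigma=0.1, alpha=0.05, beta=0.01, rng=None):
    # One first-order ES-MAML meta-update (Algorithm 5); `tasks` is the sampled list of
    # task reward functions f_T for this step.
    rng = rng or np.random.default_rng(0)
    d = theta.size
    meta_grad = np.zeros(d)
    for f_T in tasks:
        d1 = es_grad(f_T, theta, K, sigma, rng=rng)
        H = es_hess(f_T, theta, K, sigma, rng=rng)
        d2 = es_grad(f_T, theta + alpha * d1, K, sigma, rng=rng)
        meta_grad += (np.eye(d) + alpha * H) @ d2
    return theta + (beta / len(tasks)) * meta_grad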
A.1.2 EXPERIMENTS WITH FIRST-ORDER ES-MAML
Unlike zero-order ES-MAML (Algorithm 3), the first-order ES-MAML explicitly builds an approximation of the Hessian of fT . Given the literature on PG-MAML, we expect that estimating the Hessian ∇2f̃T (θ) with Algorithm 4 without any control variates may have high variance. We compare two variants of first-order ES-MAML:
1. The full version (FO-Hessian) specified in Algorithm 5.
2. The ‘first-order approximation’ (FO-NoHessian) which ignores the term I+α∇2f̃T (θ) and approximates the MAML gradient as ET∼P(T )∇f̃T (θ + α∇f̃T (θ)). This is equivalent to setting H(i) = 0 in line 5 of Algorithm 5.
The results on the four corner exploration problem (Section 4.1) and the Forward-Backward Ant, using Linear policies, are shown in Figure A1. On Forward-Backward Ant, FO-NoHessian actually outperformed FO-Hessian, so the inclusion of the Hessian term actually slowed convergence. On the four corners task, both FO-Hessian and FO-NoHessian have large error bars, and FO-Hessian slightly outperforms FO-NoHessian.
Figure A1: Comparisons between the FO-Hessian and FO-NoHessian variants of Algorithm 5.
There is conflicting evidence as to whether the same phenomenon occurs with PG-MAML; (Finn et al., 2017, §5.2) found that on supervised learning MAML, omitting Hessian terms is competitive but slightly worse than the full PG-MAML, and does not report comparisons with and without the Hessian on RL MAML. (Rothfuss et al., 2019; Liu et al., 2019) argue for the importance of the second-order terms in proper credit assignment, but use heavily modified estimators (LVC, control variates; see Section 2) in their experiments, so the performance is not directly comparable to the ‘naive’ estimator in Algorithm 4. Our interpretation is that Algorithm 4 has high variance, making the Hessian estimates inaccurate, which can slow training on relatively ‘easier’ tasks like Forward-Backward walking but possibly increase the exploration on four corners.
We also compare FO-NoHessian against Algorithm 3 on Forward-Backward HalfCheetah and Ant in Figure A2. In this experiment, the two methods ran on servers with different numbers of workers available, so we measure the score by the total number of rollouts. We found that FO-NoHessian was slightly faster than Algorithm 3 when measured by rollouts on Ant, but FO-NoHessian had notably poor performance when the number of queries was low (K = 5) on HalfCheetah, and failed to reach similar scores as the others even after running for many more rollouts.
Figure A2: Comparisons between FO-NoHessian and Algorithm 3, by rollouts.
A.2 HANDLING ESTIMATOR BIAS
Since the adapted policy U(θ, T ) generally cannot be evaluated exactly, we cannot easily obtain unbiased estimates of fT (U(θ, T )). This problem arises for both PG-MAML and ES-MAML.
We consider PG-MAML first as an example. In PG-MAML, the adaptation operator is U(θ, T ) = θ+α∇θEτ∼PT (τ |θ)[R(τ)]. In general, we can only obtain an estimate of∇θEτ∼PT (τ |θ)[R(τ)] and not its exact value. However, the MAML gradient is given by
∇θJ(θ) = ET∼P(T)[Eτ′∼PT(τ′|θ′)[∇θ′ logPT(τ′|θ′)R(τ′)∇θU(θ, T)]]  (11)
which requires exact sampling from the adapted trajectories τ ′ ∼ PT (τ ′|U(θ, T )). Since this is a nonlinear function of U(θ, T ), we cannot obtain unbiased estimates of ∇J(θ) by sampling τ ′ generated by an estimate of U(θ, T ).
In the case of ES-MAML, the adaptation operator is U(θ, T) = θ + α∇f̃Tσ(θ) = Eh u(θ, T; h) for h ∼ N(0, I), where u(θ, T; h) = θ + (α/σ) fT(θ + σh)h. Clearly, fT(u(θ, T; h)) is not an unbiased estimator of fT(U(θ, T)).
We may question whether using an unbiased estimator of fT (U(θ, T )) is likely to improve performance. One natural strategy is to reformulate the objective function so as to make the desired estimator unbiased. This happens to be the case for the algorithm E-MAML (Al-Shedivat et al., 2018), which treats the adaptation operator as an explicit function of K sampled trajectories and “moves the expectation outside”. That is, we now have an adaptation operator U(θ, T ; τ1, . . . , τK), and the objective function becomes
ET[Eτ1,...,τK∼PT(τ|θ) fT(U(θ, T; τ1, . . . , τK))]  (12)
An unbiased estimator for the E-MAML gradient can be obtained by sampling only from τ ∼ PT (τ |θ) (Al-Shedivat et al., 2018). However, it has been argued that by doing so, E-MAML does not properly assign credit to the pre-adaptation policy (Rothfuss et al., 2019). Thus, this particular mathematical strategy seems to be disadvantageous for RL.
The problem of finding estimators for function-of-expectations f(EX) is difficult and while general unbiased estimation methods exist (Blanchet et al., 2017), they are often complicated and suffer from high variance. In the context of MAML, ProMP compares the low variance curvature (LVC) estimator (Rothfuss et al., 2019), which is biased, against the unbiased DiCE estimator (Foerster et al., 2018), for the Hessian term in the MAML gradient, and found that the lower variance of LVC produced better performance than DiCE. Alternatively, control variates can be used to reduce the variance of the DiCE estimator, which is the approach followed in (Liu et al., 2019).
In the ES framework, the problem can also be formulated to avoid exactly evaluating U(·, T), hence circumventing the question of estimator bias. We observe an interesting connection between MAML and the stochastic composition problem. Let us define uh(θ, T) = u(θ, T; h) and fTg(θ) = fT(θ + σg). For a given task T, the MAML reward is given by
f̃T(U(θ, T)) = f̃T[Eh uh(θ, T)] = Eg fTg(Eh uh(θ, T)).  (13)
This is a two-layer nested stochastic composition problem with outer function f̃T = Eg fTg and inner function U(·, T) = Eh uh(·, T). An accelerated algorithm (ASC-PG) was developed in (Wang et al., 2017) for this class of problems. While neither fTg nor uh(·, T) is smooth, which is assumed in (Wang et al., 2017), we can verify that the crucial content of the assumptions holds:
1. Eh uh(θ, T) = U(θ, T).
2. We can define two functions
ζTg(θ) = (1/σ) fTg(θ)g,   ξTh(θ) = I + (α/σ2)(fTh(θ)hhᵀ − fTh(θ)I)
such that for any θ1, θ2,
Eg,h[ξTh(θ1)ζTg(θ2)] = JU(θ1, T)∇f̃T(θ2),
where JU denotes the Jacobian of U(·, T), and g, h are independent vectors sampled from N(0, I). This follows immediately from equation 4 and equation 10.
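To make the last step explicit, a short verification in our notation (recall that JU(θ, T) = I + α∇2f̃T(θ) for the smoothed one-step adaptation operator):

\begin{align*}
\mathbb{E}_{h}\big[\xi^{T}_{h}(\theta_1)\big] &= I + \tfrac{\alpha}{\sigma^{2}}\Big(\mathbb{E}_{h}\big[f^{T}(\theta_1+\sigma h)\,hh^{\top}\big] - \tilde{f}^{T}(\theta_1)\,I\Big) = I + \alpha \nabla^{2}\tilde{f}^{T}(\theta_1) = J_U(\theta_1, T), \\
\mathbb{E}_{g}\big[\zeta^{T}_{g}(\theta_2)\big] &= \tfrac{1}{\sigma}\,\mathbb{E}_{g}\big[f^{T}(\theta_2+\sigma g)\,g\big] = \nabla \tilde{f}^{T}(\theta_2),
\end{align*}

and since g and h are independent, the expectation of the product factorizes into the product of the expectations.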
The ASC-PG algorithm does not immediately extend to the full MAML problem, as upon taking an outer expectation over T , the MAML reward J(θ) = ETEgfTg (Ehuh(θ, T )) is no longer a stochastic composition of the required form. In particular, there are conceptual difficulties when the number of tasks in T is infinite. However, it can be used to solve the MAML problem for each task within a consensus framework, such as consensus ADMM (Hong et al., 2016).
A.3 EXTENSIONS OF ES
In this section, we discuss several general techniques for improving the basic ES gradient estimator (Algorithm 1). These can be applied both to the ES gradient of the meta-training (the ‘outer loop’ of Algorithm 3), and more interestingly, to the adaptation operator itself. That is, given U(θ, T ) =
θ + α∇f̃Tσ (θ), we replace the estimation of U by ESGRAD on line 4 of Algorithm 3 with an improved estimator of ∇f̃Tσ (θ), which even may depend on data collected during the meta-training stage. Many techniques exist for reducing the variance of the estimator such as Quasi Monte Carlo sampling (Choromanski et al., 2018). Aside from variance reduction, there are also methods with special properties.
A.3.1 ACTIVE SUBSPACES
Active Subspaces is a method for finding a low-dimensional subspace where the contribution of the gradient is maximized. Conceptually, the goal is to find and update on-the-fly a low-rank subspace L so that the projection ∇fT(θ)L of ∇fT(θ) into L is maximized, and to apply ∇fT(θ)L instead of ∇fT(θ). This should be done in such a way that ∇fT(θ) does not need to be computed explicitly. Optimizing in lower-dimensional subspaces might be computationally more efficient, and can be thought of as an example of guided ES methods, where the algorithm is guided in how to explore the space anisotropically, leveraging the knowledge about the optimization landscape gained in previous steps of optimization. In the context of RL, the active subspace method ASEBO (Choromanski et al., 2019b) was successfully applied to speed up policy training algorithms. This strategy can be made data-dependent also in the MAML context, by learning an optimal subspace using data from the meta-training stage, and sampling from that subspace in the adaptation step.
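As a loose illustration of the idea (in the spirit of guided and active-subspace ES, but not the exact ASEBO procedure), one can mix isotropic noise with noise restricted to the subspace spanned by recent gradient estimates; all names and the mixing constant below are our own choices.

import numpy as np

def subspace_perturbation(recent_grads, dim, mix=0.5, rng=None):
    # Sample a search direction that combines full-space Gaussian noise with noise in the
    # low-rank subspace spanned by recent gradient estimates (list of length-k vectors).
    rng = rng or np.random.default_rng(0)
    Q, _ = np.linalg.qr(np.asarray(recent_grads).T)   # orthonormal basis, shape (dim, k)
    k = Q.shape[1]
    full = rng.standard_normal(dim)
    low = Q @ rng.standard_normal(k)
    return np.sqrt(mix / dim) * full + np.sqrt((1.0 - mix) / k) * low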
A.3.2 REGRESSION-BASED OPTIMIZATION
Regression-Based Optimization (RBO) is an alternative method of gradient estimation. From Taylor series expansion we have f(θ + d) − f(θ) = ∇f(θ)Td + O(‖d‖2). By taking multiple finite difference expressions f(θ + d) − f(θ) for different d, we can recover the gradient by solving a regularized regression problem. The regularization has an additional advantage - it was shown that the gradient can be recovered even if a substantial fraction of the rewards f(θ + d) are corrupted (Choromanski et al., 2019c). Strictly speaking, this is not based on the Gaussian smoothing as in ES, but is another method for estimating gradients using only zero-th order evaluations.
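A minimal sketch of the regression step follows; for simplicity it uses a ridge penalty with a closed-form solve, whereas the robust variant in (Choromanski et al., 2019c) is based on an L1/LP-decoding formulation. All names and default values are our own.

import numpy as np

def rbo_gradient(f, theta, num_samples=100, sigma=0.1, lam=0.01, rng=None):
    # Regression-based gradient estimate from forward finite differences:
    #   min_v  sum_i (f(theta + d_i) - f(theta) - v^T d_i)^2 + lam * ||v||^2
    rng = rng or np.random.default_rng(0)
    d = theta.size
    D = sigma * rng.standard_normal((num_samples, d))   # perturbation directions
    y = np.array([f(theta + D[i]) for i in range(num_samples)]) - f(theta)
    # Closed-form ridge regression solution.
    return np.linalg.solve(D.T @ D + lam * np.eye(d), D.T @ y)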
A.3.3 EXPERIMENTS
We present a preliminary experiment with RBO and ASEBO gradient adaptation in Figure A3. To be precise, the algorithms used are identical to Algorithm 3 except that in line 4, d(i) ← ESGRAD is replaced by d(i) ← RBO (yielding RBO-MAML) and d(i) ← ASEBO (yielding ASEBO-MAML) respectively.
Figure A3: RBO-MAML and ASEBO-MAML compared to ES-MAML.
On the left plot, we test for noise robustness on the Forward-Backward Swimmer MAML task, comparing standard ES-MAML (Algorithm 3) to RBO-MAML. To simulate noisy data, we randomly corrupt 25% of the queries fT (θ + σg) used to estimate the adaptation operator U(θ, T ) with an enormous additive noise. This is the same type of corruption used in (Choromanski et al., 2019c).
Interestingly, RBO does not appear to be more robust against noise than the standard MC estimator, which suggests that the original ES-MAML has some inherent robustness to noise.
On the right plot, we compare ASEBO-MAML to ES-MAML on the Goal-Velocity HalfCheetah task in the low-K setting. We found that when measured in iterations, ASEBO-MAML outperforms ES-MAML. However, ASEBO requires additional linear algebra operations and thus uses significantly more wall-clock time (not shown in plot) per iteration, so if measured by real time, then ES-MAML was more effective.
A.4 NAVIGATION-2D EXPLORATION TASK
Navigation-2D (Finn et al., 2017) is a classic environment where the agent must explore to adapt to the task. The agent is represented by a point on a 2D square, and at each time step, receives reward equal to its distance from a given target point on the square. Note that unlike the four corners and six circles tasks, the reward for Navigation-2D is dense. We visualize the differing exploration strategies learned by PG-MAML and ES-MAML in Figure A4. Notice that PG-MAML makes many tiny movements in multiple directions to ‘triangulate’ the target location using the differences in reward for different state-action pairs. On the other hand, ES-MAML learns a meta-policy such that each perturbation of the meta-policy causes the agent to move in a different direction (represented by red paths), so it can determine the target location from the total rewards of each path.
Figure A4: Comparing the exploration behavior of PG-MAML and ES-MAML on the Navigation2D task. We use K = 20 queries for each algorithm.
A.5 PG-MAML RL BENCHMARKS
In Figure A5, we compare ES-MAML and PG-MAML on the Forward-Backward and Goal-Velocity tasks for HalfCheetah, Swimmer, Walker2d, and Ant, using the same values of K that were used in the original experiments of (Finn et al., 2017).
Figure A5: Comparisons between ES-MAML and PG-MAML using the queries K from (Finn et al., 2017).
A.6 REGRESSION AND SUPERVISED LEARNING
MAML has also been applied to supervised learning. We demonstrate ES-MAML on sine regression (Finn et al., 2017), where the task is to fit a sine curve f with unknown amplitude and phase given a set of K pairs (xi, f(xi)). The meta-policy must be able to learn that all of the tasks have a common periodic nature, so that it can correctly adapt to an unknown sine curve outside of the points xi.
For regression, the loss is the mean-squared error (MSE) between the adapted policy πθ(x) and the true curve f(x). Given data samples {(xi, f(xi))}Ki=1, the empirical loss is L(θ) = (1/K) ∑_{i=1}^{K} (f(xi) − πθ(xi))2. Note that unlike in reinforcement learning, we can exactly compute ∇L(θ); for deep networks, this is by automatic differentiation. Thus, we opt to use Tensorflow to compute the adaptation operator U(θ, T) in Algorithm 3. This is in accordance with the general principle that when gradients are available, it is more efficient to use the gradient than to approximate it by a zero-order method (Nesterov & Spokoiny, 2017).
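To illustrate the inner loop in this setting, here is a minimal sketch of one exact-gradient MSE adaptation step for a simplified linear-in-features regressor; the paper's experiments use a neural network trained with Tensorflow, and `features` is a hypothetical feature map (e.g. random Fourier features), so all names here are our own.

import numpy as np

def adapt_mse(theta, xs, ys, features, alpha=0.01):
    # One gradient step on the empirical MSE for the model pi_theta(x) = theta^T features(x).
    K = len(xs)
    Phi = np.stack([features(x) for x in xs])     # shape (K, d)
    residual = Phi @ theta - np.asarray(ys)       # pi_theta(x_i) - f(x_i)
    grad = (2.0 / K) * Phi.T @ residual           # gradient of the mean-squared error
    return theta - alpha * grad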
We show several results in Figure A6. The adaptation step size is α = 0.01, which is the same as in (Finn et al., 2017). For comparison, (Finn et al., 2017) reports that PG-MAML can obtain a loss of ≈ 0.5 after one adaptation step with K = 5, though it is not specified how many iterations the meta-policy was trained for. ES-MAML approaches the same level of performance, though the number of training iterations required is higher than for the RL tasks, and surprisingly high for what appears to be a simpler problem. This is likely again a reflection of the fact that for problems such as regression where the gradients are available, it is more efficient to use gradients.
Figure A6: The MSE of the adapted policy, for varying number of gradient steps and query number K. Runs are averaged across 3 seeds.
As an aside, this leads to a related question of the correct interpretation of the query number K in the supervised setting. There is a distinction between obtaining a data sample (xi, f(xi)), and doing a computation (such as a gradient) using that sample. If the main bottleneck is collecting the data {(xi, f(xi))}, then we may be satisfied with any algorithm that performs any number of operations on the data, as long as it uses only K samples. On the other hand, in the (on-policy) RL setting, samples cannot typically be ‘re-used’ to the same extent, because rollouts τ sampled with a given policy πθ follow an unknown distribution P(τ|θ) which reduces their usefulness away from θ. Thus, the corresponding notion to rollouts in the SL setting would be the number of backpropagations (for PG-MAML) or perturbations (for ES-MAML), but clearly these have different relative costs than doing simulations in RL.
A.7 HYPERPARAMETERS AND SETUPS
A.7.1 ENVIRONMENTS
Unless otherwise explicitly stated, we default to K = 20 and horizon = 200 for all RL experiments. We also use the standard reward normalization in (Mania et al., 2018), and use a global state normalization (i.e. the same mean, standard deviation normalization values for MDP states are shared across workers).
For the Ant environments (Goal-Position Ant, Forward-Backward Ant), there are significant differences in weighting on the auxiliary rewards such as control costs, contact costs, and survival rewards across different previous work (e.g. those costs are downweighted in (Finn et al., 2017) whereas the coefficients are vanilla Gym weightings in (Liu et al., 2019)). These auxiliary rewards can lead to local minima, such as the agent staying stationary to collect the survival bonus which may be confused with movement progress when presenting a training curve. To make sure the agent is explicitly performing the required task, we opted to remove such costs in our work and only present the main goal-distance cost and forward-movement reward respectively.
For the other environments, we used default weightings and rewards, since they do not change across previous works.
A.7.2 ES-MAML HYPERPARAMETERS
Let N be the number of possible distinct tasks. We sample tasks without replacement, which is important if N ≤ 5, as each worker performs adaptations on all possible tasks. For standard ES-MAML (Algorithm 3), we used the following settings.
Setting                                              Value
(Total Workers, # Perturbations, # Current Evals)    (300, 150, 150)
(Train Set Size, Task Batch Size, Test Set Size)     (50, 5, 5) or (N, N, N)
Number of rollouts per parameter                     1
Number of Perturbations per worker                   1
Outer-Loop Precision Parameter                       0.1
Adaptation Precision Parameter                       0.1
Outer-Loop Step Size                                 0.01
Adaptation Step Size (α)                             0.05
Hidden Layer Width                                   32
ES Estimation Type                                   Forward-FD
Reward Normalization                                 True
State Normalization                                  True
For ES-MAML and PG-MAML, we took 3 seeded runs, using the default TRPO hyperparameters found in (Liu et al., 2019).
1. What are the strengths and weaknesses of the proposed MAML algorithm based on evolutionary strategies?
2. How does the approach compare to other reinforcement learning methods, particularly regarding its stability and efficiency?
3. Are there any concerns or limitations regarding hyperparameter tuning and task complexity?
4. Is there a discussion missing about potential failure cases and limitations of the method?
5. Are there any questions regarding the architecture used in section 4.4?
Review
Note: I was asked to write a last-minute review for this paper since the overall ratings of the other reviews are not consistent. Therefore, the review is rather brief and I will also comment on concerns raised by the other reviewers.
The paper introduces a new MAML algorithm based on evolutionary strategies (ES) for reinforcement learning tasks. Compared to prior MAML algorithms requiring an estimation of the Hessian, ES-MAML is demonstrated to be more stable and efficient. Overall, the paper is well motivated, well written and uses a sound mathematical formulation of the solution approach. Furthermore, the results are convincing and show quite some promise.
Concerning the remarks from Reviewer #3, I believe that it is totally fair to use here a simple ES algorithm that still shows reasonable performance. Of course, we would expect that other ES algorithms might perform better, but this is clearly not the point of the paper. Furthermore, other papers [1,2] also showed that very simple ES algorithms can perform very well on weight optimization of policies.
(Remark: since there is no page limit for refs, I would recommend to cite [1,2] in the paper)
I share some concerns from Reviewer #4 regarding the hyperparameters. By now, it is well known that hyperparameter tuning can improve the performance of RL algorithms quite a bit and is sometimes even the main factor for superior performance. The authors wrote in their reply to Reviewer #4: "In fact, we did not perform much tuning". I would like to reply: In fact, this is not a very useful answer. If there was hyperparameter tuning involved, the amount has to be quantified (in the appendix) and the same amount should be applied to all approaches being compared in the paper.
Furthermore, I missed a discussion about the limitations of the approach. For example, I would expect that the approach will fail if the networks get too large (and thus the parameter space is too large (>1Mio Parameters?)) and the task is fairly complicated such that the parameter space is not too redundant. I think there is a reason why people tried to use ES for optimizing DNNs for decades, but failed, and now nearly everyone uses GD variants. So, the authors should be more explicit about potential failure cases and limitations.
Small remark: I haven’t found a description of the architectures used in Section 4.4. Since the paper should be self-contained, I would recommend to briefly make this explicit in the appendix.
[1] Patryk Chrabaszcz, Ilya Loshchilov, Frank Hutter: Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari. IJCAI 2018: 1419-1426
[2] Lior Fuks, Noor Awad, Frank Hutter, Marius Lindauer: An Evolution Strategy with Progressive Episode Lengths for Playing Games. IJCAI 2019: 1234-1240
ICLR | Title
ES-MAML: Simple Hessian-Free Meta Learning
Abstract
We introduce ES-MAML, a new framework for solving the model agnostic meta learning (MAML) problem based on Evolution Strategies (ES). Existing algorithms for MAML are based on policy gradients, and incur significant difficulties when attempting to estimate second derivatives using backpropagation on stochastic policies. We show how ES can be applied to MAML to obtain an algorithm which avoids the problem of estimating second derivatives, and is also conceptually simple and easy to implement. Moreover, ES-MAML can handle new types of non-smooth adaptation operators, and other techniques for improving performance and estimation of ES methods become applicable. We show empirically that ES-MAML is competitive with existing methods and often yields better adaptation with fewer queries.
1 INTRODUCTION
Meta-learning is a paradigm in machine learning that aims to develop models and training algorithms which can quickly adapt to new tasks and data. Our focus in this paper is on meta-learning in reinforcement learning (RL), where data efficiency is of paramount importance because gathering new samples often requires costly simulations or interactions with the real world. A popular technique for RL meta-learning is Model Agnostic Meta Learning (MAML) (Finn et al., 2017; 2018), a model for training an agent which can quickly adapt to new and unknown tasks by performing one (or a few) gradient updates in the new environment. We provide a formal description of MAML in Section 2.
MAML has proven to be successful for many applications. However, implementing and running MAML continues to be challenging. One major complication is that the standard version of MAML requires estimating second derivatives of the RL reward function, which is difficult when using backpropagation on stochastic policies; indeed, the original implementation of MAML (Finn et al., 2017) did so incorrectly, which spurred the development of unbiased higher-order estimators (DiCE, (Foerster et al., 2018)) and further analysis of the credit assignment mechanism in MAML (Rothfuss et al., 2019). Another challenge arises from the high variance inherent in policy gradient methods, which can be ameliorated through control variates such as in T-MAML (Liu et al., 2019), through careful adaptive hyperparameter tuning (Behl et al., 2019; Antoniou et al., 2019) and learning rate annealing (Loshchilov & Hutter, 2017).
To avoid these issues, we propose an alternative approach to MAML based on Evolution Strategies (ES), as opposed to the policy gradient underlying previous MAML algorithms. We provide a detailed discussion of ES in Section 3.1. ES has several advantages:
∗Equal contribution. †Work performed during Google internship. ‡Work performed during the Google AI Residency Program. http://g.co/airesidency
1. Our zero-order formulation of ES-MAML (Section 3.2, Algorithm 3) does not require estimating any second derivatives. This dodges the many issues caused by estimating second derivatives with backpropagation on stochastic policies (see Section 2 for details).
2. ES is conceptually much simpler than policy gradients, which also translates to ease of implementation. It does not use backpropagation, so it can be run on CPUs only.
3. ES is highly flexible with different adaptation operators (Section 3.3).
4. ES allows us to use deterministic policies, which can be safer when doing adaptation (Section 4.3). ES is also capable of learning linear and other compact policies (Section 4.2).
On the point (4), a feature of ES algorithms is that exploration takes place in the parameter space. Whereas policy gradient methods are primarily motivated by interactions with the environment through randomized actions, ES is driven by optimization in high-dimensional parameter spaces with an expensive querying model. In the context of MAML, the notions of “exploration” and “task identification” have thus been shifted to the parameter space instead of the action space. This distinction plays a key role in the stability of the algorithm. One immediate implication is that we can use deterministic policies, unlike policy gradients which is based on stochastic policies. Another difference is that ES uses only the total reward and not the individual state-action pairs within each episode. While this may appear to be a weakness, since less information is being used, we find in practice that it seems to lead to more stable training profiles.
This paper is organized as follows. In Section 2, we give a formal definition of MAML, and discuss related works. In Section 3, we introduce Evolutionary Strategies and show how ES can be applied to create a new framework for MAML. In Section 4, we present numerical experiments, highlighting the topics of exploration (Section 4.1), the utility of compact architectures (Section 4.2), the stability of deterministic policies (Section 4.3), and comparisons against existing MAML algorithms in the few-shot regime (Section 4.4). Additional material can be found in the Appendix.
2 MODEL AGNOSTIC META LEARNING IN RL
We first discuss the original formulation of MAML (Finn et al., 2017). Let T be a set of reinforcement learning tasks with common state and action spaces S,A, and P(T ) a distribution over T . In the standard MAML setting, each task Ti ∈ T has an associated Markov Decision Process (MDP) with transition distribution qi(st+1|st, at), an episode length H , and a reward function RTi which maps a trajectory τ = (s0, a1, ..., aH−1, sH) to the total reward R(τ). A stochastic policy is a function π : S → P(A) which maps states to probability distributions over the action space. A deterministic policy is a function π : S → A. Policies are typically encoded by a neural network with parameters θ, and we often refer to the policy πθ simply by θ.
The MAML problem is to find the so-called MAML point (called also a meta-policy), which is a policy θ∗ that can be ‘adapted’ quickly to solve an unknown task T ∈ T by taking a (few)1 policy gradient steps with respect to T . The optimization problem to be solved in training (in its one-shot version) is thus of the form:
max θ J(θ) := ET∼P(T )[Eτ ′∼PT (τ ′|θ′)[RT (τ
′)]], (1)
where: θ′ = U(θ, T ) = θ + α∇θEτ∼PT (τ |θ)[RT (τ)] is called the adapted policy for a step size α > 0 and PT (·|η) is a distribution over trajectories given task T ∈ T and conditioned on the policy parameterized by η.
Standard MAML approaches are based on the following expression for the gradient of the MAML objective function (1) to conduct training:
∇θJ(θ) = ET∼P(T )[Er′∼PT (τ ′|θ′)[∇θ′ logPT (τ ′|θ′)RT (τ ′)∇θU(θ, T )]]. (2)
We collectively refer to algorithms based on computing (2) using policy gradients as PG-MAML.
1We adopt the common convention of defining the adaptation operator with a single gradient step, to simplify notation. It can be extended to multiple steps.
Since the adaptation operator U(θ, T ) contains the policy gradient ∇θEτ∼PT (τ |θ)[R(τ)], its own gradient∇θU(θ, T ) is second-order in θ:
∇θU = I+α ∫ PT (τ |θ)∇2θ log πθ(τ)RT (τ)dτ+α ∫ PT (τ |θ)∇θ log πθ(τ)∇θ log πθ(τ)TRT (τ)dτ.
(3) Correctly computing the gradient (2) with the term (3) using automatic differentiation is known to be tricky. Multiple authors (Foerster et al., 2018; Rothfuss et al., 2019; Liu et al., 2019) have pointed out that the original implementation of MAML incorrectly estimates the term (3), which inadvertently causes the training to lose ‘pre-adaptation credit assignment’. Moreover, even when correctly implemented, the variance when estimating (3) can be extremely high, which impedes training. To improve on this, extensions to the original MAML include ProMP (Rothfuss et al., 2019), which introduces a new low-variance curvature (LVC) estimator for the Hessian, and T-MAML (Liu et al., 2019), which adds control variates to reduce the variance of the unbiased DiCE estimator (Foerster et al., 2018). However, these are not without their drawbacks: the proposed solutions are complicated, the variance of the Hessian estimate remains problematic, and LVC introduces unknown estimator bias.
Another issue that arises in PG-MAML is that policies are necessarily stochastic. However, randomized actions can lead to risky exploration behavior when computing the adaptation, especially for robotics applications where the collection of tasks may involve differing system dynamics as opposed to only differing rewards (Yang et al., 2019). We explore this further in Section 4.3.
These issues: the difficulty of estimating the Hessian term (3), the typically high variance of∇θJ(θ) for policy gradient algorithms in general, and the unsuitability of stochastic policies in some domains, lead us to the proposed method ES-MAML in Section 3.
Aside from policy gradients, there have also been biologically-inspired algorithms for MAML, based on concepts such as the Baldwin effect (Fernando et al., 2018). However, we note that despite the similar naming, methods such as ‘Evolvability ES’ (Gajewski et al., 2019) bear little resemblance to our proposed ES-MAML. The problem solved by our algorithm is the standard MAML, whereas (Gajewski et al., 2019) aims to maximize loosely related notions of the diversity of behavioral characteristics. Moreover, ES-MAML and its extensions we consider are all derived notions such as smoothings and approximations, with rigorous mathematical definitions as stated below.
3 ES-MAML ALGORITHMS
Formulating MAML with ES allows us to employ numerous techniques originally developed for enhancing ES, to MAML. We aim to improve both phases of MAML algorithm: the meta-learning training algorithm, and the efficiency of the adaptation operator.
3.1 EVOLUTION STRATEGIES METHODS (ES)
Evolution Strategies (ES) (Wierstra et al., 2008; 2014), which recently became popular for RL (Salimans et al., 2017), rely on optimizing the smoothing of the blackbox function f : Rd → R, which takes as input parameters θ ∈ Rd of the policy and outputs total discounted (expected) reward obtained by an agent applying that policy in the given environment. Instead of optimizing the function f directly, we optimize a smoothed objective. We define the Gaussian smoothing of F as f̃σ(θ) = Eg∼N (0,Id)[f(θ + σg)]. The gradient of this smoothed objective, sometimes called an ES-gradient, is given as (see: (Nesterov & Spokoiny, 2017)):
∇θf̃σ(θ) = 1
σ Eg∼N (0,Id)[f(θ + σg)g]. (4)
Note that the gradient can be approximated via Monte Carlo (MC) samples:
In ES literature the above algorithm is often modified by adding control variates to equation 4 to obtain other unbiased estimators with reduced variance. The forward finite difference (Forward-FD) estimator (Choromanski et al., 2018) is given by subtracting the current policy value f(θ), yielding ∇θf̃σ(θ) = 1σEg∼N (0,Id)[(f(θ + σg) − f(θ))g]. The antithetic estimator (Nesterov & Spokoiny, 2017; Mania et al., 2018) is given by the symmetric difference ∇θf̃σ(θ) = 12σEg∼N (0,Id)[(f(θ +
1 ESGrad (f, θ, n, σ) inputs: function f , policy θ, number of perturbations n, precision σ 2 Sample n i.i.d N(0, I) vectors g1, . . . , gn; 3 return 1nσ ∑n i=1 f(θ + σgi)gi;
Algorithm 1: Monte Carlo ES Gradient
σg) − f(θ − σg))g]. Notice that the variance of the Forward-FD and antithetic estimators is translation-invariant with respect to f . In practice, the Forward-FD or antithetic estimator is usually preferred over the basic version expressed in equation 4.
In the next sections we will refer to Algorithm 1 for computing the gradient though we emphasize that there are several other recently developed variants of computing ES-gradients as well as applying them for optimization. We describe some of these variants in Section 3.3 and appendix A.3. A key feature of ES-MAML is that we can directly make use of new enhancements of ES.
3.2 META-TRAINING MAML WITH ES
To formulate MAML in the ES framework, we take a more abstract viewpoint. For each task T ∈ T , let fT (θ) be the (expected) cumulative reward of the policy θ. We treat fT as a blackbox, and make no assumptions on its structure (so the task need not even be MDP, and fT may be nonsmooth). The MAML problem is then
max θ J(θ) := ET∼P(T )fT (U(θ, T )). (5)
As argued in (Liu et al., 2019; Rothfuss et al., 2019) (see also Section 2), a major challenge for policy gradient MAML is estimating the Hessian, which is both conceptually subtle and difficult to correctly implement using automatic differentiation. The algorithm we propose obviates the need to calculate any second derivatives, and thus avoids this issue.
Suppose that we can evaluate (or approximate) fT (θ) and U(θ, T ), but fT and U(·, T ) may be nonsmooth or their gradients may be intractable. We consider the Gaussian smoothing J̃σ of the MAML reward (5), and optimize J̃σ using ES methods. The gradient∇J̃σ(θ) is given by
∇J̃σ(θ) = E T∼P(T ) g∼N (0,I)
[ 1
σ fT (U(θ + σg, T ))g
] (6)
and can be estimated by jointly sampling over (T,g) and evaluating fT (U(θ + σg, T )). This algorithm is specified in Algorithm 2 box, and we refer to it as (zero-order) ES-MAML.
Data: initial policy θ0, meta step size β 1 for t = 0, 1, . . . do 2 Sample n tasks T1, . . . , Tn and iid vectors g1, . . . ,gn ∼ N (0, I); 3 foreach (Ti,gi) do 4 vi ← fTi(U(θt + σgi, Ti)) 5 end 6 θt+1 ← θt + βσn ∑n i=1 vigi 7 end Algorithm 2: Zero-Order ES-MAML (general adaptation operator U(·, T ))
Algorithm 3: Zero-Order ES-MAML with ES-Gradient Adaptation
  Data: initial policy θ0, adaptation step size α, meta step size β, number of queries K
  1. for t = 0, 1, . . . do
  2.   Sample n tasks T1, . . . , Tn and i.i.d. vectors g1, . . . , gn ∼ N(0, I);
  3.   foreach (Ti, gi) do
  4.     d(i) ← ESGRAD(fTi, θt + σgi, K, σ);
  5.     θt(i) ← θt + σgi + α d(i);
  6.     vi ← fTi(θt(i));
  7.   end
  8.   θt+1 ← θt + (β/(σn)) Σ_{i=1}^n vi gi;
  9. end
The standard adaptation operator U(·, T ) is the one-step task gradient. Since fT is permitted to be nonsmooth in our setting, we use the adaptation operator U(θ, T ) = θ + α∇f̃Tσ (θ) acting on its smoothing. Expanding the definition of J̃σ , the gradient of the smoothed MAML is then given by
∇J̃σ(θ) = (1/σ) E_{T∼P(T), g∼N(0,I)} [ fT( θ + σg + (α/σ) E_{h∼N(0,I)}[fT(θ + σg + σh) h] ) g ].   (7)
This leads to the algorithm that we specify in Algorithm 3, where the adaptation operator U(·, T ) is itself estimated using the ES gradient in the inner loop.
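To make the structure of Algorithm 3 concrete, here is a minimal NumPy sketch of one meta-update; it reuses the es_grad helper from the earlier sketch and assumes a user-supplied sample_tasks(n) returning n blackbox task rewards fT. Both names are illustrative placeholders rather than parts of the original implementation, which is distributed over many workers:

```python
import numpy as np

def es_maml_step(theta, sample_tasks, n, K, sigma, alpha, beta):
    """One meta-update of zero-order ES-MAML with ES-gradient adaptation (sketch of Algorithm 3)."""
    d = theta.shape[0]
    tasks = sample_tasks(n)                      # T_1, ..., T_n
    gs = np.random.randn(n, d)                   # g_1, ..., g_n ~ N(0, I)
    vs = np.zeros(n)
    for i, (f_T, g) in enumerate(zip(tasks, gs)):
        perturbed = theta + sigma * g
        adapt_dir = es_grad(f_T, perturbed, K, sigma)   # inner-loop ES gradient with K queries
        vs[i] = f_T(perturbed + alpha * adapt_dir)      # reward of the adapted policy
    # outer ES step on the smoothed MAML objective
    return theta + (beta / (sigma * n)) * (gs.T @ vs)
```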
We can also derive an algorithm analogous to PG-MAML by applying a first-order method to the MAML reward ET∼P(T )f̃T (θ + α∇f̃T (θ)) directly, without smoothing. The gradient is given by
∇J(θ) = ET∼P(T )∇f̃T (θ + α∇f̃T (θ))(I+ α∇2f̃T (θ)), (8)
which corresponds to equation (3) in (Liu et al., 2019) when expressed in terms of policy gradients. Every term in this expression has a simple Monte Carlo estimator (see Algorithm 4 in the appendix for the MC Hessian estimator). We discuss this algorithm in greater detail in Appendix A.1. This formulation can be viewed as the “MAML of the smoothing”, compared to the “smoothing of the MAML” which is the basis for Algorithm 3. It is the additional smoothing present in equation 6 which eliminates the gradient of U(·, T ) (and hence, the Hessian of fT ). Just as with the Hessian estimation in the original PG-MAML, we find empirically that the MC estimator of the Hessian (Algorithm 4) has high variance, making it often harmful in training. We present some comparisons between Algorithm 3 and Algorithm 5, with and without the Hessian term, in Appendix A.1.2.
Note that when U(·, T ) is estimated, such as in Algorithm 3, the resulting estimator for∇J̃σ will in general be biased. This is similar to the estimator bias which occurs in PG-MAML because we do not have access to the true adapted trajectory distribution. We discuss this further in Appendix A.2.
3.3 IMPROVING THE ADAPTATION OPERATOR WITH ES
Algorithm 2 allows for great flexibility in choosing new adaptation operators. The simplest extension is to modify the ES gradient step: we can draw on general techniques for improving the ES gradient estimator, some of which are described in Appendix A.3. Some other methods are explored below.
3.3.1 IMPROVED EXPLORATION
Instead of using i.i.d Gaussian vectors to estimate the ES gradient in U(·, T ), we consider samples constructed according to Determinantal Point Processes (DPP). DPP sampling (Kulesza & Taskar, 2012; Wachinger & Golland, 2015) is a method of selecting a subset of samples so as to maximize the ‘diversity’ of the subset. It has been applied to ES to select perturbations gi so that the gradient estimator has lower variance (Choromanski et al., 2019a). The sampling matrix determining DPP sampling can also be data-dependent and use information from the meta-training stage to construct a learned kernel with better properties for the adaptation phase. In the experimental section we show that DPP-ES can help in improving adaptation in MAML.
3.3.2 HILL CLIMBING AND POPULATION SEARCH
Nondifferentiable operators U(·, T) can also be used in Algorithm 2. One particularly interesting example is the local search operator given by U(θ, T) = argmax{fT(θ′) : ‖θ′ − θ‖ ≤ R}, where R > 0 is the search radius. That is, U(θ, T) selects the best policy for task T which is in a 'neighborhood' of θ. For simplicity, we took the search neighborhood to be the ball B(θ, R) here, but we may also use more general neighborhoods of θ. In general, exactly solving for the maximizer of fT over B(θ, R) is intractable, but local search can often be well approximated by a hill climbing algorithm. Hill climbing creates a population of candidate policies by perturbing the best observed policy (which is initialized to θ), evaluates the reward fT for each candidate, and then updates the best observed policy. This is repeated for several iterations. A key property of this search method is that the progress is monotonic, so the reward of the returned policy U(θ, T) will always improve over θ. This does not hold for the stochastic gradient operator, and appears to be beneficial on some difficult problems (see Section 4.1). It has been claimed that hill climbing and other genetic algorithms (Moriarty et al., 1999) are competitive with gradient-based methods for solving difficult RL tasks (Such et al., 2017; Risi & Stanley, 2019). Another stochastic algorithm approximating local search is CMA-ES (Hansen et al., 2003; Igel, 2003; Krause et al., 2016), which performs more sophisticated search by adapting the covariance matrix of the perturbations.
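As an illustration of such a nondifferentiable adaptation operator, the following is a minimal hill-climbing sketch approximating U(θ, T); the population size, number of iterations, and perturbation scale are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def hill_climb_adapt(f_T, theta, iters=5, pop_size=4, scale=0.05):
    """Approximate U(theta, T) = argmax of f_T over a neighborhood of theta by hill climbing.

    Progress is monotonic: the returned policy never scores worse than theta on f_T.
    """
    best_theta, best_reward = theta, f_T(theta)
    for _ in range(iters):
        # perturb the best policy found so far to create a candidate population
        candidates = best_theta + scale * np.random.randn(pop_size, theta.shape[0])
        for cand in candidates:
            r = f_T(cand)
            if r > best_reward:
                best_theta, best_reward = cand, r
    return best_theta
```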
4 EXPERIMENTS
The performance of MAML algorithms can be evaluated in several ways. One important measure is the performance of the final meta-policy: whether the algorithm can consistently produce meta-policies with better adaptation. In the RL setting, the adaptation of the meta-policy is also a function of the number K of queries used: that is, the number of rollouts used by the adaptation operator U(·, T). The meta-learning goal of data efficiency corresponds to adapting with low K. The speed of the meta-training is also important, and can be measured in several ways: the number of meta-policy updates, wall-clock time, and the number of rollouts used for meta-training. In this section, we present experiments which evaluate various aspects of ES-MAML and PG-MAML in terms of data efficiency (K) and meta-training time. Further details of the environments and hyperparameters are given in Appendix A.7.
In the RL setting, the amount of information used decreases drastically when ES methods are applied, in comparison to the PG setting. To be precise, ES uses only the cumulative reward over an episode, whereas policy gradients use every state-action pair. Intuitively, we may thus expect that ES should have worse sampling complexity because it uses less information for the same number of rollouts. However, it seems that in practice ES often matches or even exceeds policy gradient approaches (Salimans et al., 2017; Mania et al., 2018). Several explanations have been proposed: in the PG case, especially with algorithms such as PPO, the network must optimize multiple additional surrogate objectives such as entropy bonuses and value functions, as well as hyperparameters such as the TD-step number. Furthermore, it has been argued that ES is more robust against delayed rewards, action infrequency, and long time horizons (Salimans et al., 2017). These advantages of ES in traditional RL also transfer to MAML, as we show empirically in this section. ES may lead to additional advantages (even if the number of rollouts needed in training is comparable with PG ones) in terms of wall-clock time, because it does not require backpropagation, and can be parallelized over CPUs.
4.1 EXPLORATION: TARGET ENVIRONMENTS
In this section, we present two experiments on environments with very sparse rewards where the meta-policy must exhibit exploratory behavior to determine the correct adaptation.
The four corners benchmark was introduced in (Rothfuss et al., 2019) to demonstrate the weaknesses of exploration in PG-MAML. An agent on a 2D square receives reward for moving towards a selected corner of the square, but only observes rewards once it is sufficiently close to the target corner, making the reward sparse. An effective exploration strategy for this set of tasks is for the meta-policy θ∗ to travel in circular trajectories to observe which corner produces rewards; however, for a single policy to produce this exploration behavior is difficult. In Figure 1, we demonstrate the behavior of ES-MAML on the four corners problem. When K = 20, the same number of rollouts for adaptation as used in (Rothfuss et al., 2019), the basic version of Algorithm 3 is able to correctly explore and adapt to the task by finding the target corner. Moreover, it does not require any modifications to encourage exploration, unlike PG-MAML. We further used K = 10, 5, which caused the performance to drop. For better performance in this low-information environment, we experimented with two different adaptation operators U(·, T ) in Algorithm 2, which are HC (hill climbing) and DPP-ES. The standard ES gradient is denoted MC.
Furthermore, ES-MAML is not limited to “single goal” exploration. We created a more difficult task, six circles, where the agent continuously accrues negative rewards until it reaches six target points to “deactivate” them. Solving this task requires the agent to explore in circular trajectories, similar to the trajectory used by PG-MAML on the four corners task. We visualize the behavior in Figure 2. Observe that ES-MAML with the HC operator is able to develop a strategy to explore the target locations.
Additional examples on the classic Navigation-2D task are presented in Appendix A.4, highlighting the differences in exploration behavior between PG-MAML and ES-MAML.
4.2 GOOD ADAPTATION WITH COMPACT ARCHITECTURES
One of the main benefits of ES is its ability to train compact linear policies, which can outperform hidden-layer policies. We demonstrate this on several benchmark MAML problems in the HalfCheetah and Ant environments in Figure 3. In contrast, (Finn & Levine, 2018) suggested, both empirically and theoretically, that for PG-MAML training with deeper layers under SGD increases performance. We demonstrate that on the Forward-Backward and Goal-Velocity MAML benchmarks, ES-MAML is consistently able to train successful linear policies faster than deep networks. We also show that, for the Forward-Backward Ant problem, ES-MAML with the new HC operator is the most performant. Using more compact policies also directly speeds up ES-MAML, since fewer perturbations are needed for gradient estimation.
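For reference, such a compact policy amounts to a single weight matrix (plus bias) mapping states to actions; a minimal sketch of a deterministic linear policy operating on the flat parameter vector used by ES is shown below (the action clipping range is an illustrative assumption):

```python
import numpy as np

class LinearPolicy:
    """Deterministic linear policy: action = clip(W s + b)."""

    def __init__(self, state_dim, action_dim):
        self.state_dim, self.action_dim = state_dim, action_dim

    def num_params(self):
        return self.action_dim * (self.state_dim + 1)

    def act(self, theta, state):
        # unpack the flat ES parameter vector into W and b
        W = theta[: self.action_dim * self.state_dim].reshape(self.action_dim, self.state_dim)
        b = theta[self.action_dim * self.state_dim:]
        return np.clip(W @ state + b, -1.0, 1.0)
```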
4.3 DETERMINISTIC POLICIES
We find that deterministic policies often produce more stable behaviors than the stochastic ones that are required for PG, where randomized actions in unstable environments can lead to catastrophic outcomes. In PG, this is often mitigated by reducing the entropy bonus, but this has an undesirable side effect of reducing exploration. In contrast, ES-MAML explores in parameter space, which mitigates this issue. To demonstrate this, we use the “Biased-Sensor CartPole” environment from (Yang et al., 2019). This environment has unstable dynamics and sparse rewards, so it requires exploration but is also risky. We see in Figure 4 that ES-MAML is able to stably maintain the maximum reward (500).
We also include results in Figure 4 from two other environments, Swimmer and Walker2d, for which it is known that PG is surprisingly unstable, and ES yields better training (Mania et al., 2018). Notice that we again find linear policies (L) outperforming policies with one (H) or two (HH) hidden layers.
4.4 LOW-K BENCHMARKS
For real-world applications, we may be constrained to use fewer queries K than has typically been demonstrated in previous MAML works. Hence, it is of interest to see how ES-MAML compares to PG-MAML when adapting with very low K.
One possible concern is that low K might harm ES in particular because it uses only the cumulative rewards; if for example K = 5, then the ES adaptation gradient can make use of only 5 values. In comparison, PG-MAML uses K · H state-action pairs, so for K = 5, H = 200, PG-MAML still has 1000 pieces of information available.
However, we find experimentally that the standard ES-MAML (Algorithm 3) remains competitive with PG-MAML even in the low-K setting. In Figure 5, we compare ES-MAML and PG-MAML on the Forward-Backward and Goal-Velocity tasks across four environments (HalfCheetah, Swimmer, Walker2d, Ant) and two model architectures. While PG-MAML can generally outperform ES-MAML on the Goal-Velocity task, ES-MAML is similar or better on the Forward-Backward task. Moreover, we observed that for low K, PG-MAML can be highly unstable (note the wide error bars), with some trajectories failing catastrophically, whereas ES-MAML is relatively stable. This is an important consideration in real applications, where the risk of catastrophic failure is undesirable.
5 CONCLUSION
We have presented a new framework for MAML based on ES algorithms. The ES-MAML approach avoids the problems of Hessian estimation which necessitated complicated alterations in PG-MAML and is straightforward to implement. ES-MAML is flexible in the choice of adaptation operators, and can be augmented with general improvements to ES, along with more exotic adaptation operators. In particular, ES-MAML can be paired with nonsmooth adaptation operators such as hill climbing, which we found empirically to yield better exploratory behavior and better performance on sparse-reward environments. ES-MAML performs well with linear or compact deterministic policies, which is an advantage when adapting if the state dynamics are possibly unstable.
A.1 FIRST-ORDER ES-MAML
A.1.1 ALGORITHM
Suppose that we first apply Gaussian smoothing to the task rewards and then form the MAML problem, so we have J(θ) = ET∼P(T )f̃T (U(θ, T )). The function J is then itself differentiable, and we can directly apply first-order methods to it. The classical case where U(θ, T ) = θ + α∇f̃T (θ) yields the gradient
∇J(θ) = ET∼P(T) ∇f̃T(θ + α∇f̃T(θ)) (I + α∇²f̃T(θ)).   (9)

This is analogous to formulas obtained in e.g. (Liu et al., 2019) for the policy gradient MAML. We can then approximate this gradient as an input to stochastic first-order methods. An example with standard SGD is shown in Algorithm 5.
Algorithm 4: Monte Carlo ES Hessian
  ESHess(f, θ, n, σ)
  inputs: function f, policy θ, number of perturbations n, precision σ
  1. Sample i.i.d. N(0, I) vectors g1, . . . , gn;
  2. v ← (1/n) Σ_{i=1}^n f(θ + σgi);
  3. H0 ← (1/n) Σ_{i=1}^n f(θ + σgi) gi giᵀ;
  4. return (1/σ²)(H0 − v · I);
Algorithm 5: First-Order ES-MAML
  Data: initial policy θ0, adaptation step size α, meta step size β, number of queries K
  1. for t = 0, 1, . . . do
  2.   Sample n tasks T1, . . . , Tn;
  3.   foreach Ti do
  4.     d1(i) ← ESGRAD(fTi, θt, K, σ);
  5.     H(i) ← ESHESS(fTi, θt, K, σ);
  6.     θt(i) ← θt + α · d1(i);
  7.     d2(i) ← ESGRAD(fTi, θt(i), K, σ);
  8.   end
  9.   θt+1 ← θt + (β/n) Σ_{i=1}^n (I + αH(i)) d2(i);
  10. end
A central problem, as discussed in (Rothfuss et al., 2019; Liu et al., 2019), is the estimation of ∇²f̃T(θ). However, a simple expression exists for this object in the ES setting; it can be shown that

∇²f̃T(θ) = (1/σ²) (Eh∼N(0,I)[fT(θ + σh) h hᵀ] − f̃T(θ) I).   (10)

Note that for the vector h, hᵀ is the transpose (and unrelated to tasks T). A basic MC estimator is shown in Algorithm 4. Given an independent estimator for ∇f̃T(θ + α∇f̃T(θ)), we can then take the product to obtain an estimator for ∇J.
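A minimal NumPy sketch of this MC Hessian estimator, together with the per-task term of the first-order ES-MAML update in Algorithm 5, might look as follows; it reuses the es_grad helper from the earlier sketch and is only an illustration of equation 10, not the paper's implementation:

```python
import numpy as np

def es_hess(f, theta, n, sigma):
    """Monte Carlo estimate of the Hessian of the Gaussian smoothing of f at theta (eq. 10)."""
    d = theta.shape[0]
    gs = np.random.randn(n, d)
    rewards = np.array([f(theta + sigma * g) for g in gs])
    v = rewards.mean()                                   # estimate of the smoothed value
    H0 = (gs * rewards[:, None]).T @ gs / n              # (1/n) sum_i f(theta + sigma g_i) g_i g_i^T
    return (H0 - v * np.eye(d)) / sigma**2

def fo_maml_task_grad(f_T, theta, K, sigma, alpha):
    """Per-task term of the first-order ES-MAML gradient (sketch of one inner step of Algorithm 5)."""
    d1 = es_grad(f_T, theta, K, sigma)                   # inner adaptation direction
    H = es_hess(f_T, theta, K, sigma)
    d2 = es_grad(f_T, theta + alpha * d1, K, sigma)      # gradient at the adapted policy
    return (np.eye(theta.shape[0]) + alpha * H) @ d2
```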
A.1.2 EXPERIMENTS WITH FIRST-ORDER ES-MAML
Unlike zero-order ES-MAML (Algorithm 3), the first-order ES-MAML explicitly builds an approximation of the Hessian of fT . Given the literature on PG-MAML, we expect that estimating the Hessian ∇2f̃T (θ) with Algorithm 4 without any control variates may have high variance. We compare two variants of first-order ES-MAML:
1. The full version (FO-Hessian) specified in Algorithm 5.
2. The ‘first-order approximation’ (FO-NoHessian) which ignores the term I+α∇2f̃T (θ) and approximates the MAML gradient as ET∼P(T )∇f̃T (θ + α∇f̃T (θ)). This is equivalent to setting H(i) = 0 in line 5 of Algorithm 5.
The results on the four corner exploration problem (Section 4.1) and the Forward-Backward Ant, using Linear policies, are shown in Figure A1. On Forward-Backward Ant, FO-NoHessian actually outperformed FO-Hessian, so the inclusion of the Hessian term actually slowed convergence. On the four corners task, both FO-Hessian and FO-NoHessian have large error bars, and FO-Hessian slightly outperforms FO-NoHessian.
There is conflicting evidence as to whether the same phenomenon occurs with PG-MAML; (Finn et al., 2017, §5.2) found that on supervised learning MAML, omitting Hessian terms is competitive but slightly worse than the full PG-MAML, and does not report comparisons with and without the Hessian on RL MAML. (Rothfuss et al., 2019; Liu et al., 2019) argue for the importance of the second-order terms in proper credit assignment, but use heavily modified estimators (LVC, control variates; see Section 2) in their experiments, so the performance is not directly comparable to the 'naive' estimator in Algorithm 4. Our interpretation is that Algorithm 4 has high variance, making the Hessian estimates inaccurate, which can slow training on relatively 'easier' tasks like Forward-Backward walking but possibly increase the exploration on four corners.

Figure A1: Comparisons between the FO-Hessian and FO-NoHessian variants of Algorithm 5.
We also compare FO-NoHessian against Algorithm 3 on Forward-Backward HalfCheetah and Ant in Figure A2. In this experiment, the two methods ran on servers with different numbers of workers available, so we measure the score by the total number of rollouts. We found that FO-NoHessian was slightly faster than Algorithm 3 when measured by rollouts on Ant, but FO-NoHessian had notably poor performance when the number of queries was low (K = 5) on HalfCheetah, and failed to reach similar scores as the others even after running for many more rollouts.
Figure A2: Comparisons between FO-NoHessian and Algorithm 3, by rollouts.
A.2 HANDLING ESTIMATOR BIAS
Since the adapted policy U(θ, T ) generally cannot be evaluated exactly, we cannot easily obtain unbiased estimates of fT (U(θ, T )). This problem arises for both PG-MAML and ES-MAML.
We consider PG-MAML first as an example. In PG-MAML, the adaptation operator is U(θ, T) = θ + α∇θEτ∼PT(τ|θ)[R(τ)]. In general, we can only obtain an estimate of ∇θEτ∼PT(τ|θ)[R(τ)] and not its exact value. However, the MAML gradient is given by

∇θJ(θ) = ET∼P(T)[Eτ′∼PT(τ′|θ′)[∇θ′ log PT(τ′|θ′) R(τ′) ∇θU(θ, T)]]   (11)
which requires exact sampling from the adapted trajectories τ ′ ∼ PT (τ ′|U(θ, T )). Since this is a nonlinear function of U(θ, T ), we cannot obtain unbiased estimates of ∇J(θ) by sampling τ ′ generated by an estimate of U(θ, T ).
In the case of ES-MAML, the adaptation operator is U(θ, T) = θ + α∇f̃(θ, T) = Eh u(θ, T; h) for h ∼ N(0, I), where u(θ, T; h) = θ + (α/σ) fT(θ + σh) h. Clearly, fT(u(θ, T; h)) is not an unbiased estimator of fT(U(θ, T)).
We may question whether using an unbiased estimator of fT (U(θ, T )) is likely to improve performance. One natural strategy is to reformulate the objective function so as to make the desired estimator unbiased. This happens to be the case for the algorithm E-MAML (Al-Shedivat et al., 2018), which treats the adaptation operator as an explicit function of K sampled trajectories and “moves the expectation outside”. That is, we now have an adaptation operator U(θ, T ; τ1, . . . , τK), and the objective function becomes
ET[Eτ1,...,τK∼PT(τ|θ) fT(U(θ, T; τ1, . . . , τK))]   (12)
An unbiased estimator for the E-MAML gradient can be obtained by sampling only from τ ∼ PT (τ |θ) (Al-Shedivat et al., 2018). However, it has been argued that by doing so, E-MAML does not properly assign credit to the pre-adaptation policy (Rothfuss et al., 2019). Thus, this particular mathematical strategy seems to be disadvantageous for RL.
The problem of finding estimators for function-of-expectations f(EX) is difficult and while general unbiased estimation methods exist (Blanchet et al., 2017), they are often complicated and suffer from high variance. In the context of MAML, ProMP compares the low variance curvature (LVC) estimator (Rothfuss et al., 2019), which is biased, against the unbiased DiCE estimator (Foerster et al., 2018), for the Hessian term in the MAML gradient, and found that the lower variance of LVC produced better performance than DiCE. Alternatively, control variates can be used to reduce the variance of the DiCE estimator, which is the approach followed in (Liu et al., 2019).
In the ES framework, the problem can also be formulated to avoid exactly evaluating U(·, T ), and hence circumvents the question of estimator bias. We observe an interesting connection between MAML and the stochastic composition problem. Let us define uh(θ, T ) = u(θ, T ;h) and fTg (θ) = fT (θ + σg). For a given task T , the MAML reward is given by
f̃T(U(θ, T)) = f̃T[Eh uh(θ, T)] = Eg fTg(Eh uh(θ, T)).   (13)
This is a two-layer nested stochastic composition problem with outer function f̃T = Eg fTg and inner function U(·, T) = Eh uh(·, T). An accelerated algorithm (ASC-PG) was developed in (Wang et al., 2017) for this class of problems. While neither fTg nor uh(·, T) is smooth, which is assumed in (Wang et al., 2017), we can verify that the crucial content of the assumptions holds:
1. Eh uh(θ, T) = U(θ, T).
2. We can define two functions

ζTg(θ) = (1/σ) fTg(θ) g,    ξTh(θ) = I + (α/σ²) (fTh(θ) h hᵀ − fTh(θ) I)

such that for any θ1, θ2,

Eg,h[ξTh(θ1) ζTg(θ2)] = JU(θ1, T) ∇f̃T(θ2),

where JU denotes the Jacobian of U(·, T), and g, h are independent vectors sampled from N(0, I). This follows immediately from equation 4 and equation 10.
The ASC-PG algorithm does not immediately extend to the full MAML problem, as upon taking an outer expectation over T , the MAML reward J(θ) = ETEgfTg (Ehuh(θ, T )) is no longer a stochastic composition of the required form. In particular, there are conceptual difficulties when the number of tasks in T is infinite. However, it can be used to solve the MAML problem for each task within a consensus framework, such as consensus ADMM (Hong et al., 2016).
A.3 EXTENSIONS OF ES
In this section, we discuss several general techniques for improving the basic ES gradient estimator (Algorithm 1). These can be applied both to the ES gradient of the meta-training (the ‘outer loop’ of Algorithm 3), and more interestingly, to the adaptation operator itself. That is, given U(θ, T ) =
θ + α∇f̃Tσ (θ), we replace the estimation of U by ESGRAD on line 4 of Algorithm 3 with an improved estimator of ∇f̃Tσ (θ), which even may depend on data collected during the meta-training stage. Many techniques exist for reducing the variance of the estimator such as Quasi Monte Carlo sampling (Choromanski et al., 2018). Aside from variance reduction, there are also methods with special properties.
A.3.1 ACTIVE SUBSPACES
Active Subspaces is a method for finding a low-dimensional subspace where the contribution of the gradient is maximized. Conceptually, the goal is to find, and update on-the-fly, a low-rank subspace L so that the projection ∇fT(θ)L of ∇fT(θ) onto L is maximized, and to apply ∇fT(θ)L instead of ∇fT(θ). This should be done in such a way that ∇fT(θ) does not need to be computed explicitly. Optimizing in lower-dimensional subspaces might be computationally more efficient and can be thought of as an example of guided ES methods, where the algorithm is guided in how to explore the space in an anisotropic way, leveraging the knowledge about the function optimization landscape that it gained in previous steps of optimization. In the context of RL, the active subspace method ASEBO (Choromanski et al., 2019b) was successfully applied to speed up policy training algorithms. This strategy can also be made data-dependent in the MAML context, by learning an optimal subspace using data from the meta-training stage, and sampling from that subspace in the adaptation step.
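A heavily simplified sketch of this idea (not the full ASEBO algorithm) is to keep a buffer of recent gradient estimates, take their top principal directions as the active subspace L, and draw most perturbations from L while retaining some isotropic exploration; the rank and mixing fraction below are illustrative assumptions:

```python
import numpy as np

def active_subspace_perturbations(grad_buffer, n, d, rank=5, mix=0.8):
    """Sample n perturbations, a fraction `mix` of which lie in the subspace
    spanned by the top principal directions of recent gradient estimates."""
    # principal directions of the recent gradients define the active subspace L
    _, _, Vt = np.linalg.svd(np.asarray(grad_buffer), full_matrices=False)
    k = min(rank, Vt.shape[0])
    L = Vt[:k]                                      # (k, d) orthonormal basis of L
    n_sub = int(mix * n)
    in_subspace = np.random.randn(n_sub, k) @ L     # perturbations restricted to L
    isotropic = np.random.randn(n - n_sub, d)       # remaining isotropic exploration
    return np.vstack([in_subspace, isotropic])
```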
A.3.2 REGRESSION-BASED OPTIMIZATION
Regression-Based Optimization (RBO) is an alternative method of gradient estimation. From the Taylor series expansion we have f(θ + d) − f(θ) = ∇f(θ)ᵀd + O(‖d‖²). By taking multiple finite difference expressions f(θ + d) − f(θ) for different d, we can recover the gradient by solving a regularized regression problem. The regularization has an additional advantage: it was shown that the gradient can be recovered even if a substantial fraction of the rewards f(θ + d) are corrupted (Choromanski et al., 2019c). Strictly speaking, this is not based on the Gaussian smoothing as in ES, but is another method for estimating gradients using only zeroth-order evaluations.
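A minimal sketch of recovering the gradient by regularized regression on finite differences is given below; (Choromanski et al., 2019c) use a robust (L1-style) regularizer, whereas this sketch uses a simple ridge penalty for brevity, so it illustrates the idea rather than the exact RBO estimator:

```python
import numpy as np

def rbo_grad(f, theta, n, delta=0.05, lam=1e-3):
    """Estimate grad f(theta) by regularized regression on finite differences.

    Solves min_v sum_i (f(theta + d_i) - f(theta) - v^T d_i)^2 + lam * ||v||^2.
    """
    d = theta.shape[0]
    D = delta * np.random.randn(n, d)                        # perturbation directions d_i
    y = np.array([f(theta + di) for di in D]) - f(theta)     # finite differences
    # ridge-regularized least squares: v = (D^T D + lam I)^{-1} D^T y
    return np.linalg.solve(D.T @ D + lam * np.eye(d), D.T @ y)
```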
A.3.3 EXPERIMENTS
We present a preliminary experiment with RBO and ASEBO gradient adaptation in Figure A3. To be precise, the algorithms used are identical to Algorithm 3 except that in line 4, d(i) ← ESGRAD is replaced by d(i) ← RBO (yielding RBO-MAML) and d(i) ← ASEBO (yielding ASEBO-MAML) respectively.
Figure A3: RBO-MAML and ASEBO-MAML compared to ES-MAML.
On the left plot, we test for noise robustness on the Forward-Backward Swimmer MAML task, comparing standard ES-MAML (Algorithm 3) to RBO-MAML. To simulate noisy data, we randomly corrupt 25% of the queries fT (θ + σg) used to estimate the adaptation operator U(θ, T ) with an enormous additive noise. This is the same type of corruption used in (Choromanski et al., 2019c).
Interestingly, RBO does not appear to be more robust against noise than the standard MC estimator, which suggests that the original ES-MAML has some inherent robustness to noise.
On the right plot, we compare ASEBO-MAML to ES-MAML on the Goal-Velocity HalfCheetah task in the low-K setting. We found that when measured in iterations, ASEBO-MAML outperforms ES-MAML. However, ASEBO requires additional linear algebra operations and thus uses significantly more wall-clock time (not shown in plot) per iteration, so if measured by real time, then ES-MAML was more effective.
A.4 NAVIGATION-2D EXPLORATION TASK
Navigation-2D (Finn et al., 2017) is a classic environment where the agent must explore to adapt to the task. The agent is represented by a point on a 2D square, and at each time step, receives reward equal to its distance from a given target point on the square. Note that unlike the four corners and six circles tasks, the reward for Navigation-2D is dense. We visualize the differing exploration strategies learned by PG-MAML and ES-MAML in Figure A4. Notice that PG-MAML makes many tiny movements in multiple directions to ‘triangulate’ the target location using the differences in reward for different state-action pairs. On the other hand, ES-MAML learns a meta-policy such that each perturbation of the meta-policy causes the agent to move in a different direction (represented by red paths), so it can determine the target location from the total rewards of each path.
Figure A4: Comparing the exploration behavior of PG-MAML and ES-MAML on the Navigation-2D task. We use K = 20 queries for each algorithm.
A.5 PG-MAML RL BENCHMARKS
In Figure A5, we compare ES-MAML and PG-MAML on the Forward-Backward and Goal-Velocity tasks for HalfCheetah, Swimmer, Walker2d, and Ant, using the same values of K that were used in the original experiments of (Finn et al., 2017).
Figure A5: Comparisons between ES-MAML and PG-MAML using the queries K from (Finn et al., 2017).
A.6 REGRESSION AND SUPERVISED LEARNING
MAML has also been applied to supervised learning. We demonstrate ES-MAML on sine regression (Finn et al., 2017), where the task is to fit a sine curve f with unknown amplitude and phase given a set of K pairs (xi, f(xi)). The meta-policy must be able to learn that all of the tasks have a common periodic nature, so that it can correctly adapt to an unknown sine curve outside of the points xi.
For regression, the loss is the mean-squared error (MSE) between the adapted policy πθ(x) and the true curve f(x). Given data samples {(xi, f(xi))}_{i=1}^K, the empirical loss is L(θ) = (1/K) Σ_{i=1}^K (f(xi) − πθ(xi))². Note that unlike in reinforcement learning, we can exactly compute ∇L(θ); for deep networks, this is by automatic differentiation. Thus, we opt to use Tensorflow to compute the adaptation operator U(θ, T) in Algorithm 3. This is in accordance with the general principle that when gradients are available, it is more efficient to use the gradient than to approximate it by a zero-order method (Nesterov & Spokoiny, 2017).
We show several results in Figure A6. The adaptation step size is α = 0.01, which is the same as in (Finn et al., 2017). For comparison, (Finn et al., 2017) reports that PG-MAML can obtain a loss of ≈ 0.5 after one adaptation step with K = 5, though it is not specified how many iterations the meta-policy was trained for. ES-MAML approaches the same level of performance, though the number of training iterations required is higher than for the RL tasks, and surprisingly high for what appears to be a simpler problem. This is likely again a reflection of the fact that for problems such as regression where the gradients are available, it is more efficient to use gradients.
As an aside, this leads to a related question of the correct interpretation of the query number K in the supervised setting. There is a distinction between obtaining a data sample (xi, f(xi)), and doing a computation (such as a gradient) using that sample. If the main bottleneck is collecting the data {(xi, f(xi))}, then we may be satisfied with any algorithm that performs any number of operations on the data, as long as it uses only K samples. On the other hand, in the (on-policy) RL setting, samples cannot typically be 're-used' to the same extent, because rollouts τ sampled with a given policy πθ follow an unknown distribution P(τ|θ) which reduces their usefulness away from θ. Thus, the corresponding notion to rollouts in the SL setting would be the number of backpropagations (for PG-MAML) or perturbations (for ES-MAML), but clearly these have different relative costs than doing simulations in RL.

Figure A6: The MSE of the adapted policy, for varying number of gradient steps and query number K. Runs are averaged across 3 seeds.
A.7 HYPERPARAMETERS AND SETUPS
A.7.1 ENVIRONMENTS
Unless otherwise explicitly stated, we default to K = 20 and horizon = 200 for all RL experiments. We also use the standard reward normalization in (Mania et al., 2018), and use a global state normalization (i.e. the same mean, standard deviation normalization values for MDP states are shared across workers).
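For concreteness, such a state normalizer can be implemented as a running mean/variance tracker whose statistics are shared across workers; a minimal single-process sketch (omitting the cross-worker averaging) is shown below:

```python
import numpy as np

class RunningStateNormalizer:
    """Tracks a running mean and variance of observed MDP states and whitens new states."""

    def __init__(self, state_dim, eps=1e-8):
        self.n = 0
        self.mean = np.zeros(state_dim)
        self.m2 = np.zeros(state_dim)   # sum of squared deviations (Welford's algorithm)
        self.eps = eps

    def update(self, state):
        self.n += 1
        delta = state - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (state - self.mean)

    def normalize(self, state):
        std = np.sqrt(self.m2 / max(self.n - 1, 1)) + self.eps
        return (state - self.mean) / std
```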
For the Ant environments (Goal-Position Ant, Forward-Backward Ant), there are significant differences in weighting on the auxiliary rewards such as control costs, contact costs, and survival rewards across different previous work (e.g. those costs are downweighted in (Finn et al., 2017) whereas the coefficients are vanilla Gym weightings in (Liu et al., 2019)). These auxiliary rewards can lead to local minima, such as the agent staying stationary to collect the survival bonus which may be confused with movement progress when presenting a training curve. To make sure the agent is explicitly performing the required task, we opted to remove such costs in our work and only present the main goal-distance cost and forward-movement reward respectively.
For the other environments, we used default weightings and rewards, since they do not change across previous works.
A.7.2 ES-MAML HYPERPARAMETERS
Let N be the number of possible distinct tasks. We sample tasks without replacement, which is important if N ≤ 5, as each worker performs adaptations on all possible tasks. For standard ES-MAML (Algorithm 3), we used the following settings.
Setting                                               Value
(Total Workers, # Perturbations, # Current Evals)     (300, 150, 150)
(Train Set Size, Task Batch Size, Test Set Size)      (50, 5, 5) or (N, N, N)
Number of rollouts per parameter                      1
Number of Perturbations per worker                    1
Outer-Loop Precision Parameter                        0.1
Adaptation Precision Parameter                        0.1
Outer-Loop Step Size                                  0.01
Adaptation Step Size (α)                              0.05
Hidden Layer Width                                    32
ES Estimation Type                                    Forward-FD
Reward Normalization                                  True
State Normalization                                   True
For ES-MAML and PG-MAML, we took 3 seeded runs, using the default TRPO hyperparameters found in (Liu et al., 2019). | 1. What are the advantages of the proposed method over prior works?
2. How does the method compare to PG-MAML regarding performance and robustness?
3. Are there any concerns or questions about hyperparameter sensitivity and how they were chosen?
4. Would it be interesting to explore the method's advantages over vanilla MAML in other applications?
5. Is there a comparison of the efficiency of ES-MAML and PG-MAML? | Review | Review
The authors propose a new method for model agnostic meta learning (MAML) based on evolution strategies (ES) rather than policy gradients (PG). The proposed method has clear advantages over prior work: it is conceptually much simpler, simpler to implement and is a zero-order method (while PG-MAML requires 2nd order derivatives and differentiation through the update steps). Also, the method natively allows to incorporate methods from evolution strategies, e.g., to improve exploration. Empirical results are convincing: ES-MAML consistently outperforms PG-MAML (or is at least not worse) on various tasks. Also, ES-MAML seems to be much more robust compared to PG-MAML, which is known to be brittle. The paper is well motivated and well written. The mathematical formalism is precise.
Comment/questions:
- PG-MAML is known to be very sensitive w.r.t. hyperparameters, is this also the case for ES-MAML? How were good hyperparameters found for ES-MAML?
- While this work focuses on RL, it would be interesting to see if ES-MAML is also advantageous over vanilla MAML for common few-shot learning image classification problems.
- What’s the efficiency of ES-MAML compared to PG-MAML in terms of wall-clock time?
- (minor:) multiple times in the paper, \citep{} and \citet{} are used incorrectly.
ICLR | Title
ES-MAML: Simple Hessian-Free Meta Learning
Abstract
We introduce ES-MAML, a new framework for solving the model agnostic meta learning (MAML) problem based on Evolution Strategies (ES). Existing algorithms for MAML are based on policy gradients, and incur significant difficulties when attempting to estimate second derivatives using backpropagation on stochastic policies. We show how ES can be applied to MAML to obtain an algorithm which avoids the problem of estimating second derivatives, and is also conceptually simple and easy to implement. Moreover, ES-MAML can handle new types of non-smooth adaptation operators, and other techniques for improving performance and estimation of ES methods become applicable. We show empirically that ES-MAML is competitive with existing methods and often yields better adaptation with fewer queries.
1 INTRODUCTION
Meta-learning is a paradigm in machine learning that aims to develop models and training algorithms which can quickly adapt to new tasks and data. Our focus in this paper is on meta-learning in reinforcement learning (RL), where data efficiency is of paramount importance because gathering new samples often requires costly simulations or interactions with the real world. A popular technique for RL meta-learning is Model Agnostic Meta Learning (MAML) (Finn et al., 2017; 2018), a model for training an agent which can quickly adapt to new and unknown tasks by performing one (or a few) gradient updates in the new environment. We provide a formal description of MAML in Section 2.
MAML has proven to be successful for many applications. However, implementing and running MAML continues to be challenging. One major complication is that the standard version of MAML requires estimating second derivatives of the RL reward function, which is difficult when using backpropagation on stochastic policies; indeed, the original implementation of MAML (Finn et al., 2017) did so incorrectly, which spurred the development of unbiased higher-order estimators (DiCE, (Foerster et al., 2018)) and further analysis of the credit assignment mechanism in MAML (Rothfuss et al., 2019). Another challenge arises from the high variance inherent in policy gradient methods, which can be ameliorated through control variates such as in T-MAML (Liu et al., 2019), through careful adaptive hyperparameter tuning (Behl et al., 2019; Antoniou et al., 2019) and learning rate annealing (Loshchilov & Hutter, 2017).
To avoid these issues, we propose an alternative approach to MAML based on Evolution Strategies (ES), as opposed to the policy gradient underlying previous MAML algorithms. We provide a detailed discussion of ES in Section 3.1. ES has several advantages:
1. Our zero-order formulation of ES-MAML (Section 3.2, Algorithm 3) does not require estimating any second derivatives. This dodges the many issues caused by estimating second derivatives with backpropagation on stochastic policies (see Section 2 for details).
2. ES is conceptually much simpler than policy gradients, which also translates to ease of implementation. It does not use backpropagation, so it can be run on CPUs only.
3. ES is highly flexible with different adaptation operators (Section 3.3).
4. ES allows us to use deterministic policies, which can be safer when doing adaptation (Section 4.3). ES is also capable of learning linear and other compact policies (Section 4.2).
On the point (4), a feature of ES algorithms is that exploration takes place in the parameter space. Whereas policy gradient methods are primarily motivated by interactions with the environment through randomized actions, ES is driven by optimization in high-dimensional parameter spaces with an expensive querying model. In the context of MAML, the notions of “exploration” and “task identification” have thus been shifted to the parameter space instead of the action space. This distinction plays a key role in the stability of the algorithm. One immediate implication is that we can use deterministic policies, unlike policy gradients which is based on stochastic policies. Another difference is that ES uses only the total reward and not the individual state-action pairs within each episode. While this may appear to be a weakness, since less information is being used, we find in practice that it seems to lead to more stable training profiles.
This paper is organized as follows. In Section 2, we give a formal definition of MAML, and discuss related works. In Section 3, we introduce Evolutionary Strategies and show how ES can be applied to create a new framework for MAML. In Section 4, we present numerical experiments, highlighting the topics of exploration (Section 4.1), the utility of compact architectures (Section 4.2), the stability of deterministic policies (Section 4.3), and comparisons against existing MAML algorithms in the few-shot regime (Section 4.4). Additional material can be found in the Appendix.
2 MODEL AGNOSTIC META LEARNING IN RL
We first discuss the original formulation of MAML (Finn et al., 2017). Let T be a set of reinforcement learning tasks with common state and action spaces S,A, and P(T ) a distribution over T . In the standard MAML setting, each task Ti ∈ T has an associated Markov Decision Process (MDP) with transition distribution qi(st+1|st, at), an episode length H , and a reward function RTi which maps a trajectory τ = (s0, a1, ..., aH−1, sH) to the total reward R(τ). A stochastic policy is a function π : S → P(A) which maps states to probability distributions over the action space. A deterministic policy is a function π : S → A. Policies are typically encoded by a neural network with parameters θ, and we often refer to the policy πθ simply by θ.
The MAML problem is to find the so-called MAML point (called also a meta-policy), which is a policy θ∗ that can be ‘adapted’ quickly to solve an unknown task T ∈ T by taking a (few)1 policy gradient steps with respect to T . The optimization problem to be solved in training (in its one-shot version) is thus of the form:
maxθ J(θ) := ET∼P(T)[Eτ′∼PT(τ′|θ′)[RT(τ′)]],   (1)
where: θ′ = U(θ, T ) = θ + α∇θEτ∼PT (τ |θ)[RT (τ)] is called the adapted policy for a step size α > 0 and PT (·|η) is a distribution over trajectories given task T ∈ T and conditioned on the policy parameterized by η.
Standard MAML approaches are based on the following expression for the gradient of the MAML objective function (1) to conduct training:
∇θJ(θ) = ET∼P(T)[Eτ′∼PT(τ′|θ′)[∇θ′ log PT(τ′|θ′) RT(τ′) ∇θU(θ, T)]].   (2)
We collectively refer to algorithms based on computing (2) using policy gradients as PG-MAML.
1We adopt the common convention of defining the adaptation operator with a single gradient step, to simplify notation. It can be extended to multiple steps.
Since the adaptation operator U(θ, T) contains the policy gradient ∇θEτ∼PT(τ|θ)[R(τ)], its own gradient ∇θU(θ, T) is second-order in θ:

∇θU = I + α ∫ PT(τ|θ) ∇²θ log πθ(τ) RT(τ) dτ + α ∫ PT(τ|θ) ∇θ log πθ(τ) ∇θ log πθ(τ)ᵀ RT(τ) dτ.   (3)

Correctly computing the gradient (2) with the term (3) using automatic differentiation is known to be tricky. Multiple authors (Foerster et al., 2018; Rothfuss et al., 2019; Liu et al., 2019) have pointed out that the original implementation of MAML incorrectly estimates the term (3), which inadvertently causes the training to lose 'pre-adaptation credit assignment'. Moreover, even when correctly implemented, the variance when estimating (3) can be extremely high, which impedes training. To improve on this, extensions to the original MAML include ProMP (Rothfuss et al., 2019), which introduces a new low-variance curvature (LVC) estimator for the Hessian, and T-MAML (Liu et al., 2019), which adds control variates to reduce the variance of the unbiased DiCE estimator (Foerster et al., 2018). However, these are not without their drawbacks: the proposed solutions are complicated, the variance of the Hessian estimate remains problematic, and LVC introduces unknown estimator bias.
Another issue that arises in PG-MAML is that policies are necessarily stochastic. However, randomized actions can lead to risky exploration behavior when computing the adaptation, especially for robotics applications where the collection of tasks may involve differing system dynamics as opposed to only differing rewards (Yang et al., 2019). We explore this further in Section 4.3.
These issues: the difficulty of estimating the Hessian term (3), the typically high variance of∇θJ(θ) for policy gradient algorithms in general, and the unsuitability of stochastic policies in some domains, lead us to the proposed method ES-MAML in Section 3.
Aside from policy gradients, there have also been biologically-inspired algorithms for MAML, based on concepts such as the Baldwin effect (Fernando et al., 2018). However, we note that despite the similar naming, methods such as 'Evolvability ES' (Gajewski et al., 2019) bear little resemblance to our proposed ES-MAML. The problem solved by our algorithm is the standard MAML, whereas (Gajewski et al., 2019) aims to maximize loosely related notions of the diversity of behavioral characteristics. Moreover, ES-MAML and the extensions we consider are all derived from notions such as smoothings and approximations, with rigorous mathematical definitions as stated below.
3 ES-MAML ALGORITHMS
Formulating MAML with ES allows us to apply to MAML numerous techniques originally developed for enhancing ES. We aim to improve both phases of the MAML algorithm: the meta-learning training algorithm, and the efficiency of the adaptation operator.
3.1 EVOLUTION STRATEGIES METHODS (ES)
Evolution Strategies (ES) (Wierstra et al., 2008; 2014), which recently became popular for RL (Salimans et al., 2017), rely on optimizing the smoothing of the blackbox function f : Rd → R, which takes as input parameters θ ∈ Rd of the policy and outputs total discounted (expected) reward obtained by an agent applying that policy in the given environment. Instead of optimizing the function f directly, we optimize a smoothed objective. We define the Gaussian smoothing of F as f̃σ(θ) = Eg∼N (0,Id)[f(θ + σg)]. The gradient of this smoothed objective, sometimes called an ES-gradient, is given as (see: (Nesterov & Spokoiny, 2017)):
∇θf̃σ(θ) = 1
σ Eg∼N (0,Id)[f(θ + σg)g]. (4)
Note that the gradient can be approximated via Monte Carlo (MC) samples:
In ES literature the above algorithm is often modified by adding control variates to equation 4 to obtain other unbiased estimators with reduced variance. The forward finite difference (Forward-FD) estimator (Choromanski et al., 2018) is given by subtracting the current policy value f(θ), yielding ∇θf̃σ(θ) = 1σEg∼N (0,Id)[(f(θ + σg) − f(θ))g]. The antithetic estimator (Nesterov & Spokoiny, 2017; Mania et al., 2018) is given by the symmetric difference ∇θf̃σ(θ) = 12σEg∼N (0,Id)[(f(θ +
1 ESGrad (f, θ, n, σ) inputs: function f , policy θ, number of perturbations n, precision σ 2 Sample n i.i.d N(0, I) vectors g1, . . . , gn; 3 return 1nσ ∑n i=1 f(θ + σgi)gi;
Algorithm 1: Monte Carlo ES Gradient
σg) − f(θ − σg))g]. Notice that the variance of the Forward-FD and antithetic estimators is translation-invariant with respect to f . In practice, the Forward-FD or antithetic estimator is usually preferred over the basic version expressed in equation 4.
In the next sections we will refer to Algorithm 1 for computing the gradient though we emphasize that there are several other recently developed variants of computing ES-gradients as well as applying them for optimization. We describe some of these variants in Section 3.3 and appendix A.3. A key feature of ES-MAML is that we can directly make use of new enhancements of ES.
3.2 META-TRAINING MAML WITH ES
To formulate MAML in the ES framework, we take a more abstract viewpoint. For each task T ∈ T , let fT (θ) be the (expected) cumulative reward of the policy θ. We treat fT as a blackbox, and make no assumptions on its structure (so the task need not even be MDP, and fT may be nonsmooth). The MAML problem is then
max θ J(θ) := ET∼P(T )fT (U(θ, T )). (5)
As argued in (Liu et al., 2019; Rothfuss et al., 2019) (see also Section 2), a major challenge for policy gradient MAML is estimating the Hessian, which is both conceptually subtle and difficult to correctly implement using automatic differentiation. The algorithm we propose obviates the need to calculate any second derivatives, and thus avoids this issue.
Suppose that we can evaluate (or approximate) fT (θ) and U(θ, T ), but fT and U(·, T ) may be nonsmooth or their gradients may be intractable. We consider the Gaussian smoothing J̃σ of the MAML reward (5), and optimize J̃σ using ES methods. The gradient∇J̃σ(θ) is given by
∇J̃σ(θ) = E T∼P(T ) g∼N (0,I)
[ 1
σ fT (U(θ + σg, T ))g
] (6)
and can be estimated by jointly sampling over (T,g) and evaluating fT (U(θ + σg, T )). This algorithm is specified in Algorithm 2 box, and we refer to it as (zero-order) ES-MAML.
Data: initial policy θ0, meta step size β 1 for t = 0, 1, . . . do 2 Sample n tasks T1, . . . , Tn and iid vectors g1, . . . ,gn ∼ N (0, I); 3 foreach (Ti,gi) do 4 vi ← fTi(U(θt + σgi, Ti)) 5 end 6 θt+1 ← θt + βσn ∑n i=1 vigi 7 end Algorithm 2: Zero-Order ES-MAML (general adaptation operator U(·, T ))
Data: initial policy θ0, adaptation step size α, meta step size β, number of queries K 1 for t = 0, 1, . . . do 2 Sample n tasks T1, . . . , Tn and iid vectors g1, . . . ,gn ∼ N (0, I); 3 foreach (Ti,gi) do 4 d(i) ← ESGRAD(fTi , θt + σgi,K, σ); 5 θ
(i) t ← θt + σgi + αd(i);
6 vi ← fTi(θ(i)t ); 7 end 8 θt+1 ← θt + βσn ∑n i=1 vigi; 9 end Algorithm 3: Zero-Order ES-MAML with ESGradient Adaptation
The standard adaptation operator U(·, T ) is the one-step task gradient. Since fT is permitted to be nonsmooth in our setting, we use the adaptation operator U(θ, T ) = θ + α∇f̃Tσ (θ) acting on its smoothing. Expanding the definition of J̃σ , the gradient of the smoothed MAML is then given by
∇J̃σ(θ) = 1
σ E T∼P(T ) g∼N (0,I)
[ fT ( θ + σg + 1
σ Eh∼N (0,I)[fT (θ + σg + σh)h]
) g ] . (7)
This leads to the algorithm that we specify in Algorithm 3, where the adaptation operator U(·, T ) is itself estimated using the ES gradient in the inner loop.
We can also derive an algorithm analogous to PG-MAML by applying a first-order method to the MAML reward ET∼P(T )f̃T (θ + α∇f̃T (θ)) directly, without smoothing. The gradient is given by
∇J(θ) = ET∼P(T )∇f̃T (θ + α∇f̃T (θ))(I+ α∇2f̃T (θ)), (8)
which corresponds to equation (3) in (Liu et al., 2019) when expressed in terms of policy gradients. Every term in this expression has a simple Monte Carlo estimator (see Algorithm 4 in the appendix for the MC Hessian estimator). We discuss this algorithm in greater detail in Appendix A.1. This formulation can be viewed as the “MAML of the smoothing”, compared to the “smoothing of the MAML” which is the basis for Algorithm 3. It is the additional smoothing present in equation 6 which eliminates the gradient of U(·, T ) (and hence, the Hessian of fT ). Just as with the Hessian estimation in the original PG-MAML, we find empirically that the MC estimator of the Hessian (Algorithm 4) has high variance, making it often harmful in training. We present some comparisons between Algorithm 3 and Algorithm 5, with and without the Hessian term, in Appendix A.1.2.
Note that when U(·, T ) is estimated, such as in Algorithm 3, the resulting estimator for∇J̃σ will in general be biased. This is similar to the estimator bias which occurs in PG-MAML because we do not have access to the true adapted trajectory distribution. We discuss this further in Appendix A.2.
3.3 IMPROVING THE ADAPTATION OPERATOR WITH ES
Algorithm 2 allows for great flexibility in choosing new adaptation operators. The simplest extension is to modify the ES gradient step: we can draw on general techniques for improving the ES gradient estimator, some of which are described in Appendix A.3. Some other methods are explored below.
3.3.1 IMPROVED EXPLORATION
Instead of using i.i.d Gaussian vectors to estimate the ES gradient in U(·, T ), we consider samples constructed according to Determinantal Point Processes (DPP). DPP sampling (Kulesza & Taskar, 2012; Wachinger & Golland, 2015) is a method of selecting a subset of samples so as to maximize the ‘diversity’ of the subset. It has been applied to ES to select perturbations gi so that the gradient estimator has lower variance (Choromanski et al., 2019a). The sampling matrix determining DPP sampling can also be data-dependent and use information from the meta-training stage to construct a learned kernel with better properties for the adaptation phase. In the experimental section we show that DPP-ES can help in improving adaptation in MAML.
3.3.2 HILL CLIMBING AND POPULATION SEARCH
Nondifferentiable operators U(·, T ) can be also used in Algorithm 2. One particularly interesting example is the local search operator given by U(θ, T ) = argmax{fT (θ′) : ‖θ′ − θ‖ ≤ R}, where R > 0 is the search radius. That is, U(θ, T ) selects the best policy for task T which is in a ‘neighborhood’ of θ. For simplicity, we took the search neighborhood to be the ball B(θ,R) here, but we may also use more general neighborhoods of θ. In general, exactly solving for the maximizer of fT over B(θ,R) is intractable, but local search can often be well approximated by a hill climbing algorithm. Hill climbing creates a population of candidate policies by perturbing the best observed policy (which is initialized to θ), evaluates the reward fT for each candidate, and then updates the best observed policy. This is repeated for several iterations. A key property of this search method is that the progress is monotonic, so the reward of the returned policy U(θ, T ) will always improve over θ. This does not hold for the stochastic gradient operator, and appears to be beneficial on some difficult problems (see Section 4.1). It has been claimed that hill climbing and other genetic algorithms (Moriarty et al., 1999) are competitive with gradient-based methods for solving difficult RL tasks (Such et al., 2017; Risi & Stanley, 2019). Another stochastic algorithm approximating local search is CMA-ES (Hansen et al., 2003; Igel, 2003; Krause et al., 2016), which performs more sophisticated search by adapting the covariance matrix of the perturbations.
4 EXPERIMENTS
The performance of MAML algorithms can be evaluated in several ways. One important measure is the performance of the final meta-policy: whether the algorithm can consistently produce metapolicies with better adaptation. In the RL setting, the adaptation of the meta-policy is also a function of the number K of queries used: that is, the number of rollouts used by the adaptation operator U(·, T ). The meta-learning goal of data efficiency corresponds to adapting with low K. The speed of the meta-training is also important, and can be measured in several ways: the number of metapolicy updates, wall-clock time, and the number of rollouts used for meta-training. In this section, we present experiments which evaluate various aspects of ES-MAML and PG-MAML in terms of data efficiency (K) and meta-training time. Further details of the environments and hyperparameters are given in Appendix A.7.
In the RL setting, the amount of information used drastically decreases if ES methods are applied in comparison to the PG setting. To be precise, ES uses only the cumulative reward over an episode, whereas policy gradients use every state-action pair. Intuitively, we may thus expect that ES should have worse sampling complexity because it uses less information for the same number of rollouts. However, it seems that in practice ES often matches or even exceeds policy gradients approaches (Salimans et al., 2017; Mania et al., 2018). Several explanations have been proposed: In the PG case, especially with algorithms such as PPO, the network must optimize multiple additional surrogate objectives such as entropy bonuses and value functions as well as hyperparameters such as the TDstep number. Furthermore, it has been argued that ES is more robust against delayed rewards, action infrequency, and long time horizons (Salimans et al., 2017). These advantages of ES in traditional RL also transfer to MAML, as we show empirically in this section. ES may lead to additional advantages (even if the numbers of rollouts needed in training is comparable with PG ones) in terms of wall-clock time, because it does not require backpropagation, and can be parallelized over CPUs.
4.1 EXPLORATION: TARGET ENVIRONMENTS
In this section, we present two experiments on environments with very sparse rewards where the meta-policy must exhibit exploratory behavior to determine the correct adaptation.
The four corners benchmark was introduced in (Rothfuss et al., 2019) to demonstrate the weaknesses of exploration in PG-MAML. An agent on a 2D square receives reward for moving towards a selected corner of the square, but only observes rewards once it is sufficiently close to the target corner, making the reward sparse. An effective exploration strategy for this set of tasks is for the meta-policy θ∗ to travel in circular trajectories to observe which corner produces rewards; however, for a single policy to produce this exploration behavior is difficult. In Figure 1, we demonstrate the behavior of ES-MAML on the four corners problem. When K = 20, the same number of rollouts for adaptation as used in (Rothfuss et al., 2019), the basic version of Algorithm 3 is able to correctly explore and adapt to the task by finding the target corner. Moreover, it does not require any modifications to encourage exploration, unlike PG-MAML. We further used K = 10, 5, which caused the performance to drop. For better performance in this low-information environment, we experimented with two different adaptation operators U(·, T ) in Algorithm 2, which are HC (hill climbing) and DPP-ES. The standard ES gradient is denoted MC.
Furthermore, ES-MAML is not limited to “single goal” exploration. We created a more difficult task, six circles, where the agent continuously accrues negative rewards until it reaches six target points to “deactivate” them. Solving this task requires the agent to explore in circular trajectories, similar to the trajectory used by PG-MAML on the four corners task. We visualize the behavior in Figure 2. Observe that ES-MAML with the HC operator is able to develop a strategy to explore the target locations.
Additional examples on the classic Navigation-2D task are presented in Appendix A.4, highlighting the differences in exploration behavior between PG-MAML and ES-MAML.
4.2 GOOD ADAPTATION WITH COMPACT ARCHITECTURES
One of the main benefits of ES is due to its ability to train compact linear policies, which can outperform hidden-layer policies. We demonstrate this on several benchmark MAML problems in the HalfCheetah and Ant environments in Figure 3. In contrast, (Finn & Levine, 2018) observed that PG-MAML empirically and theoretically suggested that training with more deeper layers under SGD increases performance. We demonstrate that on the Forward-Backward and Goal-Velocity MAML benchmarks, ES-MAML is consistently able to train successful linear policies faster than deep networks. We also show that, for the Forward-Backward Ant problem, ES-MAML with the new HC operator is the most performant. Using more compact policies also directly speeds up ES-MAML, since fewer perturbations are needed for gradient estimation.
4.3 DETERMINISTIC POLICIES
We find that deterministic policies often produce more stable behaviors than the stochastic ones that are required for PG, where randomized actions in unstable environments can lead to catastrophic outcomes. In PG, this is often mitigated by reducing the entropy bonus, but this has an undesirable side effect of reducing exploration. In contrast, ES-MAML explores in parameter space, which mitigates this issue. To demonstrate this, we use the “Biased-Sensor CartPole” environment from (Yang et al., 2019). This environment has unstable dynamics and sparse rewards, so it requires exploration but is also risky. We see in Figure 4 that ES-MAML is able to stably maintain the maximum reward (500).
We also include results in Figure 4 from two other environments, Swimmer and Walker2d, for which it is known that PG is surprisingly unstable, and ES yields better training (Mania et al., 2018). Notice that we again find linear policies (L) outperforming policies with one (H) or two (HH) hidden layers.
4.4 LOW-K BENCHMARKS
For real-world applications, we may be constrained to use fewer queries K than has typically been demonstrated in previous MAML works. Hence, it is of interest to compare how ES-MAML compares to PG-MAML for adapting with very low K.
One possible concern is that low K might harm ES in particular because it uses only the cumulative rewards; if for example K = 5, then the ES adaptation gradient can make use of only 5 values. In comparison, PG-MAML uses K · H state-action pairs, so for K = 5, H = 200, PG-MAML still has 1000 pieces of information available.
However, we find experimentally that the standard ES-MAML (Algorithm 3) remains competitive with PG-MAML even in the low-K setting. In Figure 5, we compare ES-MAML and PG-MAML on the Forward-Backward and Goal-Velocity tasks across four environments (HalfCheetah, Swimmer, Walker2d, Ant) and two model architectures. While PG-MAML can generally outperform ES-MAML on the Goal-Velocity task, ES-MAML is similar or better on the Forward-Backward task. Moreover, we observed that for low K, PG-MAML can be highly unstable (note the wide error bars), with some trajectories failing catastrophically, whereas ES-MAML is relatively stable. This is an important consideration in real applications, where the risk of catastrophic failure is undesirable.
5 CONCLUSION
We have presented a new framework for MAML based on ES algorithms. The ES-MAML approach avoids the problems of Hessian estimation which necessitated complicated alterations in PG-MAML and is straightforward to implement. ES-MAML is flexible in the choice of adaptation operators, and can be augmented with general improvements to ES, along with more exotic adaptation operators. In particular, ES-MAML can be paired with nonsmooth adaptation operators such as hill climbing, which we found empirically to yield better exploratory behavior and better performance on sparse-reward environments. ES-MAML performs well with linear or compact deterministic policies, which is an advantage when adapting if the state dynamics are possibly unstable.
A.1 FIRST-ORDER ES-MAML
A.1.1 ALGORITHM
Suppose that we first apply Gaussian smoothing to the task rewards and then form the MAML problem, so we have J(θ) = ET∼P(T )f̃T (U(θ, T )). The function J is then itself differentiable, and we can directly apply first-order methods to it. The classical case where U(θ, T ) = θ + α∇f̃T (θ) yields the gradient
∇J(θ) = ET∼P(T) ∇f̃T(θ + α∇f̃T(θ)) (I + α∇²f̃T(θ)).   (9)
This is analogous to formulas obtained in, e.g., (Liu et al., 2019) for the policy gradient MAML. We can then approximate this gradient as an input to stochastic first-order methods. An example with standard SGD is shown in Algorithm 5.
1 ESHess(f, θ, n, σ)
  inputs: function f, policy θ, number of perturbations n, precision σ
2 Sample i.i.d. N(0, I) vectors g1, . . . , gn;
3 v ← (1/n) ∑_{i=1}^{n} f(θ + σgi);
4 H0 ← (1/n) ∑_{i=1}^{n} f(θ + σgi) gi giᵀ;
5 return (1/σ²)(H0 − v · I);
Algorithm 4: Monte Carlo ES Hessian

Data: initial policy θ0, adaptation step size α, meta step size β, number of queries K
1 for t = 0, 1, . . . do
2   Sample n tasks T1, . . . , Tn;
3   foreach Ti do
4     d1(i) ← ESGRAD(fTi, θt, K, σ);
5     H(i) ← ESHESS(fTi, θt, K, σ);
6     θt(i) ← θt + α · d1(i);
7     d2(i) ← ESGRAD(fTi, θt(i), K, σ);
8   end
9   θt+1 ← θt + (β/n) ∑_{i=1}^{n} (I + αH(i)) d2(i);
10 end
Algorithm 5: First Order ES-MAML
A central problem, as discussed in (Rothfuss et al., 2019; Liu et al., 2019), is the estimation of ∇²f̃T(θ). However, a simple expression exists for this object in the ES setting; it can be shown that
∇²f̃T(θ) = (1/σ²) (Eh∼N(0,I)[fT(θ + σh)hhᵀ] − f̃T(θ)I).   (10)
Note that for the vector h, hT is the transpose (and unrelated to tasks T ). A basic MC estimator is shown in Algorithm 4. Given an independent estimator for ∇f̃T (θ + α∇f̃T (θ)), we can then take the product to obtain an estimator for∇J .
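As a concrete illustration, the following is a minimal sketch of this estimator (the use of Python/NumPy and the toy quadratic blackbox are our own illustrative assumptions, not part of the paper):

import numpy as np

def es_hessian(f, theta, n, sigma):
    # MC estimate of the Hessian of the Gaussian smoothing of f at theta,
    # following equation (10): (1/sigma^2) * (E[f(theta + sigma*h) h h^T] - f_tilde(theta) * I).
    d = theta.shape[0]
    H0 = np.zeros((d, d))
    v = 0.0
    for _ in range(n):
        h = np.random.randn(d)
        fval = f(theta + sigma * h)
        v += fval / n                      # MC estimate of the smoothed value f_tilde(theta)
        H0 += fval * np.outer(h, h) / n    # MC estimate of E[f(theta + sigma*h) h h^T]
    return (H0 - v * np.eye(d)) / sigma ** 2

# Toy check on a quadratic blackbox whose smoothed Hessian is -2I:
f = lambda x: -np.sum(x ** 2)
print(np.round(es_hessian(f, np.zeros(3), n=20000, sigma=0.1), 1))

Note that the estimator reuses the same queries f(θ + σh) for both the value and the outer-product terms, mirroring lines 3-4 of Algorithm 4.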
A.1.2 EXPERIMENTS WITH FIRST-ORDER ES-MAML
Unlike zero-order ES-MAML (Algorithm 3), the first-order ES-MAML explicitly builds an approximation of the Hessian of fT . Given the literature on PG-MAML, we expect that estimating the Hessian ∇2f̃T (θ) with Algorithm 4 without any control variates may have high variance. We compare two variants of first-order ES-MAML:
1. The full version (FO-Hessian) specified in Algorithm 5.
2. The ‘first-order approximation’ (FO-NoHessian) which ignores the term I+α∇2f̃T (θ) and approximates the MAML gradient as ET∼P(T )∇f̃T (θ + α∇f̃T (θ)). This is equivalent to setting H(i) = 0 in line 5 of Algorithm 5.
The results on the four corners exploration problem (Section 4.1) and the Forward-Backward Ant, using Linear policies, are shown in Figure A1. On Forward-Backward Ant, FO-NoHessian actually outperformed FO-Hessian, so the inclusion of the Hessian term slowed convergence. On the four corners task, both FO-Hessian and FO-NoHessian have large error bars, and FO-Hessian slightly outperforms FO-NoHessian.
Figure A1: Comparisons between the FO-Hessian and FO-NoHessian variants of Algorithm 5.
There is conflicting evidence as to whether the same phenomenon occurs with PG-MAML; (Finn et al., 2017, §5.2) found that on supervised learning MAML, omitting Hessian terms is competitive but slightly worse than the full PG-MAML, and does not report comparisons with and without the Hessian on RL MAML. (Rothfuss et al., 2019; Liu et al., 2019) argue for the importance of the second-order terms in proper credit assignment, but use heavily modified estimators (LVC, control variates; see Section 2) in their experiments, so the performance is not directly comparable to the 'naive' estimator in Algorithm 4. Our interpretation is that Algorithm 4 has high variance, making the Hessian estimates inaccurate, which can slow training on relatively 'easier' tasks like Forward-Backward walking but possibly increase the exploration on four corners.
We also compare FO-NoHessian against Algorithm 3 on Forward-Backward HalfCheetah and Ant in Figure A2. In this experiment, the two methods ran on servers with different numbers of workers available, so we measure the score by the total number of rollouts. We found that FO-NoHessian was slightly faster than Algorithm 3 when measured by rollouts on Ant, but FO-NoHessian had notably poor performance when the number of queries was low (K = 5) on HalfCheetah, and failed to reach similar scores as the others even after running for many more rollouts.
Figure A2: Comparisons between FO-NoHessian and Algorithm 3, by rollouts.
A.2 HANDLING ESTIMATOR BIAS
Since the adapted policy U(θ, T ) generally cannot be evaluated exactly, we cannot easily obtain unbiased estimates of fT (U(θ, T )). This problem arises for both PG-MAML and ES-MAML.
We consider PG-MAML first as an example. In PG-MAML, the adaptation operator is U(θ, T ) = θ+α∇θEτ∼PT (τ |θ)[R(τ)]. In general, we can only obtain an estimate of∇θEτ∼PT (τ |θ)[R(τ)] and not its exact value. However, the MAML gradient is given by
∇θJ(θ) = ET∼P(T)[Eτ′∼PT(τ′|θ′)[∇θ′ logPT(τ′|θ′)R(τ′)∇θU(θ, T)]]   (11)
which requires exact sampling from the adapted trajectories τ ′ ∼ PT (τ ′|U(θ, T )). Since this is a nonlinear function of U(θ, T ), we cannot obtain unbiased estimates of ∇J(θ) by sampling τ ′ generated by an estimate of U(θ, T ).
In the case of ES-MAML, the adaptation operator is U(θ, T) = θ + α∇f̃T(θ) = Eh u(θ, T; h) for h ∼ N(0, I), where u(θ, T; h) = θ + (α/σ) fT(θ + σh)h. Clearly, fT(u(θ, T; h)) is not an unbiased estimator of fT(U(θ, T)).
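To make the bias concrete, here is a toy numerical illustration (entirely our own example, unrelated to the actual reward functions): plugging a noisy but unbiased estimate of U into a nonlinear f shifts the expected value.

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 2                         # a stand-in nonlinear "reward"
U = 1.0                                      # the exact adapted parameter
u = U + 0.5 * rng.standard_normal(100000)    # unbiased but noisy estimates of U

print(f(U))            # 1.0, the quantity we would like to estimate
print(np.mean(f(u)))   # roughly 1.25 = f(U) + Var(u): the plug-in estimate is biased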
We may question whether using an unbiased estimator of fT (U(θ, T )) is likely to improve performance. One natural strategy is to reformulate the objective function so as to make the desired estimator unbiased. This happens to be the case for the algorithm E-MAML (Al-Shedivat et al., 2018), which treats the adaptation operator as an explicit function of K sampled trajectories and “moves the expectation outside”. That is, we now have an adaptation operator U(θ, T ; τ1, . . . , τK), and the objective function becomes
ET[Eτ1,...,τK∼PT(τ|θ) fT(U(θ, T; τ1, . . . , τK))]   (12)
An unbiased estimator for the E-MAML gradient can be obtained by sampling only from τ ∼ PT (τ |θ) (Al-Shedivat et al., 2018). However, it has been argued that by doing so, E-MAML does not properly assign credit to the pre-adaptation policy (Rothfuss et al., 2019). Thus, this particular mathematical strategy seems to be disadvantageous for RL.
The problem of finding estimators for function-of-expectations f(EX) is difficult and while general unbiased estimation methods exist (Blanchet et al., 2017), they are often complicated and suffer from high variance. In the context of MAML, ProMP compares the low variance curvature (LVC) estimator (Rothfuss et al., 2019), which is biased, against the unbiased DiCE estimator (Foerster et al., 2018), for the Hessian term in the MAML gradient, and found that the lower variance of LVC produced better performance than DiCE. Alternatively, control variates can be used to reduce the variance of the DiCE estimator, which is the approach followed in (Liu et al., 2019).
In the ES framework, the problem can also be formulated to avoid exactly evaluating U(·, T ), and hence circumvents the question of estimator bias. We observe an interesting connection between MAML and the stochastic composition problem. Let us define uh(θ, T ) = u(θ, T ;h) and fTg (θ) = fT (θ + σg). For a given task T , the MAML reward is given by
f̃T (U(θ, T )) = f̃T [Ehuh(θ, T )] = EgfTg (Ehuh(θ, T )). (13)
This is a two-layer nested stochastic composition problem with outer function f̃T = Eg fTg and inner function U(·, T) = Eh uh(·, T). An accelerated algorithm (ASC-PG) was developed in (Wang et al., 2017) for this class of problems. While neither fTg nor uh(·, T) is smooth, which is assumed in (Wang et al., 2017), we can verify that the crucial content of the assumptions holds:
1. Eh uh(θ, T) = U(θ, T).
2. We can define two functions
   ζTg(θ) = (1/σ) fTg(θ) g,   ξTh(θ) = I + (α/σ²)(fTh(θ) h hᵀ − fTh(θ) I)
such that for any θ1, θ2,
   Eg,h[ξTh(θ1) ζTg(θ2)] = JU(θ1, T) ∇f̃T(θ2)
where JU denotes the Jacobian of U(·, T ), and g,h are independent vectors sampled from N (0, I). This follows immediately from equation 4 and equation 10.
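For completeness, the expansion behind this claim is short (our own verification, using only the independence of g and h together with equations 4 and 10):
   Eg,h[ξTh(θ1) ζTg(θ2)] = Eh[ξTh(θ1)] · Eg[ζTg(θ2)] = (I + α∇²f̃T(θ1)) · ∇f̃T(θ2) = JU(θ1, T) ∇f̃T(θ2),
since Eh[ξTh(θ1)] = I + α∇²f̃T(θ1) by equation 10 and Eg[ζTg(θ2)] = ∇f̃T(θ2) by equation 4.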
The ASC-PG algorithm does not immediately extend to the full MAML problem, as upon taking an outer expectation over T , the MAML reward J(θ) = ETEgfTg (Ehuh(θ, T )) is no longer a stochastic composition of the required form. In particular, there are conceptual difficulties when the number of tasks in T is infinite. However, it can be used to solve the MAML problem for each task within a consensus framework, such as consensus ADMM (Hong et al., 2016).
A.3 EXTENSIONS OF ES
In this section, we discuss several general techniques for improving the basic ES gradient estimator (Algorithm 1). These can be applied both to the ES gradient of the meta-training (the 'outer loop' of Algorithm 3), and more interestingly, to the adaptation operator itself. That is, given U(θ, T) = θ + α∇f̃Tσ(θ), we replace the estimation of U by ESGRAD on line 4 of Algorithm 3 with an improved estimator of ∇f̃Tσ(θ), which may even depend on data collected during the meta-training stage. Many techniques exist for reducing the variance of the estimator, such as Quasi Monte Carlo sampling (Choromanski et al., 2018). Aside from variance reduction, there are also methods with special properties.
A.3.1 ACTIVE SUBSPACES
Active Subspaces is a method for finding a low-dimensional subspace where the contribution of the gradient is maximized. Conceptually, the goal is to find and update on-the-fly a low-rank subspace L so that the projection ∇fT(θ)L of ∇fT(θ) onto L is maximized, and to apply ∇fT(θ)L instead of ∇fT(θ). This should be done in such a way that ∇fT(θ) does not need to be computed explicitly. Optimizing in lower-dimensional subspaces can be computationally more efficient and can be thought of as an example of guided ES methods, where the algorithm is guided in how to explore the space in an anisotropic way, leveraging the knowledge about the function optimization landscape that it gained in previous steps of optimization. In the context of RL, the active subspace method ASEBO (Choromanski et al., 2019b) was successfully applied to speed up policy training algorithms. This strategy can also be made data-dependent in the MAML context, by learning an optimal subspace using data from the meta-training stage, and sampling from that subspace in the adaptation step.
A.3.2 REGRESSION-BASED OPTIMIZATION
Regression-Based Optimization (RBO) is an alternative method of gradient estimation. From the Taylor series expansion we have f(θ + d) − f(θ) = ∇f(θ)ᵀd + O(‖d‖²). By taking multiple finite difference expressions f(θ + d) − f(θ) for different d, we can recover the gradient by solving a regularized regression problem. The regularization has an additional advantage: it was shown that the gradient can be recovered even if a substantial fraction of the rewards f(θ + d) are corrupted (Choromanski et al., 2019c). Strictly speaking, this is not based on Gaussian smoothing as in ES, but is another method for estimating gradients using only zeroth-order evaluations.
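As a rough sketch of this idea (ridge regularization and the NumPy implementation are our own simplifying choices; the robust variant of (Choromanski et al., 2019c) would use a different regularizer to handle corrupted rewards):

import numpy as np

def rbo_gradient(f, theta, n, sigma, reg=1e-3):
    # Recover grad f(theta) by regressing the finite differences
    # f(theta + d_i) - f(theta) onto the perturbation directions d_i.
    dim = theta.shape[0]
    D = sigma * np.random.randn(n, dim)          # perturbation directions d_i
    f0 = f(theta)
    y = np.array([f(theta + d) - f0 for d in D])
    # Ridge-regularized least squares: min_g ||D g - y||^2 + reg * ||g||^2.
    return np.linalg.solve(D.T @ D + reg * np.eye(dim), D.T @ y)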
A.3.3 EXPERIMENTS
We present a preliminary experiment with RBO and ASEBO gradient adaptation in Figure A3. To be precise, the algorithms used are identical to Algorithm 3 except that in line 4, d(i) ← ESGRAD is replaced by d(i) ← RBO (yielding RBO-MAML) and d(i) ← ASEBO (yielding ASEBO-MAML) respectively.
Figure A3: RBO-MAML and ASEBO-MAML compared to ES-MAML.
On the left plot, we test for noise robustness on the Forward-Backward Swimmer MAML task, comparing standard ES-MAML (Algorithm 3) to RBO-MAML. To simulate noisy data, we randomly corrupt 25% of the queries fT(θ + σg) used to estimate the adaptation operator U(θ, T) with enormous additive noise. This is the same type of corruption used in (Choromanski et al., 2019c).
Interestingly, RBO does not appear to be more robust against noise than the standard MC estimator, which suggests that the original ES-MAML has some inherent robustness to noise.
On the right plot, we compare ASEBO-MAML to ES-MAML on the Goal-Velocity HalfCheetah task in the low-K setting. We found that when measured in iterations, ASEBO-MAML outperforms ES-MAML. However, ASEBO requires additional linear algebra operations and thus uses significantly more wall-clock time (not shown in plot) per iteration, so if measured by real time, then ES-MAML was more effective.
A.4 NAVIGATION-2D EXPLORATION TASK
Navigation-2D (Finn et al., 2017) is a classic environment where the agent must explore to adapt to the task. The agent is represented by a point on a 2D square, and at each time step, receives reward equal to its distance from a given target point on the square. Note that unlike the four corners and six circles tasks, the reward for Navigation-2D is dense. We visualize the differing exploration strategies learned by PG-MAML and ES-MAML in Figure A4. Notice that PG-MAML makes many tiny movements in multiple directions to ‘triangulate’ the target location using the differences in reward for different state-action pairs. On the other hand, ES-MAML learns a meta-policy such that each perturbation of the meta-policy causes the agent to move in a different direction (represented by red paths), so it can determine the target location from the total rewards of each path.
Figure A4: Comparing the exploration behavior of PG-MAML and ES-MAML on the Navigation-2D task. We use K = 20 queries for each algorithm.
A.5 PG-MAML RL BENCHMARKS
In Figure A5, we compare ES-MAML and PG-MAML on the Forward-Backward and Goal-Velocity tasks for HalfCheetah, Swimmer, Walker2d, and Ant, using the same values of K that were used in the original experiments of (Finn et al., 2017).
Figure A5: Comparisons between ES-MAML and PG-MAML using the queries K from (Finn et al., 2017).
A.6 REGRESSION AND SUPERVISED LEARNING
MAML has also been applied to supervised learning. We demonstrate ES-MAML on sine regression (Finn et al., 2017), where the task is to fit a sine curve f with unknown amplitude and phase given a set of K pairs (xi, f(xi)). The meta-policy must be able to learn that all of the tasks have a common periodic nature, so that it can correctly adapt to an unknown sine curve outside of the points xi.
For regression, the loss is the mean-squared error (MSE) between the adapted policy πθ(x) and the true curve f(x). Given data samples {(xi, f(xi))}_{i=1}^K, the empirical loss is L(θ) = (1/K) ∑_{i=1}^{K} (f(xi) − πθ(xi))². Note that unlike in reinforcement learning, we can exactly compute ∇L(θ); for deep networks, this is by automatic differentiation. Thus, we opt to use Tensorflow to compute the adaptation operator U(θ, T) in Algorithm 3. This is in accordance with the general principle that when gradients are available, it is more efficient to use the gradient than to approximate it by a zero-order method (Nesterov & Spokoiny, 2017).
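A minimal sketch of this adaptation step (the linear-in-features model, the feature map, and the NumPy implementation are our own illustrative assumptions; the paper uses a neural network whose exact gradient is obtained with Tensorflow autodiff):

import numpy as np

def mse_and_grad(theta, xs, ys, feats):
    # Empirical loss L(theta) = (1/K) * sum_i (f(x_i) - pi_theta(x_i))^2 for a
    # linear-in-features policy pi_theta(x) = <theta, feats(x)>, with its exact gradient.
    Phi = np.stack([feats(x) for x in xs])        # (K, d) feature matrix
    resid = ys - Phi @ theta
    return np.mean(resid ** 2), -2.0 * Phi.T @ resid / len(xs)

def adapt(theta, xs, ys, feats, alpha=0.01):
    # One-step adaptation U(theta, T): a single exact gradient step on the task loss.
    _, grad = mse_and_grad(theta, xs, ys, feats)
    return theta - alpha * grad

Here feats(x) could be, for instance, a small random Fourier feature map; the sketch is only meant to make the exact-gradient adaptation operator concrete.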
We show several results in Figure A6. The adaptation step size is α = 0.01, which is the same as in (Finn et al., 2017). For comparison, (Finn et al., 2017) reports that PG-MAML can obtain a loss of ≈ 0.5 after one adaptation step with K = 5, though it is not specified how many iterations the meta-policy was trained for. ES-MAML approaches the same level of performance, though the number of training iterations required is higher than for the RL tasks, and surprisingly high for what appears to be a simpler problem. This is likely again a reflection of the fact that for problems such as regression where the gradients are available, it is more efficient to use gradients.
As an aside, this leads to a related question of the correct interpretation of the query number K in the supervised setting. There is a distinction between obtaining a data sample (xi, f(xi)), and doing a computation (such as a gradient) using that sample. If the main bottleneck is collecting the data {(xi, f(xi))}, then we may be satisfied with any algorithm that performs any number of operations on the data, as long as it uses only K samples. On the other hand, in the (on-policy) RL setting, samples cannot typically be 're-used' to the same extent, because rollouts τ sampled with a given policy πθ follow an unknown distribution P(τ|θ) which reduces their usefulness away from θ. Thus, the corresponding notion to rollouts in the SL setting would be the number of backpropagations (for PG-MAML) or perturbations (for ES-MAML), but clearly these have different relative costs than doing simulations in RL.
Figure A6: The MSE of the adapted policy, for varying number of gradient steps and query number K. Runs are averaged across 3 seeds.
A.7 HYPERPARAMETERS AND SETUPS
A.7.1 ENVIRONMENTS
Unless otherwise explicitly stated, we default to K = 20 and horizon = 200 for all RL experiments. We also use the standard reward normalization in (Mania et al., 2018), and use a global state normalization (i.e. the same mean, standard deviation normalization values for MDP states are shared across workers).
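For concreteness, the global state normalization can be sketched as follows (the Welford-style running-moment bookkeeping is an assumed implementation detail, not something specified in the paper):

import numpy as np

class GlobalStateNormalizer:
    # Maintains a single running mean/std of MDP states; the same statistics
    # are shared across all workers when normalizing observations.
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)            # running sum of squared deviations

    def update(self, state):
        self.n += 1
        delta = state - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (state - self.mean)

    def normalize(self, state):
        std = np.sqrt(self.m2 / max(self.n - 1, 1)) + 1e-8
        return (state - self.mean) / std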
For the Ant environments (Goal-Position Ant, Forward-Backward Ant), there are significant differences in weighting on the auxiliary rewards such as control costs, contact costs, and survival rewards across different previous work (e.g. those costs are downweighted in (Finn et al., 2017) whereas the coefficients are vanilla Gym weightings in (Liu et al., 2019)). These auxiliary rewards can lead to local minima, such as the agent staying stationary to collect the survival bonus which may be confused with movement progress when presenting a training curve. To make sure the agent is explicitly performing the required task, we opted to remove such costs in our work and only present the main goal-distance cost and forward-movement reward respectively.
For the other environments, we used default weightings and rewards, since they do not change across previous works.
A.7.2 ES-MAML HYPERPARAMETERS
Let N be the total number of distinct tasks. We sample tasks without replacement, which is important if N ≤ 5, as each worker performs adaptations on all possible tasks. For standard ES-MAML (Algorithm 3), we used the following settings.
Setting                                                Value
(Total Workers, # Perturbations, # Current Evals)      (300, 150, 150)
(Train Set Size, Task Batch Size, Test Set Size)       (50, 5, 5) or (N, N, N)
Number of rollouts per parameter                       1
Number of Perturbations per worker                     1
Outer-Loop Precision Parameter                         0.1
Adaptation Precision Parameter                         0.1
Outer-Loop Step Size                                   0.01
Adaptation Step Size (α)                               0.05
Hidden Layer Width                                     32
ES Estimation Type                                     Forward-FD
Reward Normalization                                   True
State Normalization                                    True
For ES-MAML and PG-MAML, we took 3 seeded runs, using the default TRPO hyperparameters found in (Liu et al., 2019).
1. What are the main contributions and novel aspects introduced by the paper in the field of Model agnostic meta learning using ES?
2. What are the strengths and weaknesses of the proposed approach compared to prior works in ES, particularly regarding the choice of ES algorithm and its limitations?
3. How does the reviewer assess the relevance and adequacy of the references provided in the paper, especially in relation to SOTA in ES?
4. What are the concerns regarding the experimental setup and comparisons made in the paper, specifically about hyperparameter tuning, sample size, and tasks considered?
5. Are there any questions or suggestions regarding the presentation and clarity of the paper's content, such as the use of sigma, alpha, and other parameters?
Review
The paper proposes ES for the task of model-agnostic meta-learning. Instead of the gradient approximation, which requires computing a Hessian matrix, MC samples from a search distribution are used to estimate a search direction. The approach is validated on a number of experiments.
Unfortunately, I am unable to accept this paper for a number of reasons, mainly that the ES used is inferior and that the constant step size used can have a major effect on the experimental outcome.
Almost all proper ES literature with real working ES algorithms is missing, and ESGrad is more than 20 years behind the SOTA in the field. Since ES is central to the paper, an algorithm that would not even be considered a baseline at any conference in that field is difficult to accept.
The reason for this is that nowadays all ES use dynamic sample variances based on progress measures, e.g., cumulative step-size adaptation and two-point adaptation as the SOTA. Without this, it can be very difficult to find reasonable solutions.
Most important missing references from the ES-field in this context:
1. And most importantly, the original ES-based RL paper:
Heidrich-Meisner, Verena, and Christian Igel. "Neuroevolution strategies for episodic reinforcement learning." Journal of Algorithms 64.4 (2009): 152-168.
2. CMA-ES and NES
Hansen, N., Müller, S. D., & Koumoutsakos, P. (2003). Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary computation, 11(1), 1-18.
Krause, O., Arbonès, D. R., & Igel, C. (2016). CMA-ES with optimal covariance update and storage complexity. In Advances in Neural Information Processing Systems (pp. 370-378).
Wierstra, D., Schaul, T., Peters, J., & Schmidhuber, J. (2008, June). Natural evolution strategies. In 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence) (pp. 3381-3387). IEEE.
3. Review of SOTA in large-scale ES:
Varelas, K., Auger, A., Brockhoff, D., Hansen, N., ElHara, O. A., Semet, Y., ... & Barbaresco, F. (2018, September). A comparative study of large-scale variants of CMA-ES. In International Conference on Parallel Problem Solving from Nature (pp. 3-15). Springer, Cham.
4. Recent developments for noisy functions (also references other relevant algorithms with noise-handling)
Krause, O. (2019, July). Large-scale noise-resilient evolution-strategies. In Proceedings of the Genetic and Evolutionary Computation Conference (pp. 682-690). ACM.
Section 3.2
- Why should the same sigma be used in (7) as in (6)? Sigma, alpha, etc. should be learnable parameters learned by the outer ES.
- 3.3.2: you write below (1) that rollouts come from a distribution, i.e., are stochastic. How would you implement a hill-climber in the stochastic setting? E.g., consider the case when the rewards are heavy-tailed.
- Using a hill-climber goes completely against the SOTA in ES, which has shown repeatedly over the last 20 years that hill-climbing is inferior, especially in higher-dimensional search spaces (>100).
Experiments:
- I am not an expert on MAML, but I would not consider these to be different tasks, just different environments for the same task; i.e., a circular running strategy should be optimal for all environments, but when considering different tasks, we would consider different policies to be optimal.
- The experiments use the same hyperparameters for all variants. However, I am not sure this is a fair comparison. E.g., HC has much more spread over the search space than the other two methods for a given sigma, with subsequent sampling steps allowing a "too large" or "too small" spread to be corrected.
Since the graph of the objective function is flat in a large area of the search space, the additional exploration through stochasticity alone might explain the results of Figure 1. In this case, the result would be pretty artificial, because a real ES would adapt its step size.
- The same holds for the number of samples used by the outer ES (n, but named differently in the appendix?). The gradient-based approaches might require many more initial points with a smaller K, especially on the flat surfaces of the objectives.
- In Figure 3, middle image, why does the green curve appear to have decreasing performance after iteration 200?
- Figure 3 / Section 4.2: why do the three settings have different values for the number of iterations and K? Why does L-DPP only appear in the third task?
- Section 4.3 and Figure 4: why are there no L-PG and HH-ES? The only curve which is available for both algorithms has the same performance.
ICLR | Title
ES-MAML: Simple Hessian-Free Meta Learning
Abstract
We introduce ES-MAML, a new framework for solving the model agnostic meta learning (MAML) problem based on Evolution Strategies (ES). Existing algorithms for MAML are based on policy gradients, and incur significant difficulties when attempting to estimate second derivatives using backpropagation on stochastic policies. We show how ES can be applied to MAML to obtain an algorithm which avoids the problem of estimating second derivatives, and is also conceptually simple and easy to implement. Moreover, ES-MAML can handle new types of non-smooth adaptation operators, and other techniques for improving performance and estimation of ES methods become applicable. We show empirically that ES-MAML is competitive with existing methods and often yields better adaptation with fewer queries.
1 INTRODUCTION
Meta-learning is a paradigm in machine learning that aims to develop models and training algorithms which can quickly adapt to new tasks and data. Our focus in this paper is on meta-learning in reinforcement learning (RL), where data efficiency is of paramount importance because gathering new samples often requires costly simulations or interactions with the real world. A popular technique for RL meta-learning is Model Agnostic Meta Learning (MAML) (Finn et al., 2017; 2018), a model for training an agent which can quickly adapt to new and unknown tasks by performing one (or a few) gradient updates in the new environment. We provide a formal description of MAML in Section 2.
MAML has proven to be successful for many applications. However, implementing and running MAML continues to be challenging. One major complication is that the standard version of MAML requires estimating second derivatives of the RL reward function, which is difficult when using backpropagation on stochastic policies; indeed, the original implementation of MAML (Finn et al., 2017) did so incorrectly, which spurred the development of unbiased higher-order estimators (DiCE, (Foerster et al., 2018)) and further analysis of the credit assignment mechanism in MAML (Rothfuss et al., 2019). Another challenge arises from the high variance inherent in policy gradient methods, which can be ameliorated through control variates such as in T-MAML (Liu et al., 2019), through careful adaptive hyperparameter tuning (Behl et al., 2019; Antoniou et al., 2019) and learning rate annealing (Loshchilov & Hutter, 2017).
To avoid these issues, we propose an alternative approach to MAML based on Evolution Strategies (ES), as opposed to the policy gradient underlying previous MAML algorithms. We provide a detailed discussion of ES in Section 3.1. ES has several advantages:
1. Our zero-order formulation of ES-MAML (Section 3.2, Algorithm 3) does not require estimating any second derivatives. This dodges the many issues caused by estimating second derivatives with backpropagation on stochastic policies (see Section 2 for details).
2. ES is conceptually much simpler than policy gradients, which also translates to ease of implementation. It does not use backpropagation, so it can be run on CPUs only.
3. ES is highly flexible with different adaptation operators (Section 3.3).
4. ES allows us to use deterministic policies, which can be safer when doing adaptation (Section 4.3). ES is also capable of learning linear and other compact policies (Section 4.2).
On the point (4), a feature of ES algorithms is that exploration takes place in the parameter space. Whereas policy gradient methods are primarily motivated by interactions with the environment through randomized actions, ES is driven by optimization in high-dimensional parameter spaces with an expensive querying model. In the context of MAML, the notions of “exploration” and “task identification” have thus been shifted to the parameter space instead of the action space. This distinction plays a key role in the stability of the algorithm. One immediate implication is that we can use deterministic policies, unlike policy gradients which is based on stochastic policies. Another difference is that ES uses only the total reward and not the individual state-action pairs within each episode. While this may appear to be a weakness, since less information is being used, we find in practice that it seems to lead to more stable training profiles.
This paper is organized as follows. In Section 2, we give a formal definition of MAML, and discuss related works. In Section 3, we introduce Evolutionary Strategies and show how ES can be applied to create a new framework for MAML. In Section 4, we present numerical experiments, highlighting the topics of exploration (Section 4.1), the utility of compact architectures (Section 4.2), the stability of deterministic policies (Section 4.3), and comparisons against existing MAML algorithms in the few-shot regime (Section 4.4). Additional material can be found in the Appendix.
2 MODEL AGNOSTIC META LEARNING IN RL
We first discuss the original formulation of MAML (Finn et al., 2017). Let T be a set of reinforcement learning tasks with common state and action spaces S,A, and P(T ) a distribution over T . In the standard MAML setting, each task Ti ∈ T has an associated Markov Decision Process (MDP) with transition distribution qi(st+1|st, at), an episode length H , and a reward function RTi which maps a trajectory τ = (s0, a1, ..., aH−1, sH) to the total reward R(τ). A stochastic policy is a function π : S → P(A) which maps states to probability distributions over the action space. A deterministic policy is a function π : S → A. Policies are typically encoded by a neural network with parameters θ, and we often refer to the policy πθ simply by θ.
The MAML problem is to find the so-called MAML point (called also a meta-policy), which is a policy θ∗ that can be ‘adapted’ quickly to solve an unknown task T ∈ T by taking a (few)1 policy gradient steps with respect to T . The optimization problem to be solved in training (in its one-shot version) is thus of the form:
maxθ J(θ) := ET∼P(T)[Eτ′∼PT(τ′|θ′)[RT(τ′)]],   (1)
where: θ′ = U(θ, T ) = θ + α∇θEτ∼PT (τ |θ)[RT (τ)] is called the adapted policy for a step size α > 0 and PT (·|η) is a distribution over trajectories given task T ∈ T and conditioned on the policy parameterized by η.
Standard MAML approaches are based on the following expression for the gradient of the MAML objective function (1) to conduct training:
∇θJ(θ) = ET∼P(T)[Eτ′∼PT(τ′|θ′)[∇θ′ logPT(τ′|θ′)RT(τ′)∇θU(θ, T)]].   (2)
We collectively refer to algorithms based on computing (2) using policy gradients as PG-MAML.
1We adopt the common convention of defining the adaptation operator with a single gradient step, to simplify notation. It can be extended to multiple steps.
Since the adaptation operator U(θ, T ) contains the policy gradient ∇θEτ∼PT (τ |θ)[R(τ)], its own gradient∇θU(θ, T ) is second-order in θ:
∇θU = I + α ∫ PT(τ|θ)∇²θ log πθ(τ)RT(τ)dτ + α ∫ PT(τ|θ)∇θ log πθ(τ)∇θ log πθ(τ)ᵀRT(τ)dτ.   (3)
Correctly computing the gradient (2) with the term (3) using automatic differentiation is known to be tricky. Multiple authors (Foerster et al., 2018; Rothfuss et al., 2019; Liu et al., 2019) have pointed out that the original implementation of MAML incorrectly estimates the term (3), which inadvertently causes the training to lose 'pre-adaptation credit assignment'. Moreover, even when correctly implemented, the variance when estimating (3) can be extremely high, which impedes training. To improve on this, extensions to the original MAML include ProMP (Rothfuss et al., 2019), which introduces a new low-variance curvature (LVC) estimator for the Hessian, and T-MAML (Liu et al., 2019), which adds control variates to reduce the variance of the unbiased DiCE estimator (Foerster et al., 2018). However, these are not without their drawbacks: the proposed solutions are complicated, the variance of the Hessian estimate remains problematic, and LVC introduces unknown estimator bias.
Another issue that arises in PG-MAML is that policies are necessarily stochastic. However, randomized actions can lead to risky exploration behavior when computing the adaptation, especially for robotics applications where the collection of tasks may involve differing system dynamics as opposed to only differing rewards (Yang et al., 2019). We explore this further in Section 4.3.
These issues: the difficulty of estimating the Hessian term (3), the typically high variance of∇θJ(θ) for policy gradient algorithms in general, and the unsuitability of stochastic policies in some domains, lead us to the proposed method ES-MAML in Section 3.
Aside from policy gradients, there have also been biologically-inspired algorithms for MAML, based on concepts such as the Baldwin effect (Fernando et al., 2018). However, we note that despite the similar naming, methods such as 'Evolvability ES' (Gajewski et al., 2019) bear little resemblance to our proposed ES-MAML. The problem solved by our algorithm is the standard MAML, whereas (Gajewski et al., 2019) aims to maximize loosely related notions of the diversity of behavioral characteristics. Moreover, ES-MAML and the extensions we consider are all built from rigorously defined notions such as smoothings and approximations, as stated below.
3 ES-MAML ALGORITHMS
Formulating MAML with ES allows us to apply to MAML numerous techniques originally developed for enhancing ES. We aim to improve both phases of the MAML algorithm: the meta-learning training algorithm, and the efficiency of the adaptation operator.
3.1 EVOLUTION STRATEGIES METHODS (ES)
Evolution Strategies (ES) (Wierstra et al., 2008; 2014), which recently became popular for RL (Salimans et al., 2017), rely on optimizing the smoothing of the blackbox function f : Rd → R, which takes as input parameters θ ∈ Rd of the policy and outputs the total discounted (expected) reward obtained by an agent applying that policy in the given environment. Instead of optimizing the function f directly, we optimize a smoothed objective. We define the Gaussian smoothing of f as f̃σ(θ) = Eg∼N(0,Id)[f(θ + σg)]. The gradient of this smoothed objective, sometimes called an ES-gradient, is given as (see (Nesterov & Spokoiny, 2017)):
∇θf̃σ(θ) = (1/σ) Eg∼N(0,Id)[f(θ + σg)g].   (4)
Note that the gradient can be approximated via Monte Carlo (MC) samples:
1 ESGrad(f, θ, n, σ)
  inputs: function f, policy θ, number of perturbations n, precision σ
2 Sample n i.i.d. N(0, I) vectors g1, . . . , gn;
3 return (1/(nσ)) ∑_{i=1}^{n} f(θ + σgi)gi;
Algorithm 1: Monte Carlo ES Gradient
In ES literature the above algorithm is often modified by adding control variates to equation 4 to obtain other unbiased estimators with reduced variance. The forward finite difference (Forward-FD) estimator (Choromanski et al., 2018) is given by subtracting the current policy value f(θ), yielding ∇θf̃σ(θ) = (1/σ) Eg∼N(0,Id)[(f(θ + σg) − f(θ))g]. The antithetic estimator (Nesterov & Spokoiny, 2017; Mania et al., 2018) is given by the symmetric difference ∇θf̃σ(θ) = (1/(2σ)) Eg∼N(0,Id)[(f(θ + σg) − f(θ − σg))g]. Notice that the variance of the Forward-FD and antithetic estimators is translation-invariant with respect to f. In practice, the Forward-FD or antithetic estimator is usually preferred over the basic version expressed in equation 4.
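The three estimators above can be written compactly as follows (a sketch under the assumption of a NumPy implementation; the variant is selected by a flag):

import numpy as np

def es_grad(f, theta, n, sigma, kind="forward_fd"):
    # Monte Carlo ES gradient of the Gaussian smoothing of f at theta.
    # kind: "vanilla" (equation 4), "forward_fd", or "antithetic".
    d = theta.shape[0]
    grad = np.zeros(d)
    f0 = f(theta) if kind == "forward_fd" else None
    for _ in range(n):
        g = np.random.randn(d)
        if kind == "vanilla":
            grad += f(theta + sigma * g) * g
        elif kind == "forward_fd":
            grad += (f(theta + sigma * g) - f0) * g
        else:  # antithetic: symmetric difference, two evaluations per perturbation
            grad += 0.5 * (f(theta + sigma * g) - f(theta - sigma * g)) * g
    return grad / (n * sigma)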
In the next sections we will refer to Algorithm 1 for computing the gradient though we emphasize that there are several other recently developed variants of computing ES-gradients as well as applying them for optimization. We describe some of these variants in Section 3.3 and appendix A.3. A key feature of ES-MAML is that we can directly make use of new enhancements of ES.
3.2 META-TRAINING MAML WITH ES
To formulate MAML in the ES framework, we take a more abstract viewpoint. For each task T ∈ T, let fT(θ) be the (expected) cumulative reward of the policy θ. We treat fT as a blackbox, and make no assumptions on its structure (so the task need not even be an MDP, and fT may be nonsmooth). The MAML problem is then
max θ J(θ) := ET∼P(T )fT (U(θ, T )). (5)
As argued in (Liu et al., 2019; Rothfuss et al., 2019) (see also Section 2), a major challenge for policy gradient MAML is estimating the Hessian, which is both conceptually subtle and difficult to correctly implement using automatic differentiation. The algorithm we propose obviates the need to calculate any second derivatives, and thus avoids this issue.
Suppose that we can evaluate (or approximate) fT (θ) and U(θ, T ), but fT and U(·, T ) may be nonsmooth or their gradients may be intractable. We consider the Gaussian smoothing J̃σ of the MAML reward (5), and optimize J̃σ using ES methods. The gradient∇J̃σ(θ) is given by
∇J̃σ(θ) = ET∼P(T), g∼N(0,I) [ (1/σ) fT(U(θ + σg, T)) g ]   (6)
and can be estimated by jointly sampling over (T,g) and evaluating fT (U(θ + σg, T )). This algorithm is specified in Algorithm 2 box, and we refer to it as (zero-order) ES-MAML.
Data: initial policy θ0, meta step size β
1 for t = 0, 1, . . . do
2   Sample n tasks T1, . . . , Tn and i.i.d. vectors g1, . . . , gn ∼ N(0, I);
3   foreach (Ti, gi) do
4     vi ← fTi(U(θt + σgi, Ti))
5   end
6   θt+1 ← θt + (β/(σn)) ∑_{i=1}^{n} vi gi
7 end
Algorithm 2: Zero-Order ES-MAML (general adaptation operator U(·, T))

Data: initial policy θ0, adaptation step size α, meta step size β, number of queries K
1 for t = 0, 1, . . . do
2   Sample n tasks T1, . . . , Tn and i.i.d. vectors g1, . . . , gn ∼ N(0, I);
3   foreach (Ti, gi) do
4     d(i) ← ESGRAD(fTi, θt + σgi, K, σ);
5     θt(i) ← θt + σgi + αd(i);
6     vi ← fTi(θt(i));
7   end
8   θt+1 ← θt + (β/(σn)) ∑_{i=1}^{n} vi gi;
9 end
Algorithm 3: Zero-Order ES-MAML with ES-Gradient Adaptation
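A compact sketch of one meta-update of Algorithm 3 follows (Python/NumPy is our assumption, and the small es_grad helper restates Algorithm 1; in practice the loop over tasks is distributed across workers):

import numpy as np

def es_grad(f, theta, n, sigma):
    # Vanilla MC ES gradient (Algorithm 1).
    gs = np.random.randn(n, theta.shape[0])
    vals = np.array([f(theta + sigma * g) for g in gs])
    return gs.T @ vals / (n * sigma)

def es_maml_step(theta, sample_task, n, K, sigma, alpha, beta):
    # One meta-update of zero-order ES-MAML (Algorithm 3).
    # sample_task() returns a blackbox task reward f_T mapping parameters to total reward.
    update = np.zeros_like(theta)
    for _ in range(n):
        f_T = sample_task()                                 # task T_i
        g = np.random.randn(theta.shape[0])                 # meta-perturbation g_i
        perturbed = theta + sigma * g
        adapted = perturbed + alpha * es_grad(f_T, perturbed, K, sigma)  # inner ES adaptation with K queries
        update += f_T(adapted) * g                          # v_i * g_i
    return theta + beta / (sigma * n) * update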
The standard adaptation operator U(·, T ) is the one-step task gradient. Since fT is permitted to be nonsmooth in our setting, we use the adaptation operator U(θ, T ) = θ + α∇f̃Tσ (θ) acting on its smoothing. Expanding the definition of J̃σ , the gradient of the smoothed MAML is then given by
∇J̃σ(θ) = (1/σ) ET∼P(T), g∼N(0,I) [ fT( θ + σg + (α/σ) Eh∼N(0,I)[fT(θ + σg + σh)h] ) g ].   (7)
This leads to the algorithm that we specify in Algorithm 3, where the adaptation operator U(·, T ) is itself estimated using the ES gradient in the inner loop.
We can also derive an algorithm analogous to PG-MAML by applying a first-order method to the MAML reward ET∼P(T )f̃T (θ + α∇f̃T (θ)) directly, without smoothing. The gradient is given by
∇J(θ) = ET∼P(T )∇f̃T (θ + α∇f̃T (θ))(I+ α∇2f̃T (θ)), (8)
which corresponds to equation (3) in (Liu et al., 2019) when expressed in terms of policy gradients. Every term in this expression has a simple Monte Carlo estimator (see Algorithm 4 in the appendix for the MC Hessian estimator). We discuss this algorithm in greater detail in Appendix A.1. This formulation can be viewed as the “MAML of the smoothing”, compared to the “smoothing of the MAML” which is the basis for Algorithm 3. It is the additional smoothing present in equation 6 which eliminates the gradient of U(·, T ) (and hence, the Hessian of fT ). Just as with the Hessian estimation in the original PG-MAML, we find empirically that the MC estimator of the Hessian (Algorithm 4) has high variance, making it often harmful in training. We present some comparisons between Algorithm 3 and Algorithm 5, with and without the Hessian term, in Appendix A.1.2.
Note that when U(·, T ) is estimated, such as in Algorithm 3, the resulting estimator for∇J̃σ will in general be biased. This is similar to the estimator bias which occurs in PG-MAML because we do not have access to the true adapted trajectory distribution. We discuss this further in Appendix A.2.
3.3 IMPROVING THE ADAPTATION OPERATOR WITH ES
Algorithm 2 allows for great flexibility in choosing new adaptation operators. The simplest extension is to modify the ES gradient step: we can draw on general techniques for improving the ES gradient estimator, some of which are described in Appendix A.3. Some other methods are explored below.
3.3.1 IMPROVED EXPLORATION
Instead of using i.i.d Gaussian vectors to estimate the ES gradient in U(·, T ), we consider samples constructed according to Determinantal Point Processes (DPP). DPP sampling (Kulesza & Taskar, 2012; Wachinger & Golland, 2015) is a method of selecting a subset of samples so as to maximize the ‘diversity’ of the subset. It has been applied to ES to select perturbations gi so that the gradient estimator has lower variance (Choromanski et al., 2019a). The sampling matrix determining DPP sampling can also be data-dependent and use information from the meta-training stage to construct a learned kernel with better properties for the adaptation phase. In the experimental section we show that DPP-ES can help in improving adaptation in MAML.
3.3.2 HILL CLIMBING AND POPULATION SEARCH
Nondifferentiable operators U(·, T ) can be also used in Algorithm 2. One particularly interesting example is the local search operator given by U(θ, T ) = argmax{fT (θ′) : ‖θ′ − θ‖ ≤ R}, where R > 0 is the search radius. That is, U(θ, T ) selects the best policy for task T which is in a ‘neighborhood’ of θ. For simplicity, we took the search neighborhood to be the ball B(θ,R) here, but we may also use more general neighborhoods of θ. In general, exactly solving for the maximizer of fT over B(θ,R) is intractable, but local search can often be well approximated by a hill climbing algorithm. Hill climbing creates a population of candidate policies by perturbing the best observed policy (which is initialized to θ), evaluates the reward fT for each candidate, and then updates the best observed policy. This is repeated for several iterations. A key property of this search method is that the progress is monotonic, so the reward of the returned policy U(θ, T ) will always improve over θ. This does not hold for the stochastic gradient operator, and appears to be beneficial on some difficult problems (see Section 4.1). It has been claimed that hill climbing and other genetic algorithms (Moriarty et al., 1999) are competitive with gradient-based methods for solving difficult RL tasks (Such et al., 2017; Risi & Stanley, 2019). Another stochastic algorithm approximating local search is CMA-ES (Hansen et al., 2003; Igel, 2003; Krause et al., 2016), which performs more sophisticated search by adapting the covariance matrix of the perturbations.
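The hill-climbing adaptation operator can be sketched as follows (the perturbation radius, population size, and number of iterations are illustrative assumptions; together they determine the number of reward queries spent on adaptation, playing the role of K):

import numpy as np

def hill_climb_adapt(f_T, theta, iters=4, pop=5, radius=0.05):
    # Approximate local-search adaptation U(theta, T): repeatedly perturb the best
    # policy found so far and keep a candidate only if the task reward improves,
    # so the returned policy is never worse than theta (monotonic progress).
    best, best_val = theta.copy(), f_T(theta)
    for _ in range(iters):
        for _ in range(pop):
            cand = best + radius * np.random.randn(*best.shape)
            val = f_T(cand)
            if val > best_val:
                best, best_val = cand, val
    return best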
4 EXPERIMENTS
The performance of MAML algorithms can be evaluated in several ways. One important measure is the performance of the final meta-policy: whether the algorithm can consistently produce meta-policies with better adaptation. In the RL setting, the adaptation of the meta-policy is also a function of the number K of queries used: that is, the number of rollouts used by the adaptation operator U(·, T). The meta-learning goal of data efficiency corresponds to adapting with low K. The speed of the meta-training is also important, and can be measured in several ways: the number of meta-policy updates, wall-clock time, and the number of rollouts used for meta-training. In this section, we present experiments which evaluate various aspects of ES-MAML and PG-MAML in terms of data efficiency (K) and meta-training time. Further details of the environments and hyperparameters are given in Appendix A.7.
In the RL setting, the amount of information used drastically decreases if ES methods are applied in comparison to the PG setting. To be precise, ES uses only the cumulative reward over an episode, whereas policy gradients use every state-action pair. Intuitively, we may thus expect that ES should have worse sample complexity because it uses less information for the same number of rollouts. However, it seems that in practice ES often matches or even exceeds policy gradient approaches (Salimans et al., 2017; Mania et al., 2018). Several explanations have been proposed: in the PG case, especially with algorithms such as PPO, the network must optimize multiple additional surrogate objectives such as entropy bonuses and value functions, and is sensitive to hyperparameters such as the TD-step number. Furthermore, it has been argued that ES is more robust against delayed rewards, action infrequency, and long time horizons (Salimans et al., 2017). These advantages of ES in traditional RL also transfer to MAML, as we show empirically in this section. ES may lead to additional advantages (even if the number of rollouts needed in training is comparable with PG) in terms of wall-clock time, because it does not require backpropagation and can be parallelized over CPUs.
4.1 EXPLORATION: TARGET ENVIRONMENTS
In this section, we present two experiments on environments with very sparse rewards where the meta-policy must exhibit exploratory behavior to determine the correct adaptation.
The four corners benchmark was introduced in (Rothfuss et al., 2019) to demonstrate the weaknesses of exploration in PG-MAML. An agent on a 2D square receives reward for moving towards a selected corner of the square, but only observes rewards once it is sufficiently close to the target corner, making the reward sparse. An effective exploration strategy for this set of tasks is for the meta-policy θ∗ to travel in circular trajectories to observe which corner produces rewards; however, for a single policy to produce this exploration behavior is difficult. In Figure 1, we demonstrate the behavior of ES-MAML on the four corners problem. When K = 20, the same number of rollouts for adaptation as used in (Rothfuss et al., 2019), the basic version of Algorithm 3 is able to correctly explore and adapt to the task by finding the target corner. Moreover, it does not require any modifications to encourage exploration, unlike PG-MAML. We further used K = 10, 5, which caused the performance to drop. For better performance in this low-information environment, we experimented with two different adaptation operators U(·, T ) in Algorithm 2, which are HC (hill climbing) and DPP-ES. The standard ES gradient is denoted MC.
Furthermore, ES-MAML is not limited to “single goal” exploration. We created a more difficult task, six circles, where the agent continuously accrues negative rewards until it reaches six target points to “deactivate” them. Solving this task requires the agent to explore in circular trajectories, similar to the trajectory used by PG-MAML on the four corners task. We visualize the behavior in Figure 2. Observe that ES-MAML with the HC operator is able to develop a strategy to explore the target locations.
Additional examples on the classic Navigation-2D task are presented in Appendix A.4, highlighting the differences in exploration behavior between PG-MAML and ES-MAML.
4.2 GOOD ADAPTATION WITH COMPACT ARCHITECTURES
One of the main benefits of ES is its ability to train compact linear policies, which can outperform policies with hidden layers. We demonstrate this on several benchmark MAML problems in the HalfCheetah and Ant environments in Figure 3. In contrast, (Finn & Levine, 2018) suggested, both empirically and theoretically, that PG-MAML trained with SGD benefits from deeper architectures. We demonstrate that on the Forward-Backward and Goal-Velocity MAML benchmarks, ES-MAML is consistently able to train successful linear policies faster than deep networks. We also show that, for the Forward-Backward Ant problem, ES-MAML with the new HC operator is the most performant. Using more compact policies also directly speeds up ES-MAML, since fewer perturbations are needed for gradient estimation.
4.3 DETERMINISTIC POLICIES
We find that deterministic policies often produce more stable behaviors than the stochastic ones that are required for PG, where randomized actions in unstable environments can lead to catastrophic outcomes. In PG, this is often mitigated by reducing the entropy bonus, but this has an undesirable side effect of reducing exploration. In contrast, ES-MAML explores in parameter space, which mitigates this issue. To demonstrate this, we use the “Biased-Sensor CartPole” environment from (Yang et al., 2019). This environment has unstable dynamics and sparse rewards, so it requires exploration but is also risky. We see in Figure 4 that ES-MAML is able to stably maintain the maximum reward (500).
We also include results in Figure 4 from two other environments, Swimmer and Walker2d, for which it is known that PG is surprisingly unstable, and ES yields better training (Mania et al., 2018). Notice that we again find linear policies (L) outperforming policies with one (H) or two (HH) hidden layers.
4.4 LOW-K BENCHMARKS
For real-world applications, we may be constrained to use fewer queries K than has typically been demonstrated in previous MAML works. Hence, it is of interest to compare how ES-MAML compares to PG-MAML for adapting with very low K.
One possible concern is that low K might harm ES in particular because it uses only the cumulative rewards; if for example K = 5, then the ES adaptation gradient can make use of only 5 values. In comparison, PG-MAML uses K · H state-action pairs, so for K = 5, H = 200, PG-MAML still has 1000 pieces of information available.
However, we find experimentally that the standard ES-MAML (Algorithm 3) remains competitive with PG-MAML even in the low-K setting. In Figure 5, we compare ES-MAML and PG-MAML on the Forward-Backward and Goal-Velocity tasks across four environments (HalfCheetah, Swimmer, Walker2d, Ant) and two model architectures. While PG-MAML can generally outperform ES-MAML on the Goal-Velocity task, ES-MAML is similar or better on the Forward-Backward task. Moreover, we observed that for low K, PG-MAML can be highly unstable (note the wide error bars), with some trajectories failing catastrophically, whereas ES-MAML is relatively stable. This is an important consideration in real applications, where the risk of catastrophic failure is undesirable.
5 CONCLUSION
We have presented a new framework for MAML based on ES algorithms. The ES-MAML approach avoids the problems of Hessian estimation which necessitated complicated alterations in PG-MAML and is straightforward to implement. ES-MAML is flexible in the choice of adaptation operators, and can be augmented with general improvements to ES, along with more exotic adaptation operators. In particular, ES-MAML can be paired with nonsmooth adaptation operators such as hill climbing, which we found empirically to yield better exploratory behavior and better performance on sparse-reward environments. ES-MAML performs well with linear or compact deterministic policies, which is an advantage when adapting if the state dynamics are possibly unstable.
A.1 FIRST-ORDER ES-MAML
A.1.1 ALGORITHM
Suppose that we first apply Gaussian smoothing to the task rewards and then form the MAML problem, so we have J(θ) = ET∼P(T )f̃T (U(θ, T )). The function J is then itself differentiable, and we can directly apply first-order methods to it. The classical case where U(θ, T ) = θ + α∇f̃T (θ) yields the gradient
∇J(θ) = ET∼P(T) ∇f̃T(θ + α∇f̃T(θ)) (I + α∇²f̃T(θ)).   (9)
This is analogous to formulas obtained in, e.g., (Liu et al., 2019) for the policy gradient MAML. We can then approximate this gradient as an input to stochastic first-order methods. An example with standard SGD is shown in Algorithm 5.
1 ESHess(f, θ, n, σ)
  inputs: function f, policy θ, number of perturbations n, precision σ
2 Sample i.i.d. N(0, I) vectors g1, . . . , gn;
3 v ← (1/n) ∑_{i=1}^{n} f(θ + σgi);
4 H0 ← (1/n) ∑_{i=1}^{n} f(θ + σgi) gi giᵀ;
5 return (1/σ²)(H0 − v · I);
Algorithm 4: Monte Carlo ES Hessian

Data: initial policy θ0, adaptation step size α, meta step size β, number of queries K
1 for t = 0, 1, . . . do
2   Sample n tasks T1, . . . , Tn;
3   foreach Ti do
4     d1(i) ← ESGRAD(fTi, θt, K, σ);
5     H(i) ← ESHESS(fTi, θt, K, σ);
6     θt(i) ← θt + α · d1(i);
7     d2(i) ← ESGRAD(fTi, θt(i), K, σ);
8   end
9   θt+1 ← θt + (β/n) ∑_{i=1}^{n} (I + αH(i)) d2(i);
10 end
Algorithm 5: First Order ES-MAML
A central problem, as discussed in (Rothfuss et al., 2019; Liu et al., 2019), is the estimation of ∇²f̃T(θ). However, a simple expression exists for this object in the ES setting; it can be shown that
∇²f̃T(θ) = (1/σ²) (Eh∼N(0,I)[fT(θ + σh)hhᵀ] − f̃T(θ)I).   (10)
Note that for the vector h, hT is the transpose (and unrelated to tasks T ). A basic MC estimator is shown in Algorithm 4. Given an independent estimator for ∇f̃T (θ + α∇f̃T (θ)), we can then take the product to obtain an estimator for∇J .
A.1.2 EXPERIMENTS WITH FIRST-ORDER ES-MAML
Unlike zero-order ES-MAML (Algorithm 3), the first-order ES-MAML explicitly builds an approximation of the Hessian of fT . Given the literature on PG-MAML, we expect that estimating the Hessian ∇2f̃T (θ) with Algorithm 4 without any control variates may have high variance. We compare two variants of first-order ES-MAML:
1. The full version (FO-Hessian) specified in Algorithm 5.
2. The ‘first-order approximation’ (FO-NoHessian) which ignores the term I+α∇2f̃T (θ) and approximates the MAML gradient as ET∼P(T )∇f̃T (θ + α∇f̃T (θ)). This is equivalent to setting H(i) = 0 in line 5 of Algorithm 5.
The results on the four corners exploration problem (Section 4.1) and the Forward-Backward Ant, using Linear policies, are shown in Figure A1. On Forward-Backward Ant, FO-NoHessian actually outperformed FO-Hessian, so the inclusion of the Hessian term slowed convergence. On the four corners task, both FO-Hessian and FO-NoHessian have large error bars, and FO-Hessian slightly outperforms FO-NoHessian.
Figure A1: Comparisons between the FO-Hessian and FO-NoHessian variants of Algorithm 5.
There is conflicting evidence as to whether the same phenomenon occurs with PG-MAML; (Finn et al., 2017, §5.2) found that on supervised learning MAML, omitting Hessian terms is competitive but slightly worse than the full PG-MAML, and does not report comparisons with and without the Hessian on RL MAML. (Rothfuss et al., 2019; Liu et al., 2019) argue for the importance of the second-order terms in proper credit assignment, but use heavily modified estimators (LVC, control variates; see Section 2) in their experiments, so the performance is not directly comparable to the 'naive' estimator in Algorithm 4. Our interpretation is that Algorithm 4 has high variance, making the Hessian estimates inaccurate, which can slow training on relatively 'easier' tasks like Forward-Backward walking but possibly increase the exploration on four corners.
We also compare FO-NoHessian against Algorithm 3 on Forward-Backward HalfCheetah and Ant in Figure A2. In this experiment, the two methods ran on servers with different number of workers available, so we measure the score by the total number of rollouts. We found that FO-NoHessian was slightly faster than Algorithm 3 when measured by rollouts on Ant, but FO-NoHessian had notably poor performance when the number of queries was low (K = 5) on HalfCheetah, and failed to reach similar scores as the others even after running for many more rollouts.
Figure A2: Comparisons between FO-NoHessian and Algorithm 3, by rollouts.
A.2 HANDLING ESTIMATOR BIAS
Since the adapted policy U(θ, T ) generally cannot be evaluated exactly, we cannot easily obtain unbiased estimates of fT (U(θ, T )). This problem arises for both PG-MAML and ES-MAML.
We consider PG-MAML first as an example. In PG-MAML, the adaptation operator is U(θ, T ) = θ+α∇θEτ∼PT (τ |θ)[R(τ)]. In general, we can only obtain an estimate of∇θEτ∼PT (τ |θ)[R(τ)] and not its exact value. However, the MAML gradient is given by
∇_θ J(θ) = E_{T∼P(T)} [ E_{τ′∼P_T(τ′|θ′)} [ ∇_{θ′} log P_T(τ′|θ′) R(τ′) ∇_θ U(θ, T) ] ],   (11)

which requires exact sampling from the adapted trajectories τ′ ∼ P_T(τ′ | U(θ, T)). Since this is a nonlinear function of U(θ, T), we cannot obtain unbiased estimates of ∇J(θ) by sampling τ′ generated by an estimate of U(θ, T).
In the case of ES-MAML, the adaptation operator is U(θ, T) = θ + α∇f̃(θ, T) = E_h[u(θ, T; h)] for h ∼ N(0, I), where u(θ, T; h) = θ + (α/σ) f_T(θ + σh) h. Clearly, f_T(u(θ, T; h)) is not an unbiased estimator of f_T(U(θ, T)).
We may question whether using an unbiased estimator of fT (U(θ, T )) is likely to improve performance. One natural strategy is to reformulate the objective function so as to make the desired estimator unbiased. This happens to be the case for the algorithm E-MAML (Al-Shedivat et al., 2018), which treats the adaptation operator as an explicit function of K sampled trajectories and “moves the expectation outside”. That is, we now have an adaptation operator U(θ, T ; τ1, . . . , τK), and the objective function becomes
E_T [ E_{τ_1,...,τ_K ∼ P_T(τ|θ)} f_T(U(θ, T; τ_1, . . . , τ_K)) ].   (12)
An unbiased estimator for the E-MAML gradient can be obtained by sampling only from τ ∼ PT (τ |θ) (Al-Shedivat et al., 2018). However, it has been argued that by doing so, E-MAML does not properly assign credit to the pre-adaptation policy (Rothfuss et al., 2019). Thus, this particular mathematical strategy seems to be disadvantageous for RL.
The problem of finding estimators for function-of-expectations f(EX) is difficult and while general unbiased estimation methods exist (Blanchet et al., 2017), they are often complicated and suffer from high variance. In the context of MAML, ProMP compares the low variance curvature (LVC) estimator (Rothfuss et al., 2019), which is biased, against the unbiased DiCE estimator (Foerster et al., 2018), for the Hessian term in the MAML gradient, and found that the lower variance of LVC produced better performance than DiCE. Alternatively, control variates can be used to reduce the variance of the DiCE estimator, which is the approach followed in (Liu et al., 2019).
In the ES framework, the problem can also be formulated to avoid exactly evaluating U(·, T), and hence circumvents the question of estimator bias. We observe an interesting connection between MAML and the stochastic composition problem. Let us define u_h(θ, T) = u(θ, T; h) and f_g^T(θ) = f_T(θ + σg). For a given task T, the MAML reward is given by

f̃_T(U(θ, T)) = f̃_T[ E_h u_h(θ, T) ] = E_g f_g^T( E_h u_h(θ, T) ).   (13)

This is a two-layer nested stochastic composition problem with outer function f̃_T = E_g f_g^T and inner function U(·, T) = E_h u_h(·, T). An accelerated algorithm (ASC-PG) was developed in (Wang et al., 2017) for this class of problems. While neither f_g^T nor u_h(·, T) is smooth, which is assumed in (Wang et al., 2017), we can verify that the crucial content of the assumptions holds:
1. E_h u_h(θ, T) = U(θ, T).
2. We can define two functions

ζ_g^T(θ) = (1/σ) f_g^T(θ) g,   ξ_h^T(θ) = I + (α/σ²) ( f_h^T(θ) h hᵀ − f_h^T(θ) I ),

such that for any θ_1, θ_2,

E_{g,h}[ ξ_h^T(θ_1) ζ_g^T(θ_2) ] = J_U(θ_1, T) ∇f̃_T(θ_2),

where J_U denotes the Jacobian of U(·, T), and g, h are independent vectors sampled from N(0, I). This follows immediately from equation 4 and equation 10.
The ASC-PG algorithm does not immediately extend to the full MAML problem, as upon taking an outer expectation over T , the MAML reward J(θ) = ETEgfTg (Ehuh(θ, T )) is no longer a stochastic composition of the required form. In particular, there are conceptual difficulties when the number of tasks in T is infinite. However, it can be used to solve the MAML problem for each task within a consensus framework, such as consensus ADMM (Hong et al., 2016).
A.3 EXTENSIONS OF ES
In this section, we discuss several general techniques for improving the basic ES gradient estimator (Algorithm 1). These can be applied both to the ES gradient of the meta-training (the ‘outer loop’ of Algorithm 3), and more interestingly, to the adaptation operator itself. That is, given U(θ, T) = θ + α∇f̃_σ^T(θ), we replace the estimation of U by ESGRAD on line 4 of Algorithm 3 with an improved estimator of ∇f̃_σ^T(θ), which may even depend on data collected during the meta-training stage. Many techniques exist for reducing the variance of the estimator, such as Quasi-Monte Carlo sampling (Choromanski et al., 2018). Aside from variance reduction, there are also methods with special properties.
A.3.1 ACTIVE SUBSPACES
Active Subspaces is a method for finding a low-dimensional subspace in which the contribution of the gradient is maximized. Conceptually, the goal is to find, and update on-the-fly, a low-rank subspace L so that the projection ∇f_T(θ)_L of ∇f_T(θ) onto L is maximized, and to apply ∇f_T(θ)_L instead of ∇f_T(θ). This should be done in such a way that ∇f_T(θ) never needs to be computed explicitly. Optimizing in lower-dimensional subspaces can be computationally more efficient and can be seen as an example of guided ES methods, where the algorithm is guided to explore the space anisotropically, leveraging knowledge about the optimization landscape gained in previous steps. In the context of RL, the active subspace method ASEBO (Choromanski et al., 2019b) was successfully applied to speed up policy training algorithms. This strategy can also be made data-dependent in the MAML context, by learning an optimal subspace using data from the meta-training stage and sampling from that subspace in the adaptation step.
A.3.2 REGRESSION-BASED OPTIMIZATION
Regression-Based Optimization (RBO) is an alternative method of gradient estimation. From the Taylor series expansion we have f(θ + d) − f(θ) = ∇f(θ)ᵀd + O(‖d‖²). By evaluating multiple finite differences f(θ + d) − f(θ) for different d, we can recover the gradient by solving a regularized regression problem. The regularization has an additional advantage: it was shown that the gradient can be recovered even if a substantial fraction of the rewards f(θ + d) is corrupted (Choromanski et al., 2019c). Strictly speaking, this is not based on Gaussian smoothing as in ES, but is another method for estimating gradients using only zeroth-order evaluations.
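As a rough illustration of the idea (not the exact estimator of Choromanski et al. (2019c), which uses robust regularised regression), the following sketch recovers an approximate gradient by ridge regression on finite differences; the function name and parameters are our own.

import numpy as np

def rbo_grad(f, theta, n, radius, reg=1e-3):
    # Regress finite differences f(theta + d) - f(theta) onto the perturbations d;
    # here a simple ridge-regularised least-squares solve stands in for the robust
    # regression used in the cited work.
    d = theta.shape[0]
    D = radius * np.random.randn(n, d)                 # perturbation directions
    y = np.array([f(theta + di) for di in D]) - f(theta)
    A = D.T @ D + reg * np.eye(d)                      # normal equations of ridge regression
    return np.linalg.solve(A, D.T @ y)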
A.3.3 EXPERIMENTS
We present a preliminary experiment with RBO and ASEBO gradient adaptation in Figure A3. To be precise, the algorithms used are identical to Algorithm 3 except that in line 4, d(i) ← ESGRAD is replaced by d(i) ← RBO (yielding RBO-MAML) and d(i) ← ASEBO (yielding ASEBO-MAML) respectively.
Figure A3: RBO-MAML and ASEBO-MAML compared to ES-MAML.
On the left plot, we test for noise robustness on the Forward-Backward Swimmer MAML task, comparing standard ES-MAML (Algorithm 3) to RBO-MAML. To simulate noisy data, we randomly corrupt 25% of the queries fT (θ + σg) used to estimate the adaptation operator U(θ, T ) with an enormous additive noise. This is the same type of corruption used in (Choromanski et al., 2019c).
Interestingly, RBO does not appear to be more robust against noise than the standard MC estimator, which suggests that the original ES-MAML has some inherent robustness to noise.
On the right plot, we compare ASEBO-MAML to ES-MAML on the Goal-Velocity HalfCheetah task in the low-K setting. We found that when measured in iterations, ASEBO-MAML outperforms ES-MAML. However, ASEBO requires additional linear algebra operations and thus uses significantly more wall-clock time (not shown in plot) per iteration, so if measured by real time, then ES-MAML was more effective.
A.4 NAVIGATION-2D EXPLORATION TASK
Navigation-2D (Finn et al., 2017) is a classic environment where the agent must explore to adapt to the task. The agent is represented by a point on a 2D square, and at each time step, receives reward equal to its distance from a given target point on the square. Note that unlike the four corners and six circles tasks, the reward for Navigation-2D is dense. We visualize the differing exploration strategies learned by PG-MAML and ES-MAML in Figure A4. Notice that PG-MAML makes many tiny movements in multiple directions to ‘triangulate’ the target location using the differences in reward for different state-action pairs. On the other hand, ES-MAML learns a meta-policy such that each perturbation of the meta-policy causes the agent to move in a different direction (represented by red paths), so it can determine the target location from the total rewards of each path.
Figure A4: Comparing the exploration behavior of PG-MAML and ES-MAML on the Navigation2D task. We use K = 20 queries for each algorithm.
A.5 PG-MAML RL BENCHMARKS
In Figure A5, we compare ES-MAML and PG-MAML on the Forward-Backward and Goal-Velocity tasks for HalfCheetah, Swimmer, Walker2d, and Ant, using the same values of K that were used in the original experiments of (Finn et al., 2017).
Figure A5: Comparisons between ES-MAML and PG-MAML using the queries K from (Finn et al., 2017).
A.6 REGRESSION AND SUPERVISED LEARNING
MAML has also been applied to supervised learning. We demonstrate ES-MAML on sine regression (Finn et al., 2017), where the task is to fit a sine curve f with unknown amplitude and phase, given a set of K pairs (x_i, f(x_i)). The meta-policy must be able to learn that all of the tasks have a common periodic nature, so that it can correctly adapt to an unknown sine curve outside of the points x_i.
For regression, the loss is the mean-squared error (MSE) between the adapted policy π_θ(x) and the true curve f(x). Given data samples {(x_i, f(x_i))}_{i=1}^{K}, the empirical loss is L(θ) = (1/K) Σ_{i=1}^{K} (f(x_i) − π_θ(x_i))². Note that unlike in reinforcement learning, we can exactly compute ∇L(θ); for deep networks, this is done by automatic differentiation. Thus, we opt to use TensorFlow to compute the adaptation operator U(θ, T) in Algorithm 3. This is in accordance with the general principle that when gradients are available, it is more efficient to use the gradient than to approximate it by a zeroth-order method (Nesterov & Spokoiny, 2017).
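As a toy illustration of this adaptation step, the sketch below performs one exact-gradient update of the empirical MSE for a linear-in-random-features policy on a sampled sine task; the feature construction, the policy class and all names are our own simplifying assumptions, not the deep-network setup used in the experiments.

import numpy as np

def features(x, omegas):
    # Fixed random Fourier features; omegas are frequencies (an illustrative choice).
    return np.concatenate([np.sin(np.outer(x, omegas)), np.cos(np.outer(x, omegas))], axis=1)

def adapt(theta, xs, ys, omegas, alpha=0.01):
    # One exact-gradient adaptation step on L(theta) = (1/K) sum_i (f(x_i) - pi_theta(x_i))^2.
    Phi = features(xs, omegas)                       # shape (K, num_features)
    grad = (2.0 / len(xs)) * Phi.T @ (Phi @ theta - ys)   # exact gradient of the MSE
    return theta - alpha * grad

# Illustrative usage on one sampled sine task (amplitude A and phase p are task parameters).
rng = np.random.default_rng(0)
omegas = rng.uniform(0.5, 2.0, size=8)
theta = np.zeros(16)
A, p = 2.0, 0.3
xs = rng.uniform(-5.0, 5.0, size=5)                  # K = 5 query points
ys = A * np.sin(xs - p)
theta_adapted = adapt(theta, xs, ys, omegas)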
We show several results in Figure A6. The adaptation step size is α = 0.01, which is the same as in (Finn et al., 2017). For comparison, (Finn et al., 2017) reports that PG-MAML can obtain a loss of ≈ 0.5 after one adaptation step with K = 5, though it is not specified how many iterations the meta-policy was trained for. ES-MAML approaches the same level of performance, though the number of training iterations required is higher than for the RL tasks, and surprisingly high for what appears to be a simpler problem. This is likely again a reflection of the fact that for problems such as regression where the gradients are available, it is more efficient to use gradients.
As an aside, this leads to a related question of the correct interpretation of the query number K in the supervised setting. There is a distinction between obtaining a data sample (xi, f(xi)), and doing a computation (such as a gradient) using that sample. If the main bottleneck is collecting the data {(xi, f(xi)}, then we may be satisfied with any algorithm that performs any number of operations on the data, as long as it uses only K samples. On the other hand, in the (on-policy) RL setting, samples cannot typically be ‘re-used’ to the same extent, because rollouts τ sampled with a given
Figure A6: The MSE of the adapted policy, for varying number of gradient steps and query number K. Runs are averaged across 3 seeds.
policy πθ follow an unknown distribution P(τ |θ) which reduces their usefulness away from θ. Thus, the corresponding notion to rollouts in the SL setting would be the number of backpropagations (for PG-MAML) or perturbations (for ES-MAML), but clearly these have different relative costs than doing simulations in RL.
A.7 HYPERPARAMETERS AND SETUPS
A.7.1 ENVIRONMENTS
Unless otherwise explicitly stated, we default to K = 20 and horizon = 200 for all RL experiments. We also use the standard reward normalization in (Mania et al., 2018), and use a global state normalization (i.e. the same mean, standard deviation normalization values for MDP states are shared across workers).
For the Ant environments (Goal-Position Ant, Forward-Backward Ant), there are significant differences in weighting on the auxiliary rewards such as control costs, contact costs, and survival rewards across different previous work (e.g. those costs are downweighted in (Finn et al., 2017) whereas the coefficients are vanilla Gym weightings in (Liu et al., 2019)). These auxiliary rewards can lead to local minima, such as the agent staying stationary to collect the survival bonus which may be confused with movement progress when presenting a training curve. To make sure the agent is explicitly performing the required task, we opted to remove such costs in our work and only present the main goal-distance cost and forward-movement reward respectively.
For the other environments, we used default weightings and rewards, since they do not change across previous works.
A.7.2 ES-MAML HYPERPARAMETERS
Let N be the number of possible distinct tasks. We sample tasks without replacement, which is important when N is small (at most the task batch size), as each worker then performs adaptations on all possible tasks. For standard ES-MAML (Algorithm 3), we used the following settings.
Setting                                               Value
(Total Workers, # Perturbations, # Current Evals)     (300, 150, 150)
(Train Set Size, Task Batch Size, Test Set Size)      (50, 5, 5) or (N, N, N)
Number of rollouts per parameter                      1
Number of Perturbations per worker                    1
Outer-Loop Precision Parameter                        0.1
Adaptation Precision Parameter                        0.1
Outer-Loop Step Size                                  0.01
Adaptation Step Size (α)                              0.05
Hidden Layer Width                                    32
ES Estimation Type                                    Forward-FD
Reward Normalization                                  True
State Normalization                                   True
For ES-MAML and PG-MAML, we took 3 seeded runs, using the default TRPO hyperparameters found in (Liu et al., 2019). | 1. What is the focus of the paper regarding optimizing the Model Agnostic Meta Learning objective?
2. What are the strengths of the proposed method, particularly in addressing issues related to policy gradient algorithms?
3. Do you have any concerns about the theoretical and algorithmic contributions of the paper?
4. What are the differences between ES-MAML and PG-MAML in terms of exploratory behaviors, adaptation abilities, stability, and low-K benchmarks?
5. How does the reviewer assess the rigorousness and promising nature of the experimental results?
6. Are there any potential extensions to ES-MAML that the authors could explore in future work? | Review | Review
This paper proposes a method, ES-MAML, for optimizing the Model Agnostic Meta Learning (MAML) objective by using Evolution Strategies (ES) gradients instead of policy gradients (PG) as in the previous approaches in the literature. As a result, the use of ES avoids the need of second-order derivative estimation resulted from PG in computing the gradients of the MAML objective; second-order derivatives in MAML are known to be tricky for proper estimation. They also explore ES-MAML with different advanced adaptation operators to improve the ES gradient estimator. They perform empirical study to demonstrate the benefits of ES-MAML as compared with PG-MAML. In particular, they evaluate the comparable algorithms (ES-MAML and variants vs PG-MAML) in terms of exploratory behaviors in sparse-reward environments, adaptation ability, the stability of deterministic policies in unstable environments, and low-K benchmarks. The experimental results are rigorous and promising. They also discuss several potential extensions to ES-MAML in the appendix.
Regarding the theoretical and algorithmic contributions, this paper combines existing techniques from ES and gradient estimators to make ES gradients work for MAML. Thus, I feel that the paper does not provide significantly new results on these dimensions. For example, substituting evolution strategies for policy gradients in Eq. (6) is straightforward, and there is no theoretical justification for the choice of algorithmic designs made in the paper. However, given that the paper attempts to address an important problem (stably optimizing the MAML objective) from an interesting perspective (using ES), that the proposed methods are well developed and extended, and that rigorous experiments to evaluate the proposed methods are provided, this paper could be an interesting contribution to the conference, where it can encourage perspectives beyond the policy gradient view of MAML problems.
Questions and comments.
1. On page 3, with reference to the text “These issues: the difficulty of estimating the Hessian term (3), the typically high variance of ∇θJ(θ) for policy gradient algorithms in general, and the unsuitability of stochastic policies in some domains, lead us to the proposed method ES-MAML in Section 3.” I agree that the use of ES gradients avoids the need for second-order derivative estimation; however, I am not very sure we can say that ES-MAML addresses the high-variance issue of PG, given that ES can also suffer from high variance and that there is a rich literature on reducing the variance of PG.
2. Could you clarify which version of PG-MAML was used as the baseline in your experiments? Is this the “vanilla” version from Eq. (2) without any variance reduction techniques (e.g., Rothfuss et al. (2019), Liu et al. (2019)) or did you include one of the variance reduction techniques to the baseline PG-MAML?
3. In section 4.2, with reference to the text “one of the main benefits of ES is due to its ability to train compact linear policies, which can outperform hidden-layer policies”, could you clarify what this text means? Did you mean that compact linear policies are better than hidden-layer policies for MAML, or that ES is not good at training hidden-layer policies, so it trains linear policies better than hidden-layer policies?
Minor comments.
1. Page 2, R has not been introduced.
2. Page 3, Section 3.1: does F mean f? |
ICLR | Title
Convergence rate of sign stochastic gradient descent for non-convex functions
Abstract
The sign stochastic gradient descent method (signSGD) utilises only the sign of the stochastic gradient in its updates. For deep networks, this one-bit quantisation has surprisingly little impact on convergence speed or generalisation performance compared to SGD. Since signSGD is effectively compressing the gradients, it is very relevant for distributed optimisation where gradients need to be aggregated from different processors. What’s more, signSGD has close connections to common deep learning algorithms like RMSprop and Adam. We study the base theoretical properties of this simple yet powerful algorithm. For the first time, we establish convergence rates for signSGD on general non-convex functions under transparent conditions. We show that the rate of signSGD to reach first-order critical points matches that of SGD in terms of number of stochastic gradient calls, but loses out by roughly a linear factor in the dimension for general non-convex functions. We carry out simple experiments to explore the behaviour of sign gradient descent (without the stochasticity) close to saddle points and show that it can help to completely avoid certain kinds of saddle points without using either stochasticity or curvature information.
1 INTRODUCTION
Deep neural network training takes place in an error landscape that is high-dimensional, non-convex and stochastic. In practice, simple optimization techniques perform surprisingly well but have very limited theoretical understanding. While stochastic gradient descent (SGD) is widely used, algorithms like Adam (Kingma & Ba, 2015), RMSprop (Tieleman & Hinton, 2012) and Rprop (Riedmiller & Braun, 1993) are also popular. These latter algorithms involve component-wise rescaling of gradients, and so bear closer relation to signSGD than SGD. Currently, convergence rates have only been derived for close variants of SGD for general non-convex functions, and indeed the Adam paper gives convex theory.
Recently, another class of optimization algorithms has emerged which also pays attention to the resource requirements for training, in addition to obtaining good performance. Primarily, they focus on reducing costs for communicating gradients across different machines in a distributed training environment (Seide et al., 2014; Strom, 2015; Li et al., 2016; Alistarh et al., 2017; Wen et al., 2017). Often, the techniques involve quantizing the stochastic gradients at radically low numerical precision. Empirically, it was demonstrated that one can get away with using only one-bit per dimension without losing much accuracy (Seide et al., 2014; Strom, 2015). The theoretical properties of these approaches are however not well-understood. In particular, it was not known until now how quickly signSGD (the simplest incarnation of one-bit SGD) converges or even whether it converges at all to the neighborhood of a meaningful solution.
Our contribution: we supply the non-convex rate of convergence to first order critical points for signSGD. The algorithm updates parameter vector xk according to
x_{k+1} = x_k − δ_k sign(ḡ_k),   (1)

where ḡ_k is the mini-batch stochastic gradient and δ_k is the learning rate. We show that for non-convex problems, signSGD entertains convergence rates as good as SGD, up to a linear factor in the dimension. Our statements impose a particular learning rate and mini-batch schedule.
Ours is, as far as we know, the first work to provide non-convex convergence rates for a biased quantisation procedure, and it therefore does not require the randomisation that other gradient quantisation algorithms need to ensure unbiasedness. The technical challenge we overcome is in showing how to carry the stochasticity in the gradient through the sign non-linearity of the algorithm in a controlled fashion.
Whilst our analysis is for first order critical points, we experimentally test the performance of sign gradient descent without stochasticity (signGD) around saddle points. We removed stochasticity in order to investigate whether signGD has an inherent ability to escape saddle points, which would suggest superiority over gradient descent (GD) which can take exponential time to escape saddle points if it gets too close to them (Du et al., 2017).
In our work we make three assumptions. Informally, we assume that the objective function is lower-bounded, smooth, and that each component of the stochastic gradient has bounded variance. These assumptions are very general and hold for a much wider class of functions than just the ones encountered in deep learning.
Outline of paper: in Sections 3, 4 and 5 we give non-convex theory of signSGD. In Section 6 we experimentally test the ability of the signGD (without the S) to escape saddle points. And in Section 7 we pit signSGD against SGD and Adam on CIFAR-10.
2 RELATED WORK
Deep learning: the prototypical optimisation algorithm for neural networks is stochastic gradient descent (SGD)—see Algorithm 2. The deep learning community has discovered many practical tweaks to ease the training of large neural network models. In Rprop (Riedmiller & Braun, 1993) each weight update ignores the magnitude of the gradient and pays attention only to the sign, bringing it close to signSGD. It differs in that the learning rate for each component is modified depending on the consistency of the sign of consecutive steps. RMSprop (Tieleman & Hinton, 2012) is Rprop adapted for the minibatch setting—instead of dividing each component of the gradient by its magnitude, the authors estimate the rescaling factor as an average over recent iterates. Adam (Kingma & Ba, 2015) is RMSprop with momentum, meaning both gradient and gradient rescaling factors are estimated as bias-corrected averages over iterates. Indeed switching off the averaging in Adam yields signSGD. These algorithms have been applied to a breadth of interesting practical problems, e.g. (Xu et al., 2015; Gregor et al., 2015).
In an effort to characterise the typical deep learning error landscape, Dauphin et al. (2014) frame the primary obstacle to neural network training as the proliferation of saddle points in high dimensional objectives. Practitioners challenge this view, suggesting that saddle points may be seldom encountered at least in retrospectively successful applications of deep learning (Goodfellow et al., 2015).
Optimisation theory: in convex optimisation there is a natural notion of success—rate of convergence to the global minimum x*. Convex optimisation is eased by the fact that local information in the gradient provides global information about the direction towards the minimum, i.e. ∇f(x) tells you information about x* − x. In non-convex problems finding the global minimum is in general intractable, so theorists usually settle for measuring some restricted notion of success, such as rate of convergence to stationary points (e.g. Allen-Zhu (2017a)) or local minima (e.g. Nesterov & Polyak (2006)). Given the importance placed by Dauphin et al. (2014) upon evading saddle points, recent work considers the efficient use of noise (Jin et al., 2017; Levy, 2016) and curvature information (Allen-Zhu, 2017b) to escape saddle points and find local minima.
Distributed machine learning: whilst Rprop and Adam were proposed by asking how we can use gradient information to make better optimisation steps, another school asks how much information can we throw away from the gradient and still converge at all. Seide et al. (2014); Strom (2015) demonstrated empirically that one-bit quantisation can still give good performance whilst dramatically reducing gradient communication costs in distributed systems. Convergence properties of quantized stochastic gradient methods remain largely unknown. Alistarh et al. (2017) provide convergence rates for quantisation schemes that are unbiased estimators of the true gradient, and are
thus able to rely upon vanilla SGD convergence results. Wen et al. (2017) prove asymptotic convergence of a {−1, 0, 1} ternary quantization scheme that also retains the unbiasedness of the stochastic gradient. Our proposed approach is different, in that we directly employ the sign gradient which is biased. This avoids the randomization needed for constructing an unbiased quantized estimate. To the best of our knowledge, the current work is the first to establish a convergence rate for a biased quantisation scheme, and our proof differs from that of vanilla SGD.
Parallel work: signSGD is related to both attempts to improve gradient descent like Rprop and Adam, and attempts to damage it but not too badly like quantised SGD. After submitting we became aware that Anonymous (2018) also made this link in a work submitted to the same conference. Our work gives non-convex theory of signSGD, whereas their work analyses Adam in greater depth, but only in the convex world.
3 ASSUMPTIONS
Assumption 1 (The objective function is bounded below). For all x and some constant f_*, the objective function satisfies

f(x) ≥ f_*.   (2)

Remark: this assumption applies to every practical objective function that we are aware of.
Assumption 2 (The objective function is L-Lipschitz smooth). Let g(x) denote the gradient of the objective f(·) evaluated at point x. Then for every y we assume that

| f(y) − [ f(x) + g(x)ᵀ(y − x) ] | ≤ (L/2) ‖y − x‖₂².   (3)

Remark: this assumption allows us to measure the error in trusting the local linearisation of our objective, which will be useful for bounding the error in a single step of the algorithm. For signSGD we can actually relax this assumption to hold only for y within a local neighbourhood of x, since signSGD takes steps of bounded size.
Assumption 3 (Stochastic gradient oracle). Upon receiving query x, the stochastic gradient oracle gives us an independent estimate ĝ satisfying

E[ĝ(x)] = g(x),   Var(ĝ(x)[i]) ≤ σ²   for all i = 1, ..., d.

Remark: this assumption is standard for stochastic optimization, except that the variance upper bound is now stated for every dimension separately. A realization of the above oracle is to choose a data point uniformly at random, and to evaluate its gradient at point x. In the algorithm, we will be working with a mini-batch of size n_k in the kth iteration, and the corresponding mini-batch stochastic gradient is modeled as the average of n_k calls of the above stochastic gradient oracle at x_k. Therefore in this case the variance bound is squashed to σ²/n_k.
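A minimal sketch of such a mini-batch oracle for a finite-sum objective is given below; the helper names and the assumption that a per-example gradient function is available are ours.

import numpy as np

def minibatch_gradient(per_example_grad, data, x, n_k, rng):
    # Mini-batch stochastic gradient oracle: average n_k independent single-example
    # gradients at x (rng is a numpy Generator). For a finite-sum objective
    # f(x) = mean_i f_i(x) this is unbiased, and the per-component variance of the
    # average is the single-call variance divided by n_k.
    idx = rng.integers(0, len(data), size=n_k)        # sample data points uniformly
    grads = np.stack([per_example_grad(data[i], x) for i in idx])
    return grads.mean(axis=0)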
4 NON-CONVEX CONVERGENCE RATE OF SIGNSGD
Informally, our primary result says that if we run signSGD with the prescribed learning rate and mini-batch schedules, then after N stochastic gradient evaluations, we should expect that somewhere along the optimisation trajectory will be a place with gradient 1-norm smaller than O(N^{−1/4}). This matches the non-convex SGD rate, insofar as they can be compared, and ignoring all (dimension-dependent!) constants.
Before we dive into the theorems, here's a refresher on our notation—deep breath—g_k is the gradient at step k, f_* is the lower bound on the objective function, f_0 is the initial value of the objective function, d is the dimension of the space, K is the total number of iterations, N_K is the cumulative number of stochastic gradient calls at step K, σ is the intrinsic variance-proxy for each component of the stochastic gradient, and finally L is the maximum curvature (see Assumption 2).
Algorithm 1 Sign stochastic gradient descent (signSGD)
1: Inputs: x_0, K   (initial point and time budget)
2: for k ∈ [0, K − 1] do
3:   δ_k ← learningRate(k)
4:   n_k ← miniBatchSize(k)
5:   ḡ_k ← (1/n_k) Σ_{i=1}^{n_k} stochasticGradient(x_k)
6:   x_{k+1} ← x_k − δ_k sign(ḡ_k)   (the sign operation is element-wise)

Algorithm 2 Stochastic gradient descent
1: Inputs: x_0, K   (initial point and time budget)
2: for k ∈ [0, K − 1] do
3:   δ_k ← learningRate(k)
4:   n_k ← miniBatchSize(k)
5:   ḡ_k ← (1/n_k) Σ_{i=1}^{n_k} stochasticGradient(x_k)
6:   x_{k+1} ← x_k − δ_k ḡ_k
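A minimal NumPy sketch of Algorithm 1 with the schedules used in Theorem 1 is given below (Algorithm 2 differs only in using n_k = 1 and taking the step −δ_k ḡ_k); the oracle argument is assumed to return a single stochastic gradient sample, and the function name is ours.

import numpy as np

def sign_sgd(x0, K, delta, oracle):
    # signSGD (Algorithm 1) with the schedules of Theorem 1:
    # learning rate delta_k = delta / sqrt(k+1) and mini-batch size n_k = k+1.
    x = x0.copy()
    for k in range(K):
        delta_k = delta / np.sqrt(k + 1)
        n_k = k + 1
        g_bar = np.mean([oracle(x) for _ in range(n_k)], axis=0)   # mini-batch gradient
        x = x - delta_k * np.sign(g_bar)                           # element-wise sign step
    return x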
Theorem 1 (Non-convex convergence rate of signSGD). Apply Algorithm 1 under Assumptions 1, 2 and 3. Schedule the learning rate and mini-batch size as

δ_k = δ/√(k + 1),   n_k = k + 1.   (4)

Let N_K be the cumulative number of stochastic gradient calls up to step K, i.e. N_K = O(K²). Then we have

E[ min_{0≤k≤K−1} ‖g_k‖₁ ]² ≤ (1/√N_K) [ (f_0 − f_*)/δ + d (2 + log(2N_K − 1)) (σ + δL) ]².   (5)
Theorem 2 (Non-convex convergence rate of stochastic gradient descent). Apply Algorithm 2 under Assumptions 1, 2 and 3. Schedule the learning rate and mini-batch size as

δ_k = δ/√(k + 1),   n_k = 1.   (6)

Let N_K be the cumulative number of stochastic gradient calls up to step K, i.e. N_K = K. Then we have that

E[ min_{0≤k≤K−1} ‖g_k‖₂² ] ≤ (1/√N_K) [ (f_0 − f_*) / (δ(1 − δL/2)) + d(1 + log N_K) (δL/2) σ² / (1 − δL/2) ].   (7)
The proofs are deferred to Appendix B and here we sketch the intuition for Theorem 1. First consider the non-stochastic case: we know that if we take lots of steps for which the gradient is large, we will make lots of progress downhill. But since the objective function has a lower bound, it is impossible to keep taking large gradient steps downhill indefinitely, therefore increasing the number of steps requires that we must run into somewhere with small gradient.
To get a handle on this analytically, we must bound the per-step improvement in terms of the norm of the gradient. Assumption 2 allows us to do exactly this. Then we know that the sum of the per-step improvements over all steps must be smaller than the total possible improvement, and that gives us a bound on how large the minimum gradient that we see can be.
In the non-stochastic case, the obstacle to this process is curvature. Curvature means that if we take too large a step the gradient becomes unreliable, and we might move uphill instead of downhill. Since the step size in signSGD is set purely by the learning rate, this means we must anneal the learning rate if we wish to be sure to control the curvature-induced error and make good progress downhill. Stochasticity also poses a problem in signSGD. In regions where the gradient signal is
smaller than the noise, the noise is enough to flip the sign of the gradient. This is more severe than the additive noise in SGD, and so the batch size must be grown to control this effect.
You might expect that growing the batch size should lead to a worse convergence rate than SGD. This is forgetting that signSGD has an advantage in that it takes large steps even when the gradient is small. It turns out that this positive effect cancels out the fact that the batch size needs to grow, and the convergence rate ends up being the same as SGD.
For completeness, we also present the convergence rate for SGD derived under our assumptions. The proof is given in Appendix C. Note that this appears to be a classic result, although we are not sure of the earliest reference. Authors often hide the dimension dependence of the variance bound. SGD does not require an increasing batch size since the effect of the noise is second order in the learning rate, and therefore gets squashed as the learning rate decays. The rate ends up being the same in NK as signSGD because SGD makes slower progress when the gradient is small.
5 COMPARING THE CONVERGENCE RATE TO SGD
To make a clean comparison, let us set δ = 1/L (as is often recommended) and hide all numerical constants in Theorems 1 and 2. Then for signSGD, we get

E[ min ‖g_k‖₁ ]² ∼ (1/√N) [ L(f_0 − f_*) + d(σ + 1) log N ]²;   (8)

and for SGD we get

E[ min ‖g_k‖₂² ] ∼ (1/√N) [ L(f_0 − f_*) + d σ² log N ],   (9)

where ∼ denotes general scaling. What do these bounds mean? They say that after we have made a cumulative number of stochastic gradient evaluations N, we should expect somewhere along our trajectory to have hit a point with gradient norm smaller than N^{−1/4}.
One important remark should be made. SignSGD more naturally deals with the one norm of the gradient vector, hence we had to square the bound to enable direct comparison with SGD. This means that the constant factor in signSGD is roughly worse by a square. Paying attention only to dimension, this looks like
signSGD:  E[ min ‖g_k‖₁ ]² ∼ d²/√N,        SGD:  E[ min ‖g_k‖₂² ] ∼ d/√N.   (10)
This defect in dimensionality should be expected in the bound, since signSGD almost never takes the direction of steepest descent, and the direction only gets worse as dimensionality grows. This raises the question, why do algorithms like Adam, which closely resemble signSGD, work well in practice?
Whilst answering this question fully is beyond the scope of this paper, we want to point out one important detail. Whilst the signSGD bound is worse by a factor d, it is also making a statement about the 1-norm of the gradient. Since the 1-norm of the gradient is always larger than the 2- norm, the signSGD bound is stronger in this respect. Indeed, if the gradient is distributed roughly uniformly across all dimensions, then the squared 1-norm is roughly d times bigger than the squared 2-norm, i.e.
‖g_k‖₁² ∼ d ‖g_k‖₂²,

and in this limit both SGD and signSGD have a bound that scales as d/√N.
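A quick numerical check of this limit (our own illustration, not from the paper):

import numpy as np

# For a gradient spread roughly uniformly across dimensions, the squared 1-norm is
# about d times the squared 2-norm.
rng = np.random.default_rng(0)
d = 10_000
g = rng.choice([-1.0, 1.0], size=d) * (1.0 + 0.1 * rng.standard_normal(d))
print(np.sum(np.abs(g))**2 / np.sum(g**2))   # close to d = 10000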
6 SWINGING BY SADDLE POINTS? AN EXPERIMENT
Seeing as our theoretical analysis only deals with convergence to stationary points, it does not address how signSGD might behave around saddle points. We wanted to investigate the naı̈ve intuition that gradient rescaling should help flee saddle points—or in the words of Zeyuan Allen-Zhu—swing by them.
For a testbed, the authors of (Du et al., 2017) kindly provided their 10-dimensional ‘tube’ function. The tube is a specially arranged gauntlet of saddle points, each with only one escape direction, that must be navigated in sequence before reaching the global minimum of the objective. The tube was designed to demonstrate how stochasticity can help escape saddles. Gradient descent takes much longer to navigate the tube than perturbed gradient descent of (Jin et al., 2017). It is interesting to ask, even empirically, whether the sign non-linearity in signSGD can also help escape saddle points efficiently. For this reason we strip out the stochasticity and pit the sign gradient descent method (signGD) against the tube function.
There are good reasons to expect that signGD might help escape saddles—for one, it takes large steps even when the gradient is small, which could drive the method away from regions of small gradient. For another, it is able to move in directions orthogonal to the gradient, which might help discover escape directions of the saddle. We phrase this as signGD’s greater ability to explore.
Our experiments revealed that these intuitions sometimes hold up, but there are cases where they break down. In Figure 1, we compare the sign gradient method against gradient descent, perturbed gradient descent (Jin et al., 2017) and rescaled gradient descent (x_{k+1} = x_k − δ g_k/‖g_k‖₂), which is a noiseless version of the algorithm considered in (Levy, 2016). No learning rate tuning was conducted, so we suggest paying attention to the qualitative behaviour rather than the ultimate convergence speed. The left hand plot pits the algorithms against the vanilla tube function. SignGD has very different qualitative behaviour to the other algorithms—it appears to make progress completely unimpeded by the saddles. We showed that this behaviour is partly due to the axis alignment of the tube function, since after randomly rotating the objective the behaviour changes (although it is still qualitatively different to the other algorithms).
noiseless version of the algorithm considered in (Levy, 2016). No learning rate tuning was conducted, so we suggest paying attention to the qualitative behaviour rather than the ultimate convergence speed. The left hand plot pits the algorithms against the vanilla tube function. SignGD has very different qualitative behaviour to the other algorithms—it appears to make progress completely unimpeded by the saddles. We showed that this behaviour is partly due to the axis alignment of the tube function, since after randomly rotating the objective the behaviour changes (although it is still qualitatively different to the other algorithms).
One unexpected result was that for certain random rotations of the objective, signGD could get stuck at saddle points (see right panel in Figure 1). On closer inspection, we found that the algorithm was getting stuck in perfect periodic orbits around the saddle. Since the update is given by the learning rate multiplied by a binary vector, if the learning rate is constant it is perfectly possible for a sequence of updates to sum to zero. We expect that this behaviour relies on a remarkable structure in both the tube function and the algorithm. We hypothesise that for higher dimensional objectives and a non-fixed learning rate, this phenomenon might become extremely unlikely. This seems like a worthy direction of future research. Indeed we found empirically that introducing momentum into the update rule was enough to break the symmetry and avoid this periodic behaviour.
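To illustrate the updates discussed in this section, here is a small sketch of deterministic signGD, with and without the momentum just mentioned, on a toy two-dimensional saddle; this is only an illustrative stand-in for the 10-dimensional tube function, and the names and constants are our own.

import numpy as np

def grad_saddle(x):
    # Gradient of the toy saddle f(x) = 0.5 * (x[0]**2 - x[1]**2).
    return np.array([x[0], -x[1]])

def sign_gd(x0, steps, lr, beta=0.0):
    # Deterministic sign gradient descent with optional momentum (beta > 0),
    # using a fixed learning rate as in the experiments above.
    x, m = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        m = beta * m + (1 - beta) * grad_saddle(x)
        x = x - lr * np.sign(m)
    return x

print(sign_gd(np.array([1e-3, 1e-3]), 200, 0.01))        # plain signGD
print(sign_gd(np.array([1e-3, 1e-3]), 200, 0.01, 0.9))   # with momentum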
7 CIFAR-10 EXPERIMENTS
To compare SGD, signSGD and Adam on less of a toy problem, we ran a large grid search over hyperparameters for training Resnet-20 (He et al., 2016) on the CIFAR-10 dataset (Krizhevsky, 2009). Results are plotted in Figure 2. We evaluate over the hyperparameter 3-space of (initial learning rate, weight decay, momentum), and plot slices to demonstrate the general robustness of each algorithm. We find that, as expected, signSGD and Adam have broadly similar performance. For hyperparameter configurations where SGD is stable, it appears to perform better than Adam and signSGD. But Adam and signSGD appear more robust up to larger learning rates. Full experimental details are given in Appendix A.
8 DISCUSSION
First we wish to discuss the connections between signSGD and Adam (Kingma & Ba, 2015). Note that if we set the Adam hyperparameters β₁ = β₂ = ε = 0, Adam and signSGD are equivalent. Indeed the authors of the Adam paper suggest that during optimisation the Adam step will commonly look like a binary vector of ±1 (multiplied by the learning rate) and thus resemble the sign gradient step. If this algorithmic correspondence is valid, then there seems to be a discrepancy between our theoretical results and the empirical good performance of Adam. Our convergence rates suggest that signSGD should be worse than SGD by roughly a factor of dimension d. In deep neural network applications d can easily be larger than 10⁶. We suggest a resolution to this proposed discrepancy—there is structure present in deep neural network error surfaces that is not captured by our simplistic theoretical assumptions. We have already discussed in Section 5 how the signSGD bound is improved by a factor d in the case of gradients distributed uniformly across dimensions. It is also reasonable to expect that neural network error surfaces might exhibit only weak coupling across dimensions. To provide intuition for how such an assumption can help improve the dimension scaling of signSGD, note that in the idealised case of total decoupling (the Hessian is everywhere diagonal) the problem separates into d independent one-dimensional problems, so the dimension dependence is lost.
Next, let’s talk about saddle points. Though general non-convex functions are littered with local minima, recent work rather characterises successful optimisation as the evasion of a web of saddle points (Dauphin et al., 2014). Current theoretical work focuses either on using noise Levy (2016); Jin et al. (2017) or curvature information (Allen-Zhu, 2017b) to establish bounds on the amount of time needed to escape saddle points. We noted that merely passing the gradient through the sign operation introduces an algorithmic instability close to saddle points, and we wanted to empirically investigate whether this could be enough to escape them. We removed stochasticity from the algorithm to focus purely on the effect of the sign function.
We found that when the objective function was axis aligned, then sign gradient descent without stochasticity (signGD) made progress unhindered by the saddles. We suggest that this is because signGD has a greater ability to ‘explore’, meaning it typically takes larger steps in regions of small gradient than SGD, and it can take steps almost orthogonal to the true gradient direction. This exploration ability could potentially allow it to break out of subspaces convergent on saddle points without sacrificing its convergence rate—we hypothesise that this may contribute to the often more robust practical performance of algorithms like Rprop and Adam, which bear closer relation to signSGD than SGD. For non axis-aligned objectives, signGD could sometimes get stuck in perfect periodic orbits around saddle points, though we hypothesise that this behaviour may be much less likely for higher dimensional objectives (the testbed function had dimension 10) with non-constant learning rate.
Finally we want to discuss the implications of our results for gradient quantisation schemes. Whilst we do not analyse the multi-machine case of distributed optimisation, we imagine that our results will extend naturally to that setting. In particular our results stand as a proof of concept that we can provide guarantees for biased gradient quantisation schemes. Existing quantisation schemes with guarantees require delicate randomisation to ensure unbiasedness. If a scheme as simple as ours can yield provable guarantees on convergence, then there is a hope that exploring further down this avenue can yield new and useful practical quantisation algorithms.
9 CONCLUSION
We have investigated the theoretical properties of the sign stochastic gradient method (signSGD) as an algorithm for non-convex optimisation. The study was motivated by links that the method has both to deep learning stalwarts like Adam and Rprop, as well as to newer quantisation algorithms that intend to cheapen the cost of gradient communication in distributed machine learning. We have proved non-convex convergence rates for signSGD to first order critical points. Insofar as the rates
can directly be compared, they are of the same order as SGD in terms of number of gradient evaluations, but worse by a linear factor in dimension. SignSGD has the advantage over existing gradient quantisation schemes with provable guarantees, in that it doesn’t need to employ randomisation tricks to remove bias from the quantised gradient.
We wish to propose some interesting directions for future work. First our analysis only looks at convergence to first order critical points. Whilst we present preliminary experiments exhibiting success and failure modes of the algorithm around saddle points, a more detailed study attempting to pin down exactly when we can expect signSGD to escape saddle points efficiently would be welcome. This is an interesting direction seeing as existing work always relies on either stochasticity or second order curvature information to avoid saddles. Second the link that signSGD has to both Adam-like algorithms and gradient quantisation schemes is enticing. In future work we intend to investigate whether this connection can be exploited to develop large scale machine learning algorithms that get the best of both worlds in terms of optimisation speed and communication efficiency.
A EXPERIMENTAL DETAILS
Here we describe the experimental setup for the CIFAR-10 (Krizhevsky, 2009) experiments using the Resnet-20 architecture (He et al., 2016). We tuned over {weight decay, momentum, initial learning rate} for optimisers in {SGD, signSGD, Adam}. We used our own implementation of each optimisation algorithm. Adam was implemented as in (Kingma & Ba, 2015) with β₂ = 0.999 and ε = 10⁻⁸, and β₁ was tuned over. For both SGD and signSGD we used a momentum sequence
m_{k+1} = β m_k + (1 − β) g̃_k   (11)

and then used the following updates:

SGD:     x_{k+1} = x_k − δ_k m_{k+1}            (12)
signSGD: x_{k+1} = x_k − δ_k sign(m_{k+1})      (13)
Weight decay was implemented in the traditional manner of augmenting the objective function with a quadratic penalty.
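A sketch of one update of these momentum variants, with weight decay folded into the gradient as just described, is given below; the default values shown are placeholders rather than the tuned hyperparameters.

import numpy as np

def momentum_step(x, m, g, lr, beta=0.9, weight_decay=1e-4, use_sign=False):
    # One update of the momentum variants in equations (11)-(13), with weight decay
    # included as the gradient of a quadratic penalty (wd/2)*||x||^2.
    g = g + weight_decay * x
    m = beta * m + (1 - beta) * g            # equation (11)
    step = np.sign(m) if use_sign else m     # signSGD (13) vs SGD (12)
    return x - lr * step, m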
All other details not mentioned (learning rate schedules, network architecture, data augmentation, etc.) are as in (He et al., 2016). In particular for signSGD we did not use the learning rate or mini-batch schedules as provided by our theory. Code will be released if the paper is accepted.
B PROVING THE CONVERGENCE RATE OF THE SIGN GRADIENT METHOD
Theorem 1 (Non-convex convergence rate of signSGD). Apply Algorithm 1 under Assumptions 1, 2 and 3. Schedule the learning rate and mini-batch size as

δ_k = δ/√(k + 1),   n_k = k + 1.   (4)

Let N_K be the cumulative number of stochastic gradient calls up to step K, i.e. N_K = O(K²). Then we have

E[ min_{0≤k≤K−1} ‖g_k‖₁ ]² ≤ (1/√N_K) [ (f_0 − f_*)/δ + d (2 + log(2N_K − 1)) (σ + δL) ]².   (5)
Proof. Our general strategy will be to show that the expected objective improvement at each step will be good enough to guarantee a convergence rate in expectation. First let’s bound the improvement of the objective during a single step of the algorithm for one instantiation of the noise. Note that I[.] is the indicator function, and gk,i denotes the ith component of the vector gk. First use Assumption 2, plug in the step from Algorithm 1, and decompose the improvement to expose the stochasticity-induced error:
f_{k+1} − f_k ≤ g_kᵀ(x_{k+1} − x_k) + (L/2) ‖x_{k+1} − x_k‖₂²
             = −δ_k g_kᵀ sign(ḡ_k) + δ_k² (L/2) d
             = −δ_k ‖g_k‖₁ + 2δ_k Σ_{i=1}^{d} |g_{k,i}| I[sign(ḡ_{k,i}) ≠ sign(g_{k,i})] + δ_k² (L/2) d.

Next we find the expected improvement at time k + 1 conditioned on the previous iterates:

E[f_{k+1} − f_k | x_k] ≤ −δ_k ‖g_k‖₁ + 2δ_k Σ_{i=1}^{d} |g_{k,i}| P[sign(ḡ_{k,i}) ≠ sign(g_{k,i})] + δ_k² (L/2) d.

Note that the expected improvement crucially depends on the probability that each component of the sign vector is correct. Intuition suggests that when the magnitude of the gradient |g_{k,i}| is much larger than the typical scale σ of the noise, then the sign of the stochastic gradient will most likely be correct. Mistakes will typically only be made when |g_{k,i}| is smaller than σ. We can make this intuition rigorous using Markov's inequality and our variance bound on the noise (Assumption 3):

P[sign(ḡ_{k,i}) ≠ sign(g_{k,i})] ≤ P[ |ḡ_{k,i} − g_{k,i}| ≥ |g_{k,i}| ]          (relaxation)
                                 ≤ E[ |ḡ_{k,i} − g_{k,i}| ] / |g_{k,i}|          (Markov's inequality)
                                 ≤ √( E[(ḡ_{k,i} − g_{k,i})²] ) / |g_{k,i}|      (Jensen's inequality)
                                 ≤ σ_k / |g_{k,i}|.                               (Assumption 3)

This says explicitly that the probability of the sign being incorrect is controlled by the relative scale of the noise to each component of the gradient magnitude. We denote the noise scale as σ_k since it refers to the stochastic gradient with a mini-batch size of n_k = k + 1, so that σ_k = σ/√(k + 1). We can plug this result into the previous expression, take the sum over i, and substitute in our learning rate and mini-batch schedules as follows:

E[f_{k+1} − f_k | x_k] ≤ −δ_k ‖g_k‖₁ + 2δ_k d σ_k + δ_k² (L/2) d
                       = −(δ/√(k + 1)) ‖g_k‖₁ + 2dδσ/(k + 1) + (δ²/(k + 1)) (L/2) d
                       ≤ −(δ/√K) ‖g_k‖₁ + 2δd (σ + δL)/(k + 1).

In the last line we made some relaxations which will not affect the general scaling of the rate. Now take the expectation over the noise in all previous iterates, and sum over k:

f_0 − f_* ≥ f_0 − E[f_K]                                                               (Assumption 1)
          = E[ Σ_{k=0}^{K−1} (f_k − f_{k+1}) ]                                         (telescope)
          ≥ E[ Σ_{k=0}^{K−1} ( (δ/√K) ‖g_k‖₁ − 2δd(σ + δL)/(k + 1) ) ]                 (previous result)
          ≥ E[ Σ_{k=0}^{K−1} (δ/√K) ‖g_k‖₁ ] − 2δd(1 + log K)(σ + δL).                 (harmonic sum)

We can rearrange this inequality to yield a rate:

E[ min_{0≤k≤K−1} ‖g_k‖₁ ] ≤ E[ Σ_{k=0}^{K−1} (1/K) ‖g_k‖₁ ] ≤ (1/√K) [ (f_0 − f_*)/δ + 2d(1 + log K)(σ + δL) ].

Since we are growing our mini-batch size, it will take N_K = K(K + 1)/2 stochastic gradient evaluations to complete steps 0 through K − 1. Using that N_K ≤ K² ≤ 2N_K − 1 yields the result. For the sake of presentation, we take the final step of squaring the bound, to make it more comparable with the SGD bound.
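As a sanity check (ours, not part of the paper), the key Markov-inequality step above can be verified numerically for a Gaussian stochastic gradient:

import numpy as np

# Check that P[sign(g_bar) != sign(g)] <= sigma_k / |g| for a single gradient component.
rng = np.random.default_rng(0)
g, sigma, n_k, trials = 0.3, 1.0, 10, 200_000
sigma_k = sigma / np.sqrt(n_k)
g_bar = g + sigma_k * rng.standard_normal(trials)     # mini-batch gradient samples
flip_prob = np.mean(np.sign(g_bar) != np.sign(g))
print(flip_prob, "<=", sigma_k / abs(g))              # roughly 0.17 <= 1.05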
C PROVING THE CONVERGENCE RATE OF STOCHASTIC GRADIENT DESCENT
Theorem 2 (Non-convex convergence rate of stochastic gradient descent). Apply Algorithm 2 under Assumptions 1, 2 and 3. Schedule the learning rate and mini-batch size as

δ_k = δ/√(k + 1),   n_k = 1.   (6)

Let N_K be the cumulative number of stochastic gradient calls up to step K, i.e. N_K = K. Then we have that

E[ min_{0≤k≤K−1} ‖g_k‖₂² ] ≤ (1/√N_K) [ (f_0 − f_*) / (δ(1 − δL/2)) + d(1 + log N_K) (δL/2) σ² / (1 − δL/2) ].   (7)
Proof. Consider the objective improvement in a single step, under one instantiation of the noise. Use Assumption 2 followed by the definition of the algorithm.
f_{k+1} − f_k ≤ g_kᵀ(x_{k+1} − x_k) + (L/2) ‖x_{k+1} − x_k‖₂²
             = −δ_k g_kᵀ ḡ_k + δ_k² (L/2) ‖ḡ_k‖₂².

Take the expectation conditioned on previous iterates, and decompose the mean squared stochastic gradient into its mean and variance. Note that since σ² is the variance bound for each component, the variance bound for the full vector is dσ²:

E[f_{k+1} − f_k | x_k] ≤ −δ_k ‖g_k‖₂² + δ_k² (L/2) ( ‖g_k‖₂² + dσ² ).

Plugging in the learning rate schedule, and using that 1/(k + 1) ≤ 1/√(k + 1), we get that

E[f_{k+1} − f_k | x_k] ≤ −(δ/√(k + 1)) ‖g_k‖₂² + (δ²/(k + 1)) (L/2) ‖g_k‖₂² + (δ²/(k + 1)) (L/2) σ² d
                       ≤ −(δ/√(k + 1)) (1 − δL/2) ‖g_k‖₂² + (δ²/(k + 1)) (L/2) σ² d.

Take the expectation over x_k, sum over k, and we get that

f_0 − f_* ≥ f_0 − E[f_K]
          = E[ Σ_{k=0}^{K−1} (f_k − f_{k+1}) ]
          ≥ Σ_{k=0}^{K−1} (δ/√(k + 1)) E[ ‖g_k‖₂² ] (1 − δL/2) − Σ_{k=0}^{K−1} (δ²/(k + 1)) (L/2) σ² d
          ≥ (K/√K) δ (1 − δL/2) E[ min_{0≤k≤K−1} ‖g_k‖₂² ] − (1 + log K) δ² (L/2) σ² d
          = √K δ (1 − δL/2) E[ min_{0≤k≤K−1} ‖g_k‖₂² ] − (1 + log K) δ² (L/2) σ² d.

And rearranging yields the result. | 1. What are the strengths and weaknesses of the paper's theoretical analysis, particularly regarding Assumption 3?
2. How does the reviewer suggest improving the theory, and what additional assumptions or modifications might help address the issues?
3. Are there any concerns or suggestions regarding the experimental results, such as the need for more numerical experiments or a better demonstration of the advantage of using SignSGD? | Review | Review
UPDATED REVIEW:
I have checked all the reviews, also checked the most recent version.
I like the new experiments, but I am not impressed enough by them to increase my score. The new assumption about the variance fixes my concern, but as you have pointed out, it is a bit more tricky :) I would really suggest you work on the paper a bit more and re-submit it.
--------------------------------------------------------------------
In this paper, the authors provide a convergence analysis of the signSGD algorithm for the non-convex case.
The crucial assumption for the proof is Assumption 3; otherwise, the proof technique follows a standard path in non-convex optimization.
In general, the paper is written nicely and is easy to follow.
==============================================
"The major issue":
Why Assumption 3 can be problematic in practice is given below:
Let us assume just a convex case and assume we have just 2 kinds of functions in 2D: f_1(x) = 0.5 x_1^2 and f_2(x) = 0.5 x_2^2.
Then define the function f(x) = E [ f_i(x) ]. where $i =1$ with prob 0.5 and $i=2$ with probability 0.5.
We have that g(x) = 0.5 [ x_1, x_2 ]^T.
Let us choose $i=1$ and choose $x = [a,a]^T$, where $a$ is some parameter.
Then (4) says, that there has to exist a $\sigma$ such that
P [ | \bar g_i(x) - g_i(x) | > t ] \leq 2 exp( - t^2 / 2\sigma^2). forall "x".
plugging our function inside it should be true that
P [ | [ B ] - 0.5 a | > t ] \leq 2 exp( - t^2 / 2\sigma^2). forall "x".
where B is a random variable which has value "a" with probability 0.5 and value "0" with probability 0.5.
If we choose $t = 0.1a$ then we have that it has to be true that
1 = P [ | [ B ] - 0.5 a | > 0.1a ] \leq 2 exp( - 0.01 a^2 / 2\sigma^2) ----> 0 as $a \to \infty$.
Hence, even in this simple example, one can show that this assumption is violated unless $\sigma = \infty$.
One way to improve this is to put in more assumptions + maybe add a projection onto a compact set?
==============================================
Hence, I think the theory should be improved.
In terms of experiments, I like the discussion about escaping saddle points, it is indeed a good discussion. However, it would be nicer to have more numerical experiments.
One thing I am also struggling with is the "advantage" of using signSGD: one saves on communication (instead of sending 4*8 bits per dimension, one just sends 1 bit); however, one needs "d" times more iterations, hence the theory shows that it is much worse than SGD (see (11)). |
ICLR | Title
Convergence rate of sign stochastic gradient descent for non-convex functions
Abstract
The sign stochastic gradient descent method (signSGD) utilises only the sign of the stochastic gradient in its updates. For deep networks, this one-bit quantisation has surprisingly little impact on convergence speed or generalisation performance compared to SGD. Since signSGD is effectively compressing the gradients, it is very relevant for distributed optimisation where gradients need to be aggregated from different processors. What’s more, signSGD has close connections to common deep learning algorithms like RMSprop and Adam. We study the base theoretical properties of this simple yet powerful algorithm. For the first time, we establish convergence rates for signSGD on general non-convex functions under transparent conditions. We show that the rate of signSGD to reach first-order critical points matches that of SGD in terms of number of stochastic gradient calls, but loses out by roughly a linear factor in the dimension for general non-convex functions. We carry out simple experiments to explore the behaviour of sign gradient descent (without the stochasticity) close to saddle points and show that it can help to completely avoid certain kinds of saddle points without using either stochasticity or curvature information.
1 INTRODUCTION
Deep neural network training takes place in an error landscape that is high-dimensional, non-convex and stochastic. In practice, simple optimization techniques perform surprisingly well but have very limited theoretical understanding. While stochastic gradient descent (SGD) is widely used, algorithms like Adam (Kingma & Ba, 2015), RMSprop (Tieleman & Hinton, 2012) and Rprop (Riedmiller & Braun, 1993) are also popular. These latter algorithms involve component-wise rescaling of gradients, and so bear closer relation to signSGD than SGD. Currently, convergence rates have only been derived for close variants of SGD for general non-convex functions, and indeed the Adam paper gives convex theory.
Recently, another class of optimization algorithms has emerged which also pays attention to the resource requirements for training, in addition to obtaining good performance. Primarily, they focus on reducing costs for communicating gradients across different machines in a distributed training environment (Seide et al., 2014; Strom, 2015; Li et al., 2016; Alistarh et al., 2017; Wen et al., 2017). Often, the techniques involve quantizing the stochastic gradients at radically low numerical precision. Empirically, it was demonstrated that one can get away with using only one-bit per dimension without losing much accuracy (Seide et al., 2014; Strom, 2015). The theoretical properties of these approaches are however not well-understood. In particular, it was not known until now how quickly signSGD (the simplest incarnation of one-bit SGD) converges or even whether it converges at all to the neighborhood of a meaningful solution.
Our contribution: we supply the non-convex rate of convergence to first order critical points for signSGD. The algorithm updates parameter vector xk according to
x_{k+1} = x_k - \delta_k \, \mathrm{sign}(\bar g_k) \qquad (1)
where ḡ_k is the mini-batch stochastic gradient and δ_k is the learning rate. We show that for non-convex problems, signSGD entertains convergence rates as good as SGD, up to a linear factor in the dimension. Our statements impose a particular learning rate and mini-batch schedule.
Ours is the first work to provide non-convex convergence rates for a biased quantisation procedure as far as we know, and therefore does not require the randomisation that other gradient quantisation algorithms need to ensure unbiasedness. The technical challenge we overcome is in showing how to carry the stochasticity in the gradient through the sign non-linearity of the algorithm in a controlled fashion.
Whilst our analysis is for first order critical points, we experimentally test the performance of sign gradient descent without stochasticity (signGD) around saddle points. We removed stochasticity in order to investigate whether signGD has an inherent ability to escape saddle points, which would suggest superiority over gradient descent (GD) which can take exponential time to escape saddle points if it gets too close to them (Du et al., 2017).
In our work we make three assumptions. Informally, we assume that the objective function is lowerbounded, smooth, and that each component of the stochastic gradient has bounded variance. These assumptions are very general and hold for a much wider class of functions than just the ones encountered in deep learning.
Outline of paper: in Sections 3, 4 and 5 we give non-convex theory of signSGD. In Section 6 we experimentally test the ability of the signGD (without the S) to escape saddle points. And in Section 7 we pit signSGD against SGD and Adam on CIFAR-10.
2 RELATED WORK
Deep learning: the prototypical optimisation algorithm for neural networks is stochastic gradient descent (SGD)—see Algorithm 2. The deep learning community has discovered many practical tweaks to ease the training of large neural network models. In Rprop (Riedmiller & Braun, 1993) each weight update ignores the magnitude of the gradient and pays attention only to the sign, bringing it close to signSGD. It differs in that the learning rate for each component is modified depending on the consistency of the sign of consecutive steps. RMSprop (Tieleman & Hinton, 2012) is Rprop adapted for the minibatch setting—instead of dividing each component of the gradient by its magnitude, the authors estimate the rescaling factor as an average over recent iterates. Adam (Kingma & Ba, 2015) is RMSprop with momentum, meaning both gradient and gradient rescaling factors are estimated as bias-corrected averages over iterates. Indeed switching off the averaging in Adam yields signSGD. These algorithms have been applied to a breadth of interesting practical problems, e.g. (Xu et al., 2015; Gregor et al., 2015).
In an effort to characterise the typical deep learning error landscape, Dauphin et al. (2014) frame the primary obstacle to neural network training as the proliferation of saddle points in high dimensional objectives. Practitioners challenge this view, suggesting that saddle points may be seldom encountered at least in retrospectively successful applications of deep learning (Goodfellow et al., 2015).
Optimisation theory: in convex optimisation there is a natural notion of success—rate of convergence to the global minimum x^*. Convex optimisation is eased by the fact that local information in the gradient provides global information about the direction towards the minimum, i.e. ∇f(x) tells you information about x^* − x. In non-convex problems finding the global minimum is in general intractable, so theorists usually settle for measuring some restricted notion of success, such as rate of convergence to stationary points (e.g. Allen-Zhu (2017a)) or local minima (e.g. Nesterov & Polyak (2006)). Given the importance placed by Dauphin et al. (2014) upon evading saddle points, recent work considers the efficient use of noise (Jin et al., 2017; Levy, 2016) and curvature information (Allen-Zhu, 2017b) to escape saddle points and find local minima.
Distributed machine learning: whilst Rprop and Adam were proposed by asking how we can use gradient information to make better optimisation steps, another school asks how much information can we throw away from the gradient and still converge at all. Seide et al. (2014); Strom (2015) demonstrated empirically that one-bit quantisation can still give good performance whilst dramatically reducing gradient communication costs in distributed systems. Convergence properties of quantized stochastic gradient methods remain largely unknown. Alistarh et al. (2017) provide convergence rates for quantisation schemes that are unbiased estimators of the true gradient, and are
thus able to rely upon vanilla SGD convergence results. Wen et al. (2017) prove asymptotic convergence of a { 1, 0, 1} ternary quantization scheme that also retains the unbiasedness of the stochastic gradient. Our proposed approach is different, in that we directly employ the sign gradient which is biased. This avoids the randomization needed for constructing an unbiased quantized estimate. To the best of our knowledge, the current work is the first to establish a convergence rate for a biased quantisation scheme, and our proof differs to that of vanilla SGD.
Parallel work: signSGD is related to both attempts to improve gradient descent like Rprop and Adam, and attempts to damage it but not too badly like quantised SGD. After submitting we became aware that Anonymous (2018) also made this link in a work submitted to the same conference. Our work gives non-convex theory of signSGD, whereas their work analyses Adam in greater depth, but only in the convex world.
3 ASSUMPTIONS
Assumption 1 (The objective function is bounded below). For all x and some constant f⇤, the objective function satisfies
f(x) \ge f_* \qquad (2)
Remark: this assumption applies to every practical objective function that we are aware of.
Assumption 2 (The objective function is L-Lipschitz smooth). Let g(x) denote the gradient of the objective f(.) evaluated at point x. Then for every y we assume that
f(y) - \left[ f(x) + g(x)^T (y - x) \right] \le \frac{L}{2} \, \|y - x\|_2^2 \qquad (3)
Remark: this assumption allows us to measure the error in trusting the local linearisation of our objective, which will be useful for bounding the error in a single step of the algorithm. For signSGD we can actually relax this assumption to hold only for y within a local neighbourhood of x, since signSGD takes steps of bounded size.
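As a concrete illustration of Assumption 2 (not taken from the paper), the following sketch checks the quadratic upper bound for a least-squares objective of our own choosing, where the largest eigenvalue of the Hessian is a valid L:

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
H = A.T @ A                      # Hessian of f(x) = 0.5 * ||Ax||^2
L = np.linalg.eigvalsh(H).max()  # a valid smoothness constant for this objective

f = lambda x: 0.5 * np.linalg.norm(A @ x) ** 2
g = lambda x: H @ x

x = rng.standard_normal(3)
for _ in range(5):
    y = x + rng.standard_normal(3)
    lin_error = f(y) - (f(x) + g(x) @ (y - x))          # error of the local linearisation
    assert lin_error <= 0.5 * L * np.linalg.norm(y - x) ** 2 + 1e-9
print("Assumption 2 holds for this quadratic with L =", round(L, 3))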
Assumption 3 (Stochastic gradient oracle). Upon receiving query x, the stochastic gradient oracle gives us an independent estimate ĝ satisfying
E[\hat g(x)] = g(x), \qquad \mathrm{Var}(\hat g(x)[i]) \le \sigma^2 \quad \forall\, i = 1, \dots, d.
Remark: this assumption is standard for stochastic optimization, except that the variance upper bound is now stated for every dimension separately. A realization of the above oracle is to choose a data point uniformly at random, and to evaluate its gradient at point x. In the algorithm, we will be working with a minibatch of size n_k in the kth iteration, and the corresponding minibatch stochastic gradient is modeled as the average of n_k calls of the above stochastic gradient oracle at x_k. Therefore in this case the variance bound is squashed to σ²/n_k.
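For intuition, here is a small sketch (with Gaussian oracle noise as an illustrative assumption; the remark only needs the variance bound) confirming that averaging n_k oracle calls scales the per-coordinate variance down to roughly σ²/n_k:

import numpy as np

rng = np.random.default_rng(2)
d, sigma = 4, 2.0
true_grad = np.ones(d)

def oracle(n_calls):
    # independent oracle outputs g + noise; each coordinate has variance sigma^2
    return true_grad + sigma * rng.standard_normal((n_calls, d))

for n_k in [1, 4, 16, 64]:
    batches = oracle(n_k * 50_000).reshape(50_000, n_k, d).mean(axis=1)   # mini-batch averages
    print(n_k, round(batches.var(axis=0).mean(), 4), "vs", sigma ** 2 / n_k)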
4 NON-CONVEX CONVERGENCE RATE OF SIGNSGD
Informally, our primary result says that if we run signSGD with the prescribed learning rate and mini-batch schedules, then after N stochastic gradient evaluations, we should expect that somewhere along the optimisation trajectory will be a place with gradient 1-norm smaller than O(N^{-1/4}). This matches the non-convex SGD rate, insofar as they can be compared, and ignoring all (dimension-dependent!) constants.
Before we dive into the theorems, here’s a refresher on our notation—deep breath—g_k is the gradient at step k, f_* is the lower bound on the objective function, f_0 is the initial value of the objective function, d is the dimension of the space, K is the total number of iterations, N_K is the cumulative number of stochastic gradient calls at step K, σ is the intrinsic variance-proxy for each component of the stochastic gradient, and finally L is the maximum curvature (see Assumption 2).
Algorithm 1 Sign stochastic gradient descent (signSGD)
1: Inputs: x_0, K (initial point and time budget)
2: for k in [0, K-1] do
3:   δ_k ← learningRate(k)
4:   n_k ← miniBatchSize(k)
5:   ḡ_k ← (1/n_k) Σ_{i=1}^{n_k} stochasticGradient(x_k)
6:   x_{k+1} ← x_k − δ_k sign(ḡ_k) (the sign operation is element-wise)

Algorithm 2 Stochastic gradient descent
1: Inputs: x_0, K (initial point and time budget)
2: for k in [0, K-1] do
3:   δ_k ← learningRate(k)
4:   n_k ← miniBatchSize(k)
5:   ḡ_k ← (1/n_k) Σ_{i=1}^{n_k} stochasticGradient(x_k)
6:   x_{k+1} ← x_k − δ_k ḡ_k
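As an illustrative sketch (not part of the paper), the following Python snippet runs Algorithm 1 with the learning rate and mini-batch schedules of Theorem 1 on a toy quadratic of our own choosing; delta, sigma and the objective are assumptions for the example only:

import numpy as np

rng = np.random.default_rng(3)
d, sigma, delta = 10, 1.0, 0.1
x = rng.standard_normal(d)

grad = lambda x: x                                   # toy objective f(x) = 0.5 * ||x||^2

def stochastic_gradient(x):
    return grad(x) + sigma * rng.standard_normal(d)  # Assumption 3-style oracle

K = 200
for k in range(K):
    delta_k = delta / np.sqrt(k + 1)                 # learning rate schedule of equation (4)
    n_k = k + 1                                      # growing mini-batch schedule of equation (4)
    g_bar = np.mean([stochastic_gradient(x) for _ in range(n_k)], axis=0)
    x = x - delta_k * np.sign(g_bar)                 # element-wise sign update (Algorithm 1, line 6)

print("final ||g||_1 =", np.abs(grad(x)).sum())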
Theorem 1 (Non-convex convergence rate of signSGD). Apply Algorithm 1 under Assumptions 1, 2 and 3. Schedule the learning rate and mini-batch size as
\delta_k = \frac{\delta}{\sqrt{k+1}}, \qquad n_k = k + 1 \qquad (4)
Let N_K be the cumulative number of stochastic gradient calls up to step K, i.e. N_K = O(K^2). Then we have
E\Big[ \min_{0 \le k \le K-1} \|g_k\|_1 \Big]^2 \;\le\; \frac{1}{\sqrt{N_K}} \left[ \frac{f_0 - f_*}{\delta} + d\,\big(2 + \log(2 N_K - 1)\big)\,(\sigma + \delta L) \right]^2 \qquad (5)
Theorem 2 (Non-convex convergence rate of stochastic gradient descent). Apply Algorithm 2 under Assumptions 1, 2 and 3. Schedule the learning rate and mini-batch size as
\delta_k = \frac{\delta}{\sqrt{k+1}}, \qquad n_k = 1 \qquad (6)
Let N_K be the cumulative number of stochastic gradient calls up to step K, i.e. N_K = K. Then we have that
E\Big[ \min_{0 \le k \le K-1} \|g_k\|_2^2 \Big] \;\le\; \frac{1}{\sqrt{N_K}} \left[ \frac{f_0 - f_*}{\delta\left(1 - \frac{\delta L}{2}\right)} + \frac{d\,(1 + \log N_K)\,\frac{\delta L}{2}\,\sigma^2}{1 - \frac{\delta L}{2}} \right] \qquad (7)
The proofs are deferred to Appendix B and here we sketch the intuition for Theorem 1. First consider the non-stochastic case: we know that if we take lots of steps for which the gradient is large, we will make lots of progress downhill. But since the objective function has a lower bound, it is impossible to keep taking large gradient steps downhill indefinitely, therefore increasing the number of steps requires that we must run into somewhere with small gradient.
To get a handle on this analytically, we must bound the per-step improvement in terms of the norm of the gradient. Assumption 2 allows us to do exactly this. Then we know that the sum of the per-step improvements over all steps must be smaller than the total possible improvement, and that gives us a bound on how large the minimum gradient that we see can be.
In the non-stochastic case, the obstacle to this process is curvature. Curvature means that if we take too large a step the gradient becomes unreliable, and we might move uphill instead of downhill. Since the step size in signSGD is set purely by the learning rate, this means we must anneal the learning rate if we wish to be sure to control the curvature-induced error and make good progress downhill. Stochasticity also poses a problem in signSGD. In regions where the gradient signal is
smaller than the noise, the noise is enough to flip the sign of the gradient. This is more severe than the additive noise in SGD, and so the batch size must be grown to control this effect.
You might expect that growing the batch size should lead to a worse convergence rate than SGD. This is forgetting that signSGD has an advantage in that it takes large steps even when the gradient is small. It turns out that this positive effect cancels out the fact that the batch size needs to grow, and the convergence rate ends up being the same as SGD.
For completeness, we also present the convergence rate for SGD derived under our assumptions. The proof is given in Appendix C. Note that this appears to be a classic result, although we are not sure of the earliest reference. Authors often hide the dimension dependence of the variance bound. SGD does not require an increasing batch size since the effect of the noise is second order in the learning rate, and therefore gets squashed as the learning rate decays. The rate ends up being the same in NK as signSGD because SGD makes slower progress when the gradient is small.
5 COMPARING THE CONVERGENCE RATE TO SGD
To make a clean comparison, let us set δ = 1/L (as is often recommended) and hide all numerical constants in Theorems 1 and 2. Then for signSGD, we get
E\Big[ \min_k \|g_k\|_1 \Big]^2 \;\sim\; \frac{1}{\sqrt{N}} \Big[ L (f_0 - f_*) + d\,(\sigma + 1) \log N \Big]^2 ; \qquad (8)
and for SGD we get
E\Big[ \min_k \|g_k\|_2^2 \Big] \;\sim\; \frac{1}{\sqrt{N}} \Big[ L (f_0 - f_*) + d\,\sigma^2 \log N \Big] \qquad (9)
where ∼ denotes general scaling. What do these bounds mean? They say that after we have made a cumulative number of stochastic gradient evaluations N, we should expect somewhere along our trajectory to have hit a point with gradient norm smaller than N^{-1/4}.
One important remark should be made. SignSGD more naturally deals with the one norm of the gradient vector, hence we had to square the bound to enable direct comparison with SGD. This means that the constant factor in signSGD is roughly worse by a square. Paying attention only to dimension, this looks like
\mathrm{signSGD:} \quad E\Big[ \min_k \|g_k\|_1 \Big]^2 \sim \frac{d^2}{\sqrt{N}} \qquad\qquad \mathrm{SGD:} \quad E\Big[ \min_k \|g_k\|_2^2 \Big] \sim \frac{d}{\sqrt{N}} \qquad (10)
This defect in dimensionality should be expected in the bound, since signSGD almost never takes the direction of steepest descent, and the direction only gets worse as dimensionality grows. This raises the question, why do algorithms like Adam, which closely resemble signSGD, work well in practice?
Whilst answering this question fully is beyond the scope of this paper, we want to point out one important detail. Whilst the signSGD bound is worse by a factor d, it is also making a statement about the 1-norm of the gradient. Since the 1-norm of the gradient is always larger than the 2- norm, the signSGD bound is stronger in this respect. Indeed, if the gradient is distributed roughly uniformly across all dimensions, then the squared 1-norm is roughly d times bigger than the squared 2-norm, i.e.
\|g_k\|_1^2 \;\sim\; d\,\|g_k\|_2^2
and in this limit both SGD and signSGD have a bound that scales as d/\sqrt{N}.
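A quick numerical illustration of this limit (using a dense Gaussian gradient as an illustrative assumption; for such gradients the ratio settles near 2/π):

import numpy as np

rng = np.random.default_rng(4)
for d in [10, 100, 1_000, 10_000]:
    g = rng.standard_normal(d)                                      # a "dense" gradient
    ratio = np.linalg.norm(g, 1) ** 2 / np.linalg.norm(g, 2) ** 2   # ||g||_1^2 / ||g||_2^2
    print(d, round(ratio / d, 3))                                   # roughly constant across d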
6 SWINGING BY SADDLE POINTS? AN EXPERIMENT
Seeing as our theoretical analysis only deals with convergence to stationary points, it does not address how signSGD might behave around saddle points. We wanted to investigate the naïve intuition that gradient rescaling should help flee saddle points—or in the words of Zeyuan Allen-Zhu—swing by them.
For a testbed, the authors of (Du et al., 2017) kindly provided their 10-dimensional ‘tube’ function. The tube is a specially arranged gauntlet of saddle points, each with only one escape direction, that must be navigated in sequence before reaching the global minimum of the objective. The tube was designed to demonstrate how stochasticity can help escape saddles. Gradient descent takes much longer to navigate the tube than perturbed gradient descent of (Jin et al., 2017). It is interesting to ask, even empirically, whether the sign non-linearity in signSGD can also help escape saddle points efficiently. For this reason we strip out the stochasticity and pit the sign gradient descent method (signGD) against the tube function.
There are good reasons to expect that signGD might help escape saddles—for one, it takes large steps even when the gradient is small, which could drive the method away from regions of small gradient. For another, it is able to move in directions orthogonal to the gradient, which might help discover escape directions of the saddle. We phrase this as signGD’s greater ability to explore.
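To make this exploration intuition concrete, here is a small sketch on a toy saddle of our own choosing (not the tube function), contrasting plain GD and signGD started very close to the saddle's stable manifold:

import numpy as np

# Toy saddle: f(x, y) = 0.5 * (x**2 - y**2), with a saddle point at the origin.
grad = lambda p: np.array([p[0], -p[1]])

delta = 0.01
p_gd = np.array([1.0, 1e-8])            # y is the escape direction, started tiny
p_sign = p_gd.copy()
for _ in range(200):
    p_gd = p_gd - delta * grad(p_gd)
    p_sign = p_sign - delta * np.sign(grad(p_sign))

print("GD     |y| =", abs(p_gd[1]))     # grows geometrically from 1e-8, still tiny after 200 steps
print("signGD |y| =", abs(p_sign[1]))   # grows by delta every step, already order 1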
Our experiments revealed that these intuitions sometimes hold out, but there are cases where they break down. In Figure 1, we compare the sign gradient method against gradient descent, perturbed gradient descent (Jin et al., 2017) and rescaled gradient descent
\big( x_{k+1} = x_k - \delta\, g_k / \|g_k\|_2 \big), which is a
noiseless version of the algorithm considered in (Levy, 2016). No learning rate tuning was conducted, so we suggest paying attention to the qualitative behaviour rather than the ultimate convergence speed. The left hand plot pits the algorithms against the vanilla tube function. SignGD has very different qualitative behaviour to the other algorithms—it appears to make progress completely unimpeded by the saddles. We showed that this behaviour is partly due to the axis alignment of the tube function, since after randomly rotating the objective the behaviour changes (although it is still qualitatively different to the other algorithms).
One unexpected result was that for certain random rotations of the objective, signGD could get stuck at saddle points (see right panel in Figure 1). On closer inspection, we found that the algorithm was getting stuck in perfect periodic orbits around the saddle. Since the update is given by the learning rate multiplied by a binary vector, if the learning rate is constant it is perfectly possible for a sequence of updates to sum to zero. We expect that this behaviour relies on a remarkable structure in both the tube function and the algorithm. We hypothesise that for higher dimensional objectives and a non-fixed learning rate, this phenomenon might become extremely unlikely. This seems like a worthy direction of future research. Indeed we found empirically that introducing momentum into the update rule was enough to break the symmetry and avoid this periodic behaviour.
7 CIFAR-10 EXPERIMENTS
To compare SGD, signSGD and Adam on less of a toy problem, we ran a large grid search over hyperparameters for training Resnet-20 (He et al., 2016) on the CIFAR-10 dataset (Krizhevsky, 2009). Results are plotted in Figure 2. We evaluate over the hyperparameter 3-space of (initial learning rate, weight decay, momentum), and plot slices to demonstrate the general robustness of each algorithm. We find that, as expected, signSGD and Adam have broadly similar performance. For hyperparameter configurations where SGD is stable, it appears to perform better than Adam and signSGD. But Adam and signSGD appear more robust up to larger learning rates. Full experimental details are given in Appendix A.
8 DISCUSSION
First we wish to discuss the connections between signSGD and Adam (Kingma & Ba, 2015). Note that setting the Adam hyperparameters β1 = β2 = ε = 0, Adam and signSGD are equivalent. Indeed the authors of the Adam paper suggest that during optimisation the Adam step will commonly look like a binary vector of ±1 (multiplied by the learning rate) and thus resemble the sign gradient step. If this algorithmic correspondence is valid, then there seems to be a discrepancy between our theoretical results and the empirical good performance of Adam. Our convergence rates suggest that signSGD should be worse than SGD by roughly a factor of dimension d. In deep neural network applications d can easily be larger than 10^6. We suggest a resolution to this proposed discrepancy—there is structure present in deep neural network error surfaces that is not captured by our simplistic theoretical assumptions. We have already discussed in Section 5 how the signSGD bound is improved by a factor d in the case of gradients distributed uniformly across dimensions. It is also reasonable to expect that neural network error surfaces might exhibit only weak coupling across dimensions. To provide intuition for how such an assumption can help improve the dimension scaling of signSGD, note that in the idealised case of total decoupling (the Hessian is everywhere diagonal) then the problem separates into d independent one dimensional problems, so the dimension dependence is lost.
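A minimal numerical check of the claimed correspondence (the gradient values below are made up, and the identity only fails on exactly-zero components):

import numpy as np

g = np.array([0.3, -2.0, 1e-4, -5.0])
# One Adam step with beta1 = beta2 = eps = 0: m = g, v = g**2,
# bias corrections are trivial, and the step direction is m / sqrt(v) = sign(g).
m, v = g, g ** 2
adam_direction = m / np.sqrt(v)
print(adam_direction, np.sign(g))       # identical for nonzero gradient components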
Next, let’s talk about saddle points. Though general non-convex functions are littered with local minima, recent work rather characterises successful optimisation as the evasion of a web of saddle points (Dauphin et al., 2014). Current theoretical work focuses either on using noise Levy (2016); Jin et al. (2017) or curvature information (Allen-Zhu, 2017b) to establish bounds on the amount of time needed to escape saddle points. We noted that merely passing the gradient through the sign operation introduces an algorithmic instability close to saddle points, and we wanted to empirically investigate whether this could be enough to escape them. We removed stochasticity from the algorithm to focus purely on the effect of the sign function.
We found that when the objective function was axis aligned, then sign gradient descent without stochasticity (signGD) made progress unhindered by the saddles. We suggest that this is because signGD has a greater ability to ‘explore’, meaning it typically takes larger steps in regions of small gradient than SGD, and it can take steps almost orthogonal to the true gradient direction. This exploration ability could potentially allow it to break out of subspaces convergent on saddle points without sacrificing its convergence rate—we hypothesise that this may contribute to the often more robust practical performance of algorithms like Rprop and Adam, which bear closer relation to signSGD than SGD. For non axis-aligned objectives, signGD could sometimes get stuck in perfect periodic orbits around saddle points, though we hypothesise that this behaviour may be much less likely for higher dimensional objectives (the testbed function had dimension 10) with non-constant learning rate.
Finally we want to discuss the implications of our results for gradient quantisation schemes. Whilst we do not analyse the multi-machine case of distributed optimisation, we imagine that our results will extend naturally to that setting. In particular our results stand as a proof of concept that we can provide guarantees for biased gradient quantisation schemes. Existing quantisation schemes with guarantees require delicate randomisation to ensure unbiasedness. If a scheme as simple as ours can yield provable guarantees on convergence, then there is a hope that exploring further down this avenue can yield new and useful practical quantisation algorithms.
9 CONCLUSION
We have investigated the theoretical properties of the sign stochastic gradient method (signSGD) as an algorithm for non-convex optimisation. The study was motivated by links that the method has both to deep learning stalwarts like Adam and Rprop, as well as to newer quantisation algorithms that intend to cheapen the cost of gradient communication in distributed machine learning. We have proved non-convex convergence rates for signSGD to first order critical points. Insofar as the rates
can directly be compared, they are of the same order as SGD in terms of number of gradient evaluations, but worse by a linear factor in dimension. SignSGD has the advantage over existing gradient quantisation schemes with provable guarantees, in that it doesn’t need to employ randomisation tricks to remove bias from the quantised gradient.
We wish to propose some interesting directions for future work. First our analysis only looks at convergence to first order critical points. Whilst we present preliminary experiments exhibiting success and failure modes of the algorithm around saddle points, a more detailed study attempting to pin down exactly when we can expect signSGD to escape saddle points efficiently would be welcome. This is an interesting direction seeing as existing work always relies on either stochasticity or second order curvature information to avoid saddles. Second the link that signSGD has to both Adam-like algorithms and gradient quantisation schemes is enticing. In future work we intend to investigate whether this connection can be exploited to develop large scale machine learning algorithms that get the best of both worlds in terms of optimisation speed and communication efficiency.
A EXPERIMENTAL DETAILS
Here we describe the experimental setup for the CIFAR-10 (Krizhevsky, 2009) experiments using the Resnet-20 architecture (He et al., 2016). We tuned over {weight decay, momentum, initial learning rate} for optimisers in {SGD, signSGD, Adam}. We used our own implementation of each optimisation algorithm. Adam was implemented as in (Kingma & Ba, 2015) with β2 = 0.999 and ε = 10^{-8}, and β1 was tuned over. For both SGD and signSGD we used a momentum sequence
m_{k+1} = \beta\, m_k + (1 - \beta)\, \tilde g_k \qquad (11)
and then used the following updates:
SGD: \quad x_{k+1} = x_k - \delta_k\, m_{k+1} \qquad (12)
signSGD: \quad x_{k+1} = x_k - \delta_k\, \mathrm{sign}(m_{k+1}) \qquad (13)
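A minimal sketch of the signSGD-with-momentum update, following equations (11) and (13); the concrete numbers in the example call are made up:

import numpy as np

def signsgd_momentum_step(x, m, g_tilde, delta_k, beta):
    """One update: momentum buffer of equation (11), then the sign step of equation (13)."""
    m = beta * m + (1.0 - beta) * g_tilde
    x = x - delta_k * np.sign(m)
    return x, m

x, m = np.zeros(3), np.zeros(3)
x, m = signsgd_momentum_step(x, m, g_tilde=np.array([0.2, -0.7, 0.0]), delta_k=0.1, beta=0.9)
print(x, m)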
Weight decay was implemented in the traditional manner of augmenting the objective function with a quadratic penalty.
All other details not mentioned (learning rate schedules, network architecture, data augmentation, etc.) are as in (He et al., 2016). In particular for signSGD we did not use the learning rate or mini-batch schedules as provided by our theory. Code will be released if the paper is accepted.
B PROVING THE CONVERGENCE RATE OF THE SIGN GRADIENT METHOD
Theorem 1 (Non-convex convergence rate of signSGD). Apply Algorithm 1 under Assumptions 1, 2 and 3. Schedule the learning rate and mini-batch size as
\delta_k = \frac{\delta}{\sqrt{k+1}}, \qquad n_k = k + 1 \qquad (4)
Let N_K be the cumulative number of stochastic gradient calls up to step K, i.e. N_K = O(K^2). Then we have
E\Big[ \min_{0 \le k \le K-1} \|g_k\|_1 \Big]^2 \;\le\; \frac{1}{\sqrt{N_K}} \left[ \frac{f_0 - f_*}{\delta} + d\,\big(2 + \log(2 N_K - 1)\big)\,(\sigma + \delta L) \right]^2 \qquad (5)
Proof. Our general strategy will be to show that the expected objective improvement at each step will be good enough to guarantee a convergence rate in expectation. First let’s bound the improvement of the objective during a single step of the algorithm for one instantiation of the noise. Note that I[.] is the indicator function, and g_{k,i} denotes the ith component of the vector g_k. First use Assumption 2, plug in the step from Algorithm 1, and decompose the improvement to expose the stochasticity-induced error:
f_{k+1} - f_k \le g_k^T (x_{k+1} - x_k) + \frac{L}{2} \|x_{k+1} - x_k\|_2^2
= -\delta_k\, g_k^T \mathrm{sign}(\bar g_k) + \delta_k^2 \frac{L}{2} d
= -\delta_k \|g_k\|_1 + 2\delta_k \sum_{i=1}^{d} |g_{k,i}|\, I[\mathrm{sign}(\bar g_{k,i}) \neq \mathrm{sign}(g_{k,i})] + \delta_k^2 \frac{L}{2} d
Next we find the expected improvement at time k + 1 conditioned on the previous iterates.
E[f_{k+1} - f_k \mid x_k] \le -\delta_k \|g_k\|_1 + 2\delta_k \sum_{i=1}^{d} |g_{k,i}|\, P[\mathrm{sign}(\bar g_{k,i}) \neq \mathrm{sign}(g_{k,i})] + \delta_k^2 \frac{L}{2} d
Note that the expected improvement crucially depends on the probability that each component of the sign vector is correct. Intuition suggests that when the magnitude of the gradient |g_{k,i}| is much larger than the typical scale of the noise σ, then the sign of the stochastic gradient will most likely be correct. Mistakes will typically only be made when |g_{k,i}| is smaller than σ. We can make this intuition rigorous using Markov’s inequality and our variance bound on the noise (Assumption 3).
P[\mathrm{sign}(\bar g_{k,i}) \neq \mathrm{sign}(g_{k,i})] \le P[ |\bar g_{k,i} - g_{k,i}| \ge |g_{k,i}| ]   (relaxation)
\le \frac{E[ |\bar g_{k,i} - g_{k,i}| ]}{|g_{k,i}|}   (Markov’s inequality)
\le \frac{\sqrt{E[(\bar g_{k,i} - g_{k,i})^2]}}{|g_{k,i}|}   (Jensen’s inequality)
\le \frac{\sigma_k}{|g_{k,i}|}   (Assumption 3)
This says explicitly that the probability of the sign being incorrect is controlled by the relative scale of the noise to each component of the gradient magnitude. We denote the noise scale as σ_k since it refers to the stochastic gradient with a mini-batch size of n_k = k + 1. We can plug this result into the previous expression, take the sum over i, and substitute in our learning rate and mini-batch schedules as follows:
E[f_{k+1} - f_k \mid x_k] \le -\delta_k \|g_k\|_1 + 2\delta_k d\, \sigma_k + \delta_k^2 \frac{L}{2} d
= -\frac{\delta}{\sqrt{k+1}} \|g_k\|_1 + \frac{2\delta d\, \sigma}{k+1} + \frac{\delta^2}{k+1} \frac{L}{2} d
\le -\frac{\delta}{\sqrt{K}} \|g_k\|_1 + \frac{2\delta d}{k+1} (\sigma + \delta L)
In the last line we made some relaxations which will not affect the general scaling of the rate. Now take the expectation over the noise in all previous iterates, and sum over k:
f_0 - f_* \ge f_0 - E[f_K]   (Assumption 1)
= E\left[ \sum_{k=0}^{K-1} (f_k - f_{k+1}) \right]   (telescope)
\ge E\left[ \sum_{k=0}^{K-1} \left( \frac{\delta}{\sqrt{K}} \|g_k\|_1 - \frac{2\delta d}{k+1} (\sigma + \delta L) \right) \right]   (previous result)
\ge E\left[ \sum_{k=0}^{K-1} \frac{\delta}{\sqrt{K}} \|g_k\|_1 \right] - 2\delta d\, (1 + \log K)(\sigma + \delta L)   (harmonic sum)
We can rearrange this inequality to yield a rate:
E\left[ \min_{0 \le k \le K-1} \|g_k\|_1 \right] \le E\left[ \sum_{k=0}^{K-1} \frac{1}{K} \|g_k\|_1 \right]
\le \frac{1}{\sqrt{K}} \left[ \frac{f_0 - f_*}{\delta} + 2 d\, (1 + \log K)(\sigma + \delta L) \right]
Since we are growing our mini-batch size, it will take N_K = \frac{K(K+1)}{2} gradient evaluations to complete steps 0 through K-1. Using that N_K \le K^2 \le 2 N_K - 1 yields the result. For the sake of presentation, we take the final step of squaring the bound, to make it more comparable with the SGD bound.
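As a sanity check on the key estimate P[sign(ḡ_{k,i}) ≠ sign(g_{k,i})] ≤ σ_k/|g_{k,i}| used above, the following sketch compares the empirical flip probability to the bound; Gaussian noise is an illustrative choice only, since the proof needs nothing beyond the variance bound:

import numpy as np

rng = np.random.default_rng(5)
sigma_k, n_trials = 1.0, 200_000
for g_i in [0.2, 1.0, 5.0]:
    g_bar = g_i + sigma_k * rng.standard_normal(n_trials)      # noisy gradient component
    p_flip = np.mean(np.sign(g_bar) != np.sign(g_i))
    bound = sigma_k / abs(g_i)                                  # vacuous when it exceeds 1
    print(f"|g_i|={g_i}:  P[sign flip]={p_flip:.4f}  bound={bound:.3f}")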
C PROVING THE CONVERGENCE RATE OF STOCHASTIC GRADIENT DESCENT
Theorem 2 (Non-convex convergence rate of stochastic gradient descent). Apply Algorithm 2 under Assumptions 1, 2 and 3. Schedule the learning rate and mini-batch size as
\delta_k = \frac{\delta}{\sqrt{k+1}}, \qquad n_k = 1 \qquad (6)
Let N_K be the cumulative number of stochastic gradient calls up to step K, i.e. N_K = K. Then we have that
E\Big[ \min_{0 \le k \le K-1} \|g_k\|_2^2 \Big] \;\le\; \frac{1}{\sqrt{N_K}} \left[ \frac{f_0 - f_*}{\delta\left(1 - \frac{\delta L}{2}\right)} + \frac{d\,(1 + \log N_K)\,\frac{\delta L}{2}\,\sigma^2}{1 - \frac{\delta L}{2}} \right] \qquad (7)
Proof. Consider the objective improvement in a single step, under one instantiation of the noise. Use Assumption 2 followed by the definition of the algorithm.
f_{k+1} - f_k \le g_k^T (x_{k+1} - x_k) + \frac{L}{2} \|x_{k+1} - x_k\|_2^2
= -\delta_k\, g_k^T \bar g_k + \delta_k^2 \frac{L}{2} \|\bar g_k\|_2^2
Take the expectation conditioned on previous iterates, and decompose the mean squared stochastic gradient into its mean and variance. Note that since σ² is the variance bound for each component, the variance bound for the full vector will be dσ².
E[f_{k+1} - f_k \mid x_k] \le -\delta_k \|g_k\|_2^2 + \delta_k^2 \frac{L}{2} \left( \|g_k\|_2^2 + d\sigma^2 \right)
Plugging in the learning rate schedule, and using that \frac{1}{k+1} \le \frac{1}{\sqrt{k+1}}, we get that
E[f_{k+1} - f_k \mid x_k] \le -\frac{\delta}{\sqrt{k+1}} \|g_k\|_2^2 + \frac{\delta^2}{k+1} \frac{L}{2} \|g_k\|_2^2 + \frac{\delta^2}{k+1} \frac{L}{2} \sigma^2 d
\le -\frac{\delta}{\sqrt{k+1}} \|g_k\|_2^2 \left( 1 - \frac{\delta L}{2} \right) + \frac{\delta^2}{k+1} \frac{L}{2} \sigma^2 d
Take the expectation over xk, sum over k, and we get that
f_0 - f_* \ge f_0 - E[f_K]
= E\left[ \sum_{k=0}^{K-1} (f_k - f_{k+1}) \right]
\ge \sum_{k=0}^{K-1} \left( \frac{\delta}{\sqrt{k+1}}\, E\big[\|g_k\|_2^2\big] \left( 1 - \frac{\delta L}{2} \right) - \frac{\delta^2}{k+1} \frac{L}{2} \sigma^2 d \right)
\ge \frac{K}{\sqrt{K}}\, \delta\, E\left[ \min_{0 \le k \le K-1} \|g_k\|_2^2 \right] \left( 1 - \frac{\delta L}{2} \right) - \sum_{k=0}^{K-1} \frac{\delta^2}{k+1} \frac{L}{2} \sigma^2 d
\ge \delta \sqrt{K}\, E\left[ \min_{0 \le k \le K-1} \|g_k\|_2^2 \right] \left( 1 - \frac{\delta L}{2} \right) - (1 + \log K)\, \delta^2 \frac{L}{2} \sigma^2 d
And rearranging yields the result. | 1. What are the major flaws in the paper's presentation of convergence rate for quantized SGD?
2. How does the paper's result compare to previous works in terms of the notion of convergence and its ratio?
3. Are there any issues with the paper's attempts at intuitive explanations and assumptions?
4. How do the experiments in the paper fare in terms of conclusiveness and accuracy? | Review | Review
The paper presents convergence rate of a quantized SGD, with biased quantization - simply taking a sign of each element of gradient.
The stated Theorem 1 is incorrect. Even if the stated result were correct, it presents a much worse rate for a weaker notion of convergence.
Major flaws:
1. As far as I can see, Theorem 1 should depend on the 4th root of N_K; the last (omitted) step of the proof is done incorrectly. This makes it much worse than presented.
2. Even if this were correct, the main point is that this is "only" d times worse - see eq (11). That is an enormous difference, particularly in settings where such gradient compression can be relevant. Also, it is a lot worse than just d times:
3. Again in eq (11), you compare different notions of convergence - E[||g||_1]^2 vs. E[||g||_2^2]. In particular, the one for signSGD is the weaker notion - squared L1 norm can be d times bigger again. If this is not the case for some reason, more detailed explanation is needed.
Other than that, the paper contains several attempts at intuitive explanation, which I don't find correct. Inclusion of Assumption 3 would in particular require better justification.
Experiments are also inconclusive, as the plots show convergence to significantly worse accuracy than what the models converged to in original contributions. |
ICLR | Title
Convergence rate of sign stochastic gradient descent for non-convex functions
Abstract
The sign stochastic gradient descent method (signSGD) utilises only the sign of the stochastic gradient in its updates. For deep networks, this one-bit quantisation has surprisingly little impact on convergence speed or generalisation performance compared to SGD. Since signSGD is effectively compressing the gradients, it is very relevant for distributed optimisation where gradients need to be aggregated from different processors. What’s more, signSGD has close connections to common deep learning algorithms like RMSprop and Adam. We study the base theoretical properties of this simple yet powerful algorithm. For the first time, we establish convergence rates for signSGD on general non-convex functions under transparent conditions. We show that the rate of signSGD to reach first-order critical points matches that of SGD in terms of number of stochastic gradient calls, but loses out by roughly a linear factor in the dimension for general non-convex functions. We carry out simple experiments to explore the behaviour of sign gradient descent (without the stochasticity) close to saddle points and show that it can help to completely avoid certain kinds of saddle points without using either stochasticity or curvature information.
1 INTRODUCTION
Deep neural network training takes place in an error landscape that is high-dimensional, non-convex and stochastic. In practice, simple optimization techniques perform surprisingly well but have very limited theoretical understanding. While stochastic gradient descent (SGD) is widely used, algorithms like Adam (Kingma & Ba, 2015), RMSprop (Tieleman & Hinton, 2012) and Rprop (Riedmiller & Braun, 1993) are also popular. These latter algorithms involve component-wise rescaling of gradients, and so bear closer relation to signSGD than SGD. Currently, convergence rates have only been derived for close variants of SGD for general non-convex functions, and indeed the Adam paper gives convex theory.
Recently, another class of optimization algorithms has emerged which also pays attention to the resource requirements for training, in addition to obtaining good performance. Primarily, they focus on reducing costs for communicating gradients across different machines in a distributed training environment (Seide et al., 2014; Strom, 2015; Li et al., 2016; Alistarh et al., 2017; Wen et al., 2017). Often, the techniques involve quantizing the stochastic gradients at radically low numerical precision. Empirically, it was demonstrated that one can get away with using only one-bit per dimension without losing much accuracy (Seide et al., 2014; Strom, 2015). The theoretical properties of these approaches are however not well-understood. In particular, it was not known until now how quickly signSGD (the simplest incarnation of one-bit SGD) converges or even whether it converges at all to the neighborhood of a meaningful solution.
Our contribution: we supply the non-convex rate of convergence to first order critical points for signSGD. The algorithm updates parameter vector xk according to
xk+1 = xk ksign(ḡk) (1)
where ḡk is the mini-batch stochastic gradient and k is the learning rate. We show that for nonconvex problems, signSGD entertains convergence rates as good as SGD, up to a linear factor in the dimension. Our statements impose a particular learning rate and mini-batch schedule.
Ours is the first work to provide non-convex convergence rates for a biased quantisation procedure as far as we know, and therefore does not require the randomisation that other gradient quantisation algorithms need to ensure unbiasedness. The technical challenge we overcome is in showing how to carry the stochasticity in the gradient through the sign non-linearity of the algorithm in a controlledfashion.
Whilst our analysis is for first order critical points, we experimentally test the performance of sign gradient descent without stochasticity (signGD) around saddle points. We removed stochasticity in order to investigate whether signGD has an inherent ability to escape saddle points, which would suggest superiority over gradient descent (GD) which can take exponential time to escape saddle points if it gets too close to them (Du et al., 2017).
In our work we make three assumptions. Informally, we assume that the objective function is lowerbounded, smooth, and that each component of the stochastic gradient has bounded variance. These assumptions are very general and hold for a much wider class of functions than just the ones encountered in deep learning.
Outline of paper: in Sections 3, 4 and 5 we give non-convex theory of signSGD. In Section 6 we experimentally test the ability of the signGD (without the S) to escape saddle points. And in Section 7 we pit signSGD against SGD and Adam on CIFAR-10.
2 RELATED WORK
Deep learning: the prototypical optimisation algorithm for neural networks is stochastic gradient descent (SGD)—see Algorithm 2. The deep learning community has discovered many practical tweaks to ease the training of large neural network models. In Rprop (Riedmiller & Braun, 1993) each weight update ignores the magnitude of the gradient and pays attention only to the sign, bringing it close to signSGD. It differs in that the learning rate for each component is modified depending on the consistency of the sign of consecutive steps. RMSprop (Tieleman & Hinton, 2012) is Rprop adapted for the minibatch setting—instead of dividing each component of the gradient by its magnitude, the authors estimate the rescaling factor as an average over recent iterates. Adam (Kingma & Ba, 2015) is RMSprop with momentum, meaning both gradient and gradient rescaling factors are estimated as bias-corrected averages over iterates. Indeed switching off the averaging in Adam yields signSGD. These algorithms have been applied to a breadth of interesting practical problems, e.g. (Xu et al., 2015; Gregor et al., 2015).
In an effort to characterise the typical deep learning error landscape, Dauphin et al. (2014) frame the primary obstacle to neural network training as the proliferation of saddle points in high dimensional objectives. Practitioners challenge this view, suggesting that saddle points may be seldom encountered at least in retrospectively successful applications of deep learning (Goodfellow et al., 2015).
Optimisation theory: in convex optimisation there is a natural notion of success—rate of convergence to the global minimum x⇤. Convex optimisation is eased by the fact that local information in the gradient provides global information about the direction towards the minimum, i.e. rf(x) tells you information about x⇤ x. In non-convex problems finding the global minimum is in general intractable, so theorists usually settle for measuring some restricted notion of success, such as rate of convergence to stationary points (e.g. Allen-Zhu (2017a)) or local minima (e.g. Nesterov & Polyak (2006)). Given the importance placed by Dauphin et al. (2014) upon evading saddle points, recent work considers the efficient use of noise (Jin et al., 2017; Levy, 2016) and curvature information (Allen-Zhu, 2017b) to escape saddle points and find local minima.
Distributed machine learning: whilst Rprop and Adam were proposed by asking how we can use gradient information to make better optimisation steps, another school asks how much information can we throw away from the gradient and still converge at all. Seide et al. (2014); Strom (2015) demonstrated empirically that one-bit quantisation can still give good performance whilst dramatically reducing gradient communication costs in distributed systems. Convergence properties of quantized stochastic gradient methods remain largely unknown. Alistarh et al. (2017) provide convergence rates for quantisation schemes that are unbiased estimators of the true gradient, and are
thus able to rely upon vanilla SGD convergence results. Wen et al. (2017) prove asymptotic convergence of a { 1, 0, 1} ternary quantization scheme that also retains the unbiasedness of the stochastic gradient. Our proposed approach is different, in that we directly employ the sign gradient which is biased. This avoids the randomization needed for constructing an unbiased quantized estimate. To the best of our knowledge, the current work is the first to establish a convergence rate for a biased quantisation scheme, and our proof differs to that of vanilla SGD.
Parallel work: signSGD is related to both attempts to improve gradient descent like Rprop and Adam, and attempts to damage it but not too badly like quantised SGD. After submitting we became aware that Anonymous (2018) also made this link in a work submitted to the same conference. Our work gives non-convex theory of signSGD, whereas their work analyses Adam in greater depth, but only in the convex world.
3 ASSUMPTIONS
Assumption 1 (The objective function is bounded below). For all x and some constant f⇤, the objective function satisfies
f(x) f⇤ (2)
Remark: this assumption applies to every practical objective function that we are aware of.
Assumption 2 (The objective function is L-Lipschitz smooth). Let g(x) denote the gradient of the objective f(.) evaluated at point x. Then for every y we assume that
f(y) ⇥ f(x) + g(x) T (y x) ⇤ L
2
ky xk22 (3)
Remark: this assumption allows us to measure the error in trusting the local linearisation of our objective, which will be useful for bounding the error in a single step of the algorithm. For signSGD we can actually relax this assumption to only hold only for y within a local neighbourhood of x, since signSGD takes steps of bounded size.
Assumption 3 (Stochastic gradient oracle). Upon receiving query x, the stochastic gradient oracle gives us an independent estimate ĝ satisfying
E[ĝ(x)] = g(x), Var(ĝ(x)[i]) 2 8i = 1, ..., d.
Remark: this assumption is standard for stochastic optimization, except that the variance upper bound is now stated for every dimension separately. A realization of the above oracle is to choose a data point uniformly at random, and to evaluate its gradient at point x. In the algorithm, we will be working with a minibatch of size nk in the kth iteration, and the corresponding minibatch stochastic gradient is modeled as the average of nk calls of the above stochastic gradient oracle at xk. Therefore in this case the variance bound is squashed to 2/nk.
4 NON-CONVEX CONVERGENCE RATE OF SIGNSGD
Informally, our primary result says that if we run signSGD with the prescribed learning rate and mini-batch schedules, then after N stochastic gradient evaluations, we should expect that somewhere along the optimisation trajectory will be a place with gradient 1-norm smaller than O(N 0.25). This matches the non-convex SGD rate, insofar as they can be compared, and ignoring all (dimensiondependent!) constants.
Before we dive into the theorems, here’s a refresher on our notation—deep breath—gk is the gradient at step k, f⇤ is the lower bound on the objective function, f0 is the initial value of the objective function, d is the dimension of the space, K is the total number of iterations, NK is the cumulative number of stochastic gradient calls at step K, is the intrinsic variance-proxy for each component of the stochastic gradient, and finally L is the maximum curvature (see Assumption 2).
Algorithm 1 Sign stochastic gradient descent (signSGD) 1: Inputs: x0, K . initial point and time budget 2: for k 2 [0,K 1] do 3: k learningRate(k) 4: nk miniBatchSize(k) 5: ḡk 1nk Pnk i=1 stochasticGradient(xk)
6: xk+1 xk ksign(ḡk) . the sign operation is element-wise
Algorithm 2 Stochastic gradient descent 1: Inputs: x0, K . initial point and time budget 2: for k 2 [0,K 1] do 3: k learningRate(k) 4: nk miniBatchSize(k) 5: ĝk 1nk Pnk i=1 stochasticGradient(xk)
6: xk+1 xk kḡk
Theorem 1 (Non-convex convergence rate of signSGD). Apply Algorithm 1 under Assumptions 1, 2 and 3. Schedule the learning rate and mini-batch size as
k = p k + 1
nk = k + 1 (4)
Let NK be the cumulative number of stochastic gradient calls up to step K, i.e. NK = O(K2) Then we have
E min
0kK 1 kgkk1
2 1p
NK 2
f0 f⇤
+ d(2 + log(2NK 1))( + L)
2 (5)
Theorem 2 (Non-convex convergence rate of stochastic gradient descent). Apply Algorithm 2 under Assumptions 1, 2 and 3. Schedule the learning rate and mini-batch size as
k = p k + 1
nk = 1 (6)
Let NK be the cumulative number of stochastic gradient calls up to step K, i.e. NK = K. Then we have that
E min
0kK 1 kgkk22 1p
NK
" f0 f⇤
1 L2
+ d(1 + logNK)
L 2
1 L2
2 # (7)
The proofs are deferred to Appendix B and here we sketch the intuition for Theorem 1. First consider the non-stochastic case: we know that if we take lots of steps for which the gradient is large, we will make lots of progress downhill. But since the objective function has a lower bound, it is impossible to keep taking large gradient steps downhill indefinitely, therefore increasing the number of steps requires that we must run into somewhere with small gradient.
To get a handle on this analytically, we must bound the per-step improvement in terms of the norm of the gradient. Assumption 2 allows us to do exactly this. Then we know that the sum of the per-step improvements over all steps must be smaller than the total possible improvement, and that gives us a bound on how large the minimum gradient that we see can be.
In the non-stochastic case, the obstacle to this process is curvature. Curvature means that if we take too large a step the gradient becomes unreliable, and we might move uphill instead of downhill. Since the step size in signSGD is set purely by the learning rate, this means we must anneal the learning rate if we wish to be sure to control the curvature-induced error and make good progress downhill. Stochasticity also poses a problem in signSGD. In regions where the gradient signal is
smaller than the noise, the noise is enough to flip the sign of the gradient. This is more severe than the additive noise in SGD, and so the batch size must be grown to control this effect.
You might expect that growing the batch size should lead to a worse convergence rate than SGD. This is forgetting that signSGD has an advantage in that it takes large steps even when the gradient is small. It turns out that this positive effect cancels out the fact that the batch size needs to grow, and the convergence rate ends up being the same as SGD.
For completeness, we also present the convergence rate for SGD derived under our assumptions. The proof is given in Appendix C. Note that this appears to be a classic result, although we are not sure of the earliest reference. Authors often hide the dimension dependence of the variance bound. SGD does not require an increasing batch size since the effect of the noise is second order in the learning rate, and therefore gets squashed as the learning rate decays. The rate ends up being the same in NK as signSGD because SGD makes slower progress when the gradient is small.
5 COMPARING THE CONVERGENCE RATE TO SGD
To make a clean comparison, let us set = 1L (as is often recommended) and hide all numerical constants in Theorems 1 and 2. Then for signSGD, we get
E h minkgkk1 i2 ⇠ 1p
N
h L(f0 f⇤) + d( + 1) logN i2 ; (8)
and for SGD we get
E h minkgkk22 i ⇠ 1p
N
h L(f0 f⇤) + d 2 logN i (9)
where ⇠ denotes general scaling. What do these bounds mean? They say that after we have made a cumulative number of stochastic gradient evaluations N , that we should expect somewhere along our trajectory to have hit a point with gradient norm smaller than N 14 .
One important remark should be made. SignSGD more naturally deals with the one norm of the gradient vector, hence we had to square the bound to enable direct comparison with SGD. This means that the constant factor in signSGD is roughly worse by a square. Paying attention only to dimension, this looks like
signSGD: E h minkgkk1 i2 ⇠ d 2
p N
SGD: E h minkgkk22 i ⇠ dp
N
(10)
This defect in dimensionality should be expected in the bound, since signSGD almost never takes the direction of steepest descent, and the direction only gets worse as dimensionality grows. This raises the question, why do algorithms like Adam, which closely resemble signSGD, work well in practice?
Whilst answering this question fully is beyond the scope of this paper, we want to point out one important detail. Whilst the signSGD bound is worse by a factor d, it is also making a statement about the 1-norm of the gradient. Since the 1-norm of the gradient is always larger than the 2- norm, the signSGD bound is stronger in this respect. Indeed, if the gradient is distributed roughly uniformly across all dimensions, then the squared 1-norm is roughly d times bigger than the squared 2-norm, i.e.
kgkk21 ⇠ dkgkk 2 2
and in this limit both SGD and signSGD have a bound that scales as dp N .
6 SWINGING BY SADDLE POINTS? AN EXPERIMENT
Seeing as our theoretical analysis only deals with convergence to stationary points, it does not address how signSGD might behave around saddle points. We wanted to investigate the naı̈ve intuition that gradient rescaling should help flee saddle points—or in the words of Zeyuan Allen-Zhu—swing by them.
For a testbed, the authors of (Du et al., 2017) kindly provided their 10-dimensional ‘tube’ function. The tube is a specially arranged gauntlet of saddle points, each with only one escape direction, that must be navigated in sequence before reaching the global minimum of the objective. The tube was designed to demonstrate how stochasticity can help escape saddles. Gradient descent takes much longer to navigate the tube than perturbed gradient descent of (Jin et al., 2017). It is interesting to ask, even empirically, whether the sign non-linearity in signSGD can also help escape saddle points efficiently. For this reason we strip out the stochasticity and pit the sign gradient descent method (signGD) against the tube function.
There are good reasons to expect that signGD might help escape saddles—for one, it takes large steps even when the gradient is small, which could drive the method away from regions of small gradient. For another, it is able to move in directions orthogonal to the gradient, which might help discover escape directions of the saddle. We phrase this as signGD’s greater ability to explore.
Our experiments revealed that these intuitions sometimes hold out, but there are cases where they break down. In Figure 1, we compare the sign gradient method against gradient descent, perturbed gradient descent (Jin et al., 2017) and rescaled gradient descent
⇣ xk+1 = xk gkgk2 ⌘ which is a
noiseless version of the algorithm considered in (Levy, 2016). No learning rate tuning was conducted, so we suggest paying attention to the qualitative behaviour rather than the ultimate convergence speed. The left hand plot pits the algorithms against the vanilla tube function. SignGD has very different qualitative behaviour to the other algorithms—it appears to make progress completely unimpeded by the saddles. We showed that this behaviour is partly due to the axis alignment of the tube function, since after randomly rotating the objective the behaviour changes (although it is still qualitatively different to the other algorithms).
One unexpected result was that for certain random rotations of the objective, signGD could get stuck at saddle points (see right panel in Figure 1). On closer inspection, we found that the algorithm was getting stuck in perfect periodic orbits around the saddle. Since the update is given by the learning rate multiplied by a binary vector, if the learning rate is constant it is perfectly possible for a sequence of updates to sum to zero. We expect that this behaviour relies on a remarkable structure in both the tube function and the algorithm. We hypothesise that for higher dimensional objectives and a non-fixed learning rate, this phenomenon might become extremely unlikely. This seems like a worthy direction of future research. Indeed we found empirically that introducing momentum into the update rule was enough to break the symmetry and avoid this periodic behaviour.
7 CIFAR-10 EXPERIMENTS
To compare SGD, signSGD and Adam on less of a toy problem, we ran a large grid search over hyperparameters for training Resnet-20 (He et al., 2016) on the CIFAR-10 dataset (Krizhevsky, 2009). Results are plotted in Figure 2. We evaluate over the hyperparamater 3-space of (initial learning rate, weight decay, momentum), and plot slices to demonstrate the general robustness of each algorithm. We find that, as expected, signSGD and Adam have broadly similar performance. For hyperparameter configurations where SGD is stable, it appears to perform better than Adam and signSGD. But Adam and signSGD appear more robust up to larger learning rates. Full experimental details are given in Appendix A.
8 DISCUSSION
First we wish to discuss the connections between signSGD and Adam (Kingma & Ba, 2015). Note that setting the Adam hyperparameters 1 = 2 = ✏ = 0, Adam and signSGD are equivalent. Indeed the authors of the Adam paper suggest that during optimisation the Adam step will commonly look like a binary vector of ±1 (multiplied by the learning rate) and thus resemble the sign gradient step. If this algorithmic correspondence is valid, then there seems to be a discrepancy between our theoretical results and the empirical good performance of Adam. Our convergence rates suggest that signSGD should be worse than SGD by roughly a factor of dimension d. In deep neural network applications d can easily be larger than 106. We suggest a resolution to this proposed discrepancy—there is structure present in deep neural network error surfaces that is not captured by our simplistic theoretical assumptions. We have already discussed in Section 5 how the signSGD bound is improved by a factor d in the case of gradients distributed uniformly across dimensions. It is also reasonable to expect that neural network error surfaces might exhibit only weak coupling across dimensions. To provide intuition for how such an assumption can help improve the dimension scaling of signSGD, note that in the idealised case of total decoupling (the Hessian is everywhere diagonal) then the problem separates into d independent one dimensional problems, so the dimension dependence is lost.
Next, let’s talk about saddle points. Though general non-convex functions are littered with local minima, recent work rather characterises successful optimisation as the evasion of a web of saddle points (Dauphin et al., 2014). Current theoretical work focuses either on using noise Levy (2016); Jin et al. (2017) or curvature information (Allen-Zhu, 2017b) to establish bounds on the amount of time needed to escape saddle points. We noted that merely passing the gradient through the sign operation introduces an algorithmic instability close to saddle points, and we wanted to empirically investigate whether this could be enough to escape them. We removed stochasticity from the algorithm to focus purely on the effect of the sign function.
We found that when the objective function was axis aligned, then sign gradient descent without stochasticity (signGD) made progress unhindered by the saddles. We suggest that this is because signGD has a greater ability to ‘explore’, meaning it typically takes larger steps in regions of small gradient than SGD, and it can take steps almost orthogonal to the true gradient direction. This exploration ability could potentially allow it to break out of subspaces convergent on saddle points without sacrificing its convergence rate—we hypothesise that this may contribute to the often more robust practical performance of algorithms like Rprop and Adam, which bear closer relation to signSGD than SGD. For non axis-aligned objectives, signGD could sometimes get stuck in perfect periodic orbits around saddle points, though we hypothesise that this behaviour may be much less likely for higher dimensional objectives (the testbed function had dimension 10) with non-constant learning rate.
Finally we want to discuss the implications of our results for gradient quantisation schemes. Whilst we do not analyse the multi-machine case of distributed optimisation, we imagine that our results will extend naturally to that setting. In particular our results stand as a proof of concept that we can provide guarantees for biased gradient quantisation schemes. Existing quantisation schemes with guarantees require delicate randomisation to ensure unbiasedness. If a scheme as simple as ours can yield provable guarantees on convergence, then there is a hope that exploring further down this avenue can yield new and useful practical quantisation algorithms.
9 CONCLUSION
We have investigated the theoretical properties of the sign stochastic gradient method (signSGD) as an algorithm for non-convex optimisation. The study was motivated by links that the method has both to deep learning stalwarts like Adam and Rprop, as well as to newer quantisation algorithms that intend to cheapen the cost of gradient communication in distributed machine learning. We have proved non-convex convergence rates for signSGD to first order critical points. Insofar as the rates
can directly be compared, they are of the same order as SGD in terms of number of gradient evaluations, but worse by a linear factor in dimension. SignSGD has the advantage over existing gradient quantisation schemes with provable guarantees, in that it doesn’t need to employ randomisation tricks to remove bias from the quantised gradient.
We wish to propose some interesting directions for future work. First our analysis only looks at convergence to first order critical points. Whilst we present preliminary experiments exhibiting success and failure modes of the algorithm around saddle points, a more detailed study attempting to pin down exactly when we can expect signSGD to escape saddle points efficiently would be welcome. This is an interesting direction seeing as existing work always relies on either stochasticity or second order curvature information to avoid saddles. Second the link that signSGD has to both Adam-like algorithms and gradient quantisation schemes is enticing. In future work we intend to investigate whether this connection can be exploited to develop large scale machine learning algorithms that get the best of both worlds in terms of optimisation speed and communication efficiency.
A EXPERIMENTAL DETAILS
Here we describe the experimental setup for the CIFAR-10 (Krizhevsky, 2009) experiments using the Resnet-20 architecture (He et al., 2016). We tuned over {weight decay, momentum, initial learning rate} for optimisers in {SGD, signSGD, Adam}. We used our own implementation of each optimisation algorithm. Adam was implemented as in (Kingma & Ba, 2015) with \beta_2 = 0.999 and \epsilon = 10^{-8}, and \beta_1 was tuned over. For both SGD and signSGD we used a momentum sequence
m_{k+1} = \beta m_k + (1 - \beta)\,\tilde g_k \qquad (11)
and then used the following updates:
\mathrm{SGD}: \quad x_{k+1} = x_k - \delta_k m_{k+1} \qquad (12)
\mathrm{signSGD}: \quad x_{k+1} = x_k - \delta_k\,\mathrm{sign}(m_{k+1}) \qquad (13)
Weight decay was implemented in the traditional manner of augmenting the objective function with a quadratic penalty.
All other details not mentioned (learning rate schedules, network architecture, data augmentation, etc.) are as in (He et al., 2016). In particular for signSGD we did not use the learning rate or mini-batch schedules as provided by our theory. Code will be released if the paper is accepted.
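For concreteness, here is a minimal sketch (our own, not the released code) of one iteration of the momentum variants in equations (11)-(13), with weight decay applied as a quadratic penalty; the function name and signature are illustrative only:

import numpy as np

def momentum_step(x, m, stoch_grad, lr, beta, weight_decay):
    # stoch_grad(x) is assumed to return a stochastic gradient estimate at x.
    g = stoch_grad(x) + weight_decay * x    # quadratic penalty on the objective
    m = beta * m + (1.0 - beta) * g         # eq. (11): momentum sequence
    x_sgd = x - lr * m                      # eq. (12): SGD update
    x_signsgd = x - lr * np.sign(m)         # eq. (13): signSGD update
    return x_sgd, x_signsgd, m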
B PROVING THE CONVERGENCE RATE OF THE SIGN GRADIENT METHOD
Theorem 1 (Non-convex convergence rate of signSGD). Apply Algorithm 1 under Assumptions 1, 2 and 3. Schedule the learning rate and mini-batch size as

\delta_k = \frac{\delta}{\sqrt{k+1}}, \qquad n_k = k + 1 \qquad (4)

Let N_K be the cumulative number of stochastic gradient calls up to step K, i.e. N_K = O(K^2). Then we have

\mathbb{E}\Big[\min_{0 \le k \le K-1} \|g_k\|_1\Big]^2 \;\le\; \frac{1}{\sqrt{N_{K-2}}} \Big[\frac{f_0 - f_*}{\delta} + d\,\big(2 + \log(2N_{K-1})\big)(\sigma + \delta L)\Big]^2 \qquad (5)
Proof. Our general strategy will be to show that the expected objective improvement at each step will be good enough to guarantee a convergence rate in expectation. First let’s bound the improvement of the objective during a single step of the algorithm for one instantiation of the noise. Note that \mathbb{I}[\cdot] is the indicator function, and g_{k,i} denotes the i-th component of the vector g_k. First use Assumption 2, plug in the step from Algorithm 1, and decompose the improvement to expose the stochasticity-induced error:

f_{k+1} - f_k \le g_k^T (x_{k+1} - x_k) + \frac{L}{2}\|x_{k+1} - x_k\|_2^2
= -\delta_k g_k^T \mathrm{sign}(\bar g_k) + \delta_k^2 \frac{L}{2} d
= -\delta_k \|g_k\|_1 + 2\delta_k \sum_{i=1}^{d} |g_{k,i}|\, \mathbb{I}[\mathrm{sign}(\bar g_{k,i}) \ne \mathrm{sign}(g_{k,i})] + \delta_k^2 \frac{L}{2} d
Next we find the expected improvement at time k + 1 conditioned on the previous iterates.
\mathbb{E}[f_{k+1} - f_k \mid x_k] \le -\delta_k \|g_k\|_1 + 2\delta_k \sum_{i=1}^{d} |g_{k,i}|\, \mathbb{P}[\mathrm{sign}(\bar g_{k,i}) \ne \mathrm{sign}(g_{k,i})] + \delta_k^2 \frac{L}{2} d

Note that the expected improvement crucially depends on the probability that each component of the sign vector is correct. Intuition suggests that when the magnitude of the gradient |g_{k,i}| is much larger than the typical scale of the noise \sigma, then the sign of the stochastic gradient will most likely be correct. Mistakes will typically only be made when |g_{k,i}| is smaller than \sigma. We can make this intuition rigorous using Markov’s inequality and our variance bound on the noise (Assumption 3).

\mathbb{P}[\mathrm{sign}(\bar g_{k,i}) \ne \mathrm{sign}(g_{k,i})] \le \mathbb{P}[|\bar g_{k,i} - g_{k,i}| \ge |g_{k,i}|] \quad (relaxation)
\le \frac{\mathbb{E}[|\bar g_{k,i} - g_{k,i}|]}{|g_{k,i}|} \quad (Markov’s inequality)
\le \frac{\sqrt{\mathbb{E}[(\bar g_{k,i} - g_{k,i})^2]}}{|g_{k,i}|} \quad (Jensen’s inequality)
\le \frac{\sigma_k}{|g_{k,i}|} \quad (Assumption 3)

This says explicitly that the probability of the sign being incorrect is controlled by the relative scale of the noise to each component of the gradient magnitude. We denote the noise scale as \sigma_k since it refers to the stochastic gradient with a mini-batch size of n_k = k + 1. We can plug this result into the previous expression, take the sum over i, and substitute in our learning rate and mini-batch schedules as follows:
\mathbb{E}[f_{k+1} - f_k \mid x_k] \le -\delta_k \|g_k\|_1 + 2\delta_k d \sigma_k + \delta_k^2 \frac{L}{2} d
= -\frac{\delta}{\sqrt{k+1}} \|g_k\|_1 + \frac{2\delta\sigma d}{k+1} + \frac{\delta^2}{k+1} \frac{L}{2} d
\le -\frac{\delta}{\sqrt{K}} \|g_k\|_1 + \frac{2\delta d}{k+1} (\sigma + \delta L)
In the last line we made some relaxations which will not affect the general scaling of the rate. Now take the expectation over the noise in all previous iterates, and sum over k:
f_0 - f_* \ge f_0 - \mathbb{E}[f_K] \quad (Assumption 1)
= \mathbb{E}\left[\sum_{k=0}^{K-1} f_k - f_{k+1}\right] \quad (telescope)
\ge \mathbb{E}\left[\sum_{k=0}^{K-1} \frac{\delta}{\sqrt{K}} \|g_k\|_1 - \frac{2\delta d}{k+1}(\sigma + \delta L)\right] \quad (previous result)
\ge \mathbb{E}\left[\sum_{k=0}^{K-1} \frac{\delta}{\sqrt{K}} \|g_k\|_1\right] - 2\delta d (1 + \log K)(\sigma + \delta L) \quad (harmonic sum)
We can rearrange this inequality to yield a rate:
\mathbb{E}\left[\min_{0 \le k \le K-1} \|g_k\|_1\right] \le \mathbb{E}\left[\sum_{k=0}^{K-1} \frac{1}{K} \|g_k\|_1\right] \le \frac{1}{\sqrt{K}}\left[\frac{f_0 - f_*}{\delta} + 2d(1 + \log K)(\sigma + \delta L)\right]

Since we are growing our mini-batch size, it will take N_{K-1} = \frac{K(K+1)}{2} gradient evaluations to reach step K-1. Using that 2N_{K-2} \le K^2 \le 2N_{K-1} yields the result. For the sake of presentation, we take the final step of squaring the bound, to make it more comparable with the SGD bound.
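As a quick numerical illustration of the key probability bound used above (not part of the original paper, and assuming Gaussian noise, which is stronger than the variance bound of Assumption 3), the following sketch compares the empirical sign-error rate of a single gradient component against the \sigma_k / |g_{k,i}| bound:

import numpy as np

rng = np.random.default_rng(0)
g, sigma, trials = 0.5, 0.3, 200_000           # true gradient component and noise scale
noisy = g + sigma * rng.standard_normal(trials)
empirical = np.mean(np.sign(noisy) != np.sign(g))
bound = min(sigma / abs(g), 1.0)               # Markov/Jensen bound from the proof
print(f"empirical sign-error rate {empirical:.3f} <= bound {bound:.3f}")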
C PROVING THE CONVERGENCE RATE OF STOCHASTIC GRADIENT DESCENT
Theorem 2 (Non-convex convergence rate of stochastic gradient descent). Apply Algorithm 2 under Assumptions 1, 2 and 3. Schedule the learning rate and mini-batch size as

\delta_k = \frac{\delta}{\sqrt{k+1}}, \qquad n_k = 1 \qquad (6)

Let N_K be the cumulative number of stochastic gradient calls up to step K, i.e. N_K = K. Then we have that

\mathbb{E}\Big[\min_{0 \le k \le K-1} \|g_k\|_2^2\Big] \;\le\; \frac{1}{\sqrt{N_K}} \Big[\frac{f_0 - f_*}{\delta\,(1 - \frac{\delta L}{2})} + d\,(1 + \log N_K)\,\frac{\delta L \sigma^2 / 2}{1 - \frac{\delta L}{2}}\Big] \qquad (7)
Proof. Consider the objective improvement in a single step, under one instantiation of the noise. Use Assumption 2 followed by the definition of the algorithm.
f_{k+1} - f_k \le g_k^T (x_{k+1} - x_k) + \frac{L}{2}\|x_{k+1} - x_k\|_2^2
= -\delta_k g_k^T \bar g_k + \delta_k^2 \frac{L}{2} \|\bar g_k\|_2^2
Take the expectation conditioned on previous iterates, and decompose the mean squared stochastic gradient into its mean and variance. Note that since \sigma^2 is the variance bound for each component, the variance bound for the full vector will be d\sigma^2.

\mathbb{E}[f_{k+1} - f_k \mid x_k] \le -\delta_k \|g_k\|_2^2 + \delta_k^2 \frac{L}{2}\left(\|g_k\|_2^2 + d\sigma^2\right)

Plugging in the learning rate schedule, and using that \frac{1}{k+1} \le \frac{1}{\sqrt{k+1}}, we get that
\mathbb{E}[f_{k+1} - f_k \mid x_k] \le -\frac{\delta}{\sqrt{k+1}} \|g_k\|_2^2 + \frac{\delta^2}{k+1} \frac{L}{2} \|g_k\|_2^2 + \frac{\delta^2}{k+1} \frac{L}{2} \sigma^2 d
\le -\frac{\delta}{\sqrt{k+1}} \|g_k\|_2^2 \left(1 - \frac{\delta L}{2}\right) + \frac{\delta^2}{k+1} \frac{L}{2} \sigma^2 d
Take the expectation over xk, sum over k, and we get that
f_0 - f_* \ge f_0 - \mathbb{E}[f_K]
= \mathbb{E}\left[\sum_{k=0}^{K-1} f_k - f_{k+1}\right]
\ge \sum_{k=0}^{K-1} \frac{\delta}{\sqrt{k+1}} \mathbb{E}\left[\|g_k\|_2^2\right]\left(1 - \frac{\delta L}{2}\right) - \frac{\delta^2}{k+1} \frac{L}{2} \sigma^2 d
\ge \frac{K\delta}{\sqrt{K}}\, \mathbb{E}\left[\min_{0 \le k \le K-1} \|g_k\|_2^2\right]\left(1 - \frac{\delta L}{2}\right) - \sum_{k=0}^{K-1} \frac{\delta^2}{k+1} \frac{L}{2} \sigma^2 d
\ge \delta\sqrt{K}\, \mathbb{E}\left[\min_{0 \le k \le K-1} \|g_k\|_2^2\right]\left(1 - \frac{\delta L}{2}\right) - (1 + \log K)\, \frac{\delta^2 L}{2} \sigma^2 d
And rearranging yields the result. | 1. What is the focus of the paper, and what are the key contributions of the proposed approach?
2. What are the strengths and weaknesses of the theoretical analysis provided in the paper?
3. How does the reviewer assess the significance and novelty of the paper's content?
4. Are there any concerns or questions regarding the assumptions made in the paper?
5. What are the limitations of the experimental results presented in the paper? | Review | Review
Dear Authors,
After reading the revised version I still believe that the assumption about the gradients + their variances being distributed equivalently among all directions is very unrealistic, also for the case of deep learning applications.
I think that the direction you are taking is very interesting, yet the theoretical work is still too preliminary and I believe that further investigation should be made in order to make a more complete manuscript.
The additional experiments are nice. I therefore raised my score by a bit.
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
The paper explores signSGD --- an algorithm that uses the sign of the gradients instead of the actual gradients for training deep models. The authors provide some guarantees regarding the convergence of signSGD to local minima in the stochastic optimization setting, and later compare signSGD to SGD in two deep learning tasks.
Exploring signSGD is an important and interesting line of research, and this paper provides some preliminary results in this direction.
However, in my view, this work is too preliminary and not ready for publication. This is because the authors do not illustrate any clear benefits of signSGD over SGD, either in theory or in practice. I elaborate on this below:
-The theory part shows that under some conditions, signGD finds a local minimum. Yet, as the authors themselves
mention, the dependence on the dimension is much worse compared to SGD.
Moreover, the authors do not mention that if the noise variance does not scale with the dimension (as is often the case), then the convergence of SGD will not depend on the dimension, while it seems that the convergence of signGD will still depend on the dimension.
-The experiments are nice as a preliminary investigation, but not enough in order to illustrate the benefits of signSGD over SGD. In order to do so, the authors should make a more extensive experimental study. |
ICLR | Title
Self-ensembling for visual domain adaptation
Abstract
This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen & Valpola (2017)) of temporal ensembling (Laine & Aila (2017)), a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.
1 INTRODUCTION
The strong performance of deep learning in computer vision tasks comes at the cost of requiring large datasets with corresponding ground truth labels for training. Such datasets are often expensive to produce, owing to the cost of the human labour required to produce the ground truth labels.
Semi-supervised learning is an active area of research that aims to reduce the quantity of ground truth labels required for training. It is aimed at common practical scenarios in which only a small subset of a large dataset has corresponding ground truth labels. Unsupervised domain adaptation is a closely related problem in which one attempts to transfer knowledge gained from a labeled source dataset to a distinct unlabeled target dataset, within the constraint that the objective (e.g. digit classification) must remain the same. Domain adaptation offers the potential to train a model using labeled synthetic data – that is often abundantly available – and unlabeled real data. The scale of the problem can be seen in the VisDA-17 domain adaptation challenge images shown in Figure 1. We will present our winning solution in Section 4.2.
Recent work (Tarvainen & Valpola (2017)) has demonstrated the effectiveness of self-ensembling with random image augmentations to achieve state of the art performance in semi-supervised learning benchmarks.
We have developed the approach proposed by Tarvainen & Valpola (2017) to work in a domain adaptation scenario. We will show that this can achieve excellent results in specific small image domain adaptation benchmarks. More challenging scenarios, notably MNIST → SVHN and the VisDA-17 domain adaptation challenge, required further modifications. To this end, we developed confidence thresholding and class balancing that allowed us to achieve state of the art results in a variety of benchmarks, with some of our results coming close to those achieved by traditional supervised learning. Our approach is sufficiently flexible to be applicable to a variety of network architectures, both randomly initialized and pre-trained.
Our paper is organised as follows; in Section 2 we will discuss related work that provides context and forms the basis of our technique; our approach is described in Section 3 with our experiments and results in Section 4; and finally we present our conclusions in Section 5.
2 RELATED WORK
In this section we will cover self-ensembling based semi-supervised methods that form the basis of our approach and domain adaptation techniques to which our work can be compared.
2.1 SELF-ENSEMBLING FOR SEMI-SUPERVISED LEARNING
Recent work based on methods related to self-ensembling has achieved excellent results in semi-supervised learning scenarios. A neural network is trained to make consistent predictions for unsupervised samples under different augmentation Sajjadi et al. (2016), dropout and noise conditions or through the use of adversarial training Miyato et al. (2017). We will focus in particular on the self-ensembling based approaches of Laine & Aila (2017) and Tarvainen & Valpola (2017) as they form the basis of our approach.
Laine & Aila (2017) present two models; their Π-model and their temporal model. The Π-model passes each unlabeled sample through a classifier twice, each time with different dropout, noise and image translation parameters. Their unsupervised loss is the mean of the squared difference in class probability predictions resulting from the two presentations of each sample. Their temporal model maintains a per-sample moving average of the historical network predictions and encourages subsequent predictions to be consistent with the average. Their approach achieved state of the art results in the SVHN and CIFAR-10 semi-supervised classification benchmarks.
Tarvainen & Valpola (2017) further improved on the temporal model of Laine & Aila (2017) by using an exponential moving average of the network weights rather than of the class predictions. Their approach uses two networks; a student network and a teacher network, where the student is trained using gradient descent and the weights of the teacher are the exponential moving average of those of the student. The unsupervised loss used to train the student is the mean square difference between the predictions of the student and the teacher, under different dropout, noise and image translation parameters.
2.2 DOMAIN ADAPTATION
There is a rich body of literature tackling the problem of domain adaptation. We focus on deep learning based methods as these are most relevant to our work.
Auto-encoders are unsupervised neural network models that reconstruct their input samples by first encoding them into a latent space and then decoding and reconstructing them. Ghifary et al. (2016) describe an auto-encoder model that is trained to reconstruct samples from both the source and target domains, while a classifier is trained to predict labels from domain invariant features present in the latent representation using source domain labels. Bousmalis et al. (2016) recognised that samples from disparate domains have distinct domain specific characteristics that must be represented in the latent representation to support effective reconstruction. They developed a split model that separates the latent representation into shared domain invariant features and private features specific to the source and target domains. Their classifier operates on the domain invariant features only.
Ganin & Lempitsky (2015) propose a bifurcated classifier that splits into label classification and domain classification branches after common feature extraction layers. A gradient reversal layer is
placed between the common feature extraction layers and the domain classification branch; while the domain classification layers attempt to determine which domain a sample came from, the gradient reversal operation encourages the feature extraction layers to confuse the domain classifier by extracting domain invariant features. An alternative and simpler implementation described in their appendix minimises the label cross-entropy loss in the feature and label classification layers, minimises the domain cross-entropy in the domain classification layers but maximises it in the feature layers. The model of Tzeng et al. (2017) runs along similar lines but uses separate feature extraction sub-networks for source and target samples and trains the model in two distinct stages.
Saito et al. (2017a) use tri-training (Zhou & Li (2005)); feature extraction layers are used to drive three classifier sub-networks. The first two are trained on samples from the source domain, while a weight similarity penalty encourages them to learn different weights. Pseudo-labels generated for target domain samples by these source domain classifiers are used to train the final classifier to operate on the target domain.
Generative Adversarial Networks (GANs; Goodfellow et al. (2014)) are unsupervised models that consist of a generator network that is trained to generate samples that match the distribution of a dataset by fooling a discriminator network that is simultaneously trained to distinguish real samples from generated samples. Some GAN based models – such as that of Sankaranarayanan et al. (2017) – use a GAN to help learn a domain invariant embedding for samples. Many GAN based domain adaptation approaches use a generator that transforms samples from one domain to another.
Bousmalis et al. (2017) propose a GAN that adapts synthetic images to better match the characteristics of real images. Their generator takes a synthetic image and noise vector as input and produces an adapted image. They train a classifier to predict annotations for source and adapted samples alongside the GAN, while encouraging the generator to preserve aspects of the image important for annotation. The model of Shrivastava et al. (2017) consists of a refiner network (in the place of a generator) and discriminator that have a limited receptive field, limiting their model to making local changes while preserving ground truth annotations. The use of refined simulated images with corresponding ground truths resulted in improved performance in gaze and hand pose estimation.
Russo et al. (2017) present a bi-directional GAN composed of two generators that transform samples from the source to the target domain and vice versa. They transform labelled source samples to the target domain using one generator and back to the source domain with the other and encourage the network to learn label class consistency. This work bears similarities to CycleGAN, by Zhu et al. (2017).
A number of domain adaptation models maximise domain confusion by minimising the difference between the distributions of features extracted from source and target domains. Deep CORAL Sun & Saenko (2016) minimises the difference between the feature covariance matrices for a mini-batch of samples from the source and target domains. Tzeng et al. (2014) and Long et al. (2015) minimise the Maximum Mean Discrepancy metric Gretton et al. (2012). Li et al. (2016) described adaptive batch normalization, a variant of batch normalization (Ioffe & Szegedy (2015)) that learns separate batch normalization statistics for the source and target domains in a two-pass process, establishing new state-of-the-art results. In the first pass standard supervised learning is used to train a classifier for samples from the source domain. In the second pass, normalization statistics for target domain samples are computed for each batch normalization layer in the network, leaving the network weights as they are.
3 METHOD
Our model builds upon the mean teacher semi-supervised learning model of Tarvainen & Valpola (2017), which we will describe. Subsequently we will present our modifications that enable domain adaptation.
The structure of the mean teacher model of Tarvainen & Valpola (2017) – also discussed in section 2.1 – is shown in Figure 2a. The student network is trained using gradient descent, while the weights of the teacher network are an exponential moving average of those of the student. During training each input sample xi is passed through both the student and teacher networks, generating predicted class probability vectors zi (student) and z̃i (teacher). Different dropout, noise and image translation parameters are used for the student and teacher pathways.
During each training iteration a mini-batch of samples is drawn from the dataset, consisting of both labeled and unlabeled samples. The training loss is the sum of a supervised and an unsupervised component. The supervised loss is cross-entropy loss computed using zi (student prediction). It is masked to 0 for unlabeled samples for which no ground truth is available. The unsupervised component is the self-ensembling loss. It penalises the difference in class predictions between student (zi) and teacher (z̃i) networks for the same input sample. It is computed using the mean squared difference between the class probability predictions zi and z̃i.
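As an illustration (our own paraphrase in PyTorch, not the authors' released code), the teacher weight update and the self-ensembling loss described above can be sketched as follows, where alpha is the exponential moving average decay and the networks are assumed to output logits:

import torch
import torch.nn.functional as F

@torch.no_grad()
def update_teacher(student, teacher, alpha=0.99):
    # Teacher weights are an exponential moving average of the student weights.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)

def self_ensembling_loss(student, teacher, x_student, x_teacher):
    # x_student / x_teacher are two differently augmented views of the same images.
    z = torch.softmax(student(x_student), dim=1)
    with torch.no_grad():
        z_tilde = torch.softmax(teacher(x_teacher), dim=1)
    return F.mse_loss(z, z_tilde)  # mean squared difference of class probabilities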
Laine & Aila (2017) and Tarvainen & Valpola (2017) found that it was necessary to apply a time-dependent weighting to the unsupervised loss during training in order to prevent the network from getting stuck in a degenerate solution that gives poor classification performance. They used a function that follows a Gaussian curve from 0 to 1 during the first 80 epochs.
In the following subsections we will describe our contributions in detail along with the motivations for introducing them.
3.1 ADAPTING TO DOMAIN ADAPTATION
We minimise the same loss as in Tarvainen & Valpola (2017); we apply cross-entropy loss to labeled source samples and unsupervised self-ensembling loss to target samples. As in Tarvainen & Valpola (2017), self-ensembling loss is computed as the mean-squared difference between predictions produced by the student (zTi) and teacher (z̃Ti) networks with different augmentation, dropout and noise parameters.
The models of Tarvainen & Valpola (2017) and of Laine & Aila (2017) were designed for semi-supervised learning problems in which a subset of the samples in a single dataset have ground truth labels. During training both models mix labeled and unlabeled samples together in a mini-batch. In contrast, unsupervised domain adaptation problems use two distinct datasets with different underlying distributions; labeled source and unlabeled target. Our variant of the mean teacher model – shown in Figure 2b – has separate source (XSi) and target (XTi) paths. Inspired by the work of Li et al. (2016), we process mini-batches from the source and target datasets separately (per iteration) so that batch normalization uses different normalization statistics for each domain during training.1 We do not use the approach of Li et al. (2016) as-is, as they handle the source and target datasets separately in two distinct training phases, where our approach must train using both simultaneously. We also do not maintain separate exponential moving averages of the means and variances for each dataset for use at test time.
1This is simple to implement using most neural network toolkits; evaluate the network once for source samples and a second time for target samples, compute the supervised and unsupervised losses respectively and combine.
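Concretely, one training iteration as described above (and in the footnote) could be sketched as follows, reusing the update_teacher and self_ensembling_loss helpers from the earlier sketch; the optimiser handling and variable names are illustrative rather than taken from the released implementation, and the unsupervised loss weighting used in our experiments is omitted:

import torch.nn.functional as F

def train_step(student, teacher, x_source, y_source, x_target_a, x_target_b, optimiser):
    # Pass 1: source mini-batch only, so batch normalisation sees source statistics.
    loss_sup = F.cross_entropy(student(x_source), y_source)
    # Pass 2: target mini-batch only (two augmented views), so it sees target statistics.
    loss_unsup = self_ensembling_loss(student, teacher, x_target_a, x_target_b)
    optimiser.zero_grad()
    (loss_sup + loss_unsup).backward()
    optimiser.step()
    update_teacher(student, teacher)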
As seen in the ‘MT+TF’ row of Table 1, the model described thus far achieves state of the art results in 5 out of 8 small image benchmarks. The MNIST→ SVHN, STL→ CIFAR-10 and Syn-digits→ SVHN benchmarks however require additional modifications to achieve good performance.
3.2 CONFIDENCE THRESHOLDING
We found that replacing the Gaussian ramp-up factor that scales the unsupervised loss with confidence thresholding stabilized training in more challenging domain adaptation scenarios. For each unlabeled sample xTi the teacher network produces the predicted class probability vector z̃Tij – where j is the class index drawn from the set of classes C – from which we compute the confidence f̃Ti = maxj∈C(z̃Tij); the predicted probability of the predicted class of the sample. If f̃Ti is below the confidence threshold (a parameter search found 0.968 to be an effective value for small image benchmarks), the self-ensembling loss for the sample xi is masked to 0.
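A minimal sketch of this confidence masking (with illustrative names; the inputs are the student and teacher class probability vectors for a target mini-batch) might look like:

import torch

def masked_self_ensembling_loss(z_student, z_teacher, threshold=0.968):
    # z_student, z_teacher: (batch, classes) class probability predictions.
    confidence, _ = z_teacher.max(dim=1)            # predicted probability of the predicted class
    mask = (confidence > threshold).float()         # 0/1 per sample
    per_sample = ((z_student - z_teacher) ** 2).mean(dim=1)
    return (mask * per_sample).mean(), mask.mean()  # loss and confidence threshold pass rate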
Our working hypothesis is that confidence thresholding acts as a filter, shifting the balance in favour of the student learning correct labels from the teacher. While high network prediction confidence does not guarantee correctness there is a positive correlation. Given the tolerance to incorrect labels reported by Laine & Aila (2017), we believe that the higher signal-to-noise ratio underlies the success of this component of our approach.
The use of confidence thresholding achieves state of the art results in the STL→ CIFAR-10 and Syn-digits → SVHN benchmarks, as seen in the ‘MT+CT+TF’ row of Table 1. While confidence thresholding can result in very slight reductions in performance (see the MNIST↔USPS and SVHN →MNIST results), its ability to stabilise training in challenging scenarios leads us to recommend it as a replacement for the time-dependent Gaussian ramp-up used in Laine & Aila (2017).
3.3 DATA AUGMENTATION
We explored the effect of three data augmentation schemes in our small image benchmarks (section 4.1). Our minimal scheme (that should be applicable in non-visual domains) consists of Gaussian noise (with σ = 0.1) added to the pixel values. The standard scheme (indicated by ‘TF’ in Table 1) was used by Laine & Aila (2017) and adds translations in the interval [−2, 2] and horizontal flips for the CIFAR-10 ↔ STL experiments. The affine scheme (indicated by ‘TFA’) adds random affine transformations defined by the matrix in (1), where N(0, 0.1) denotes a real value drawn from a normal distribution with mean 0 and standard deviation 0.1.

\begin{bmatrix} 1 + N(0, 0.1) & N(0, 0.1) \\ N(0, 0.1) & 1 + N(0, 0.1) \end{bmatrix} \qquad (1)
The use of translations and horizontal flips has a significant impact in a number of our benchmarks. It is necessary in order to outpace prior art in the MNIST↔ USPS and SVHN→MNIST benchmarks and improves performance in the CIFAR-10 ↔ STL benchmarks. The use of affine augmentation can improve performance in experiments involving digit and traffic sign recognition datasets, as seen in the ‘MT+CT+TFA’ row of Table 1. In contrast it can impair performance when used with photographic datasets, as seen in the the STL→ CIFAR-10 experiment. It also impaired performance in the VisDA-17 experiment (section 4.2).
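For reference, the random affine matrix in equation (1) can be sampled as in the following sketch (our own illustration):

import numpy as np

def random_affine_matrix(rng, std=0.1):
    # 2x2 affine component from equation (1): identity plus Gaussian perturbations.
    return np.eye(2) + rng.normal(loc=0.0, scale=std, size=(2, 2))

rng = np.random.default_rng(0)
print(random_affine_matrix(rng))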
3.4 CLASS BALANCE LOSS
With the adaptations made so far the challenging MNIST→ SVHN benchmark remains undefeated due to training instabilities. During training we noticed that the error rate on the SVHN test set decreases at first, then rises and reaches high values before training completes. We diagnosed the problem by recording the predictions for the SVHN target domain samples after each epoch. The rise in error rate correlated with the predictions evolving toward a condition in which most samples are predicted as belonging to the ‘1’ class; the most populous class in the SVHN dataset. We hypothesize that the class imbalance in the SVHN dataset caused the unsupervised loss to reinforce the ‘1’ class more often than the others, resulting in the network settling in a degenerate local minimum. Rather than distinguish between digit classes as intended, it separated MNIST from SVHN samples and assigned the latter to the ‘1’ class.
We addressed this problem by introducing a class balance loss term that penalises the network for making predictions that exhibit large class imbalance. For each target domain mini-batch we compute the mean of the predicted sample class probabilities over the sample dimension, resulting in the mini-batch mean per-class probability. The loss is computed as the binary cross entropy between the mean class probability vector and a uniform probability vector. We balance the strength of the class balance loss with that of the self-ensembling loss by multiplying the class balance loss by the average of the confidence threshold mask (e.g. if 75% of samples in a mini-batch pass the confidence threshold, then the class balance loss is multiplied by 0.75).2
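A sketch of the class balance term as described above, scaled by the confidence threshold pass rate (e.g. the mask mean returned by the thresholding sketch in section 3.2); names are illustrative:

import torch
import torch.nn.functional as F

def class_balance_loss(z_student, pass_rate):
    # Mean predicted probability per class over the target mini-batch.
    mean_probs = z_student.mean(dim=0)
    uniform = torch.full_like(mean_probs, 1.0 / mean_probs.numel())
    bce = F.binary_cross_entropy(mean_probs, uniform)
    return pass_rate * bce  # scale by the fraction of samples passing the threshold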
We would like to note the similarity between our class balance loss and the entropy maximisation loss in the IMSAT clustering model of Hu et al. (2017); IMSAT employs entropy maximisation to encourage uniform cluster sizes and entropy minimisation to encourage unambiguous cluster assignments.
4 EXPERIMENTS
Our implementation was developed using PyTorch (Chintala et al.) and is publically available at http://github.com/Britefury/self-ensemble-visual-domain-adapt.
4.1 SMALL IMAGE DATASETS
Our results can be seen in Table 1. The ‘train on source’ and ‘train on target’ results report the target domain performance of supervised training on the source and target domains. They represent the expected baseline and best achievable result. The ‘Specific aug.‘ experiments used data augmentation specific to the MNIST→ SVHN adaptation path that is discussed further down. The small datasets and data preparation procedures are described in Appendix A. Our training procedure is described in Appendix B and our network architectures are described in Appendix D. The same network architectures and augmentation parameters were used for domain adaptation experiments and the supervised baselines discussed above. It is worth noting that only the training sets of the small image datasets were used during training; the test sets used for reporting scores only.
MNIST↔ USPS (see Figure 3a). MNIST and USPS are both greyscale hand-written digit datasets. In both adaptation directions our approach not only demonstrates a significant improvement over prior art but nearly achieves the performance of supervised learning using the target domain ground truths. The strong performance of the base mean teacher model can be attributed to the similarity of the datasets to one another. It is worth noting that data augmentation allows our ‘train on source’ baseline to outpace prior domain adaptation methods.
CIFAR-10↔ STL (see Figure 3b). CIFAR-10 and STL are both 10-class image datasets, although we removed one class from each (see Appendix A.2). We obtained strong performance in the STL→ CIFAR-10 path, but only by using confidence thresholding. The CIFAR-10→ STL results are more interesting; the ‘train on source’ baseline performance outperforms that of a network trained on the STL target domain, most likely due to the small size of the STL training set. Our self-ensembling results outpace both the baseline performance and the ‘theoretical maximum’ of a network trained
2We expect that class balance loss is likely to adversely affect performance on target datasets with large class imbalance.
on the target domain, lending further evidence to the view of Sajjadi et al. (2016) and Laine & Aila (2017) that self-ensembling acts as an effective regulariser.
Syn-Digits → SVHN (see Figure 3c). The Syn-Digits dataset is a synthetic dataset designed by Ganin & Lempitsky (2015) to be used as a source dataset in domain adaptation experiments with SVHN as the target dataset. Other approaches have achieved good scores on this benchmark, beating
the baseline by a significant margin. Our result improves on them, reducing the error rate from 6.9% to 2.9%; even slightly outpacing the ‘train on target’ 3.4% error rate achieved using supervised learning.
Syn-Signs → GTSRB (see Figure 3d). Syn-Signs is another synthetic dataset designed by Ganin & Lempitsky (2015) to target the 43-class GTSRB (German Traffic Signs Recognition Benchmark; Stallkamp et al. (2011)) dataset. Our approach halved the best error rate of competing approaches. Once again, our approaches slightly outpaces the ‘train on target’ supervised learning upper bound.
SVHN→MNIST (see Figure 3e). Google’s SVHN (Street View House Numbers) is a colour digits dataset of house number plates. Our approach significantly outpaces other techniques and achieves an accuracy close to that of supervised learning.
MNIST→ SVHN (see Figure 3f). This adaptation path is somewhat more challenging as MNIST digits are greyscale and uniform in terms of size, aspect ratio and intensity range, in contrast to the variably sized colour digits present in SVHN. As a consequence, adapting from MNIST to SVHN required additional work. Class balancing loss was necessary to ensure training stability and additional experiment specific data augmentation was required to achieve good accuracy. The use of translations and affine augmentation (see section 3.3) results in an accuracy score of 37%. Significant improvements resulted from additional augmentation in the form of random intensity flips (negative image), and random intensity scales and offsets drawn from the intervals [0.25, 1.5] and [−0.5, 0.5] respectively. These hyper-parameters were selected in order to augment MNIST samples to match the intensity variations present in SVHN, as illustrated in Figure 3f. With these additional modifications, we achieve a result that significantly outperforms prior art and nearly achieves the accuracy of a supervised classifier trained on the target dataset. We found that applying these additional augmentations to the source MNIST dataset only yielded good results; applying them to the target SVHN dataset as well yielded a small improvement but was not essential. It should also be noted that this augmentation scheme raises the performance of the ‘train on source’ baseline to just above that of much of the prior art.
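The MNIST → SVHN specific intensity augmentation described above can be sketched as follows; this is a paraphrase of the stated ranges rather than the exact released implementation, and the 50% flip probability is an assumption:

import numpy as np

def intensity_augment(img, rng):
    # img: float array with values in [0, 1].
    if rng.random() < 0.5:
        img = 1.0 - img                  # random intensity flip (negative image)
    scale = rng.uniform(0.25, 1.5)       # random intensity scale
    offset = rng.uniform(-0.5, 0.5)      # random intensity offset
    return img * scale + offset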
4.2 VISDA-2017 VISUAL DOMAIN ADAPTATION CHALLENGE
The VisDA-2017 image classification challenge is a 12-class domain adaptation problem consisting of three datasets: a training set consisting of 3D renderings of sketchup models, and validation and test sets consisting of real images (see Figure 1) drawn from the COCO Lin et al. (2014) and YouTube BoundingBoxes Real et al. (2017) datasets respectively. The objective is to learn from labeled computer generated images and correctly predict the class of real images. Ground truth labels were made available for the training and validation sets only; test set scores were computed by a server operated by the competition organisers.
While the algorithm is that presented above, we base our network on the pretrained ResNet-152 (He et al. (2016)) network provided by PyTorch (Chintala et al.), rather than using a randomly initialised network as before. The final 1000-class classification layer is removed and replaced with two fully-connected layers; the first has 512 units with a ReLU non-linearity while the final layer has 12 units with a softmax non-linearity. Results from our original competition submissions and newer results using two data augmentation schemes are presented in Table 2. Our reduced augmentation scheme consists of random crops, random horizontal flips and random uniform scaling. It is very similar to the scheme used for ImageNet image classification in He et al. (2016). Our competition configuration includes additional augmentation that was specifically designed for the VisDA dataset, although we subsequently found that it makes little difference. Our hyper-parameters and competition data augmentation scheme are described in Appendix C.1. It is worth noting that we applied test time augmentation (we averaged predictions from 16 differently augmented images) to achieve our competition results. We present results with and without test time augmentation in Table 2. Our VisDA competition test set score is also the result of ensembling the predictions of 5 different networks.
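The network surgery described above might be sketched as follows using the pre-trained ResNet-152 provided by torchvision (an illustration of the described architecture, not the competition code; the softmax is left to the loss function here, and the pretrained= argument matches the API of the torchvision releases of that era):

import torch.nn as nn
from torchvision import models

def build_visda_model(num_classes=12):
    net = models.resnet152(pretrained=True)
    net.fc = nn.Sequential(             # replace the 1000-class ImageNet classifier
        nn.Linear(net.fc.in_features, 512),
        nn.ReLU(),
        nn.Linear(512, num_classes),
    )
    return net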
5 CONCLUSIONS
We have presented an effective domain adaptation algorithm that has achieved state of the art results in a number of benchmarks and has achieved accuracies that are almost on par with traditional supervised learning on digit recognition benchmarks targeting the MNIST and SVHN datasets. The
resulting networks will exhibit strong performance on samples from both the source and target domains. Our approach is sufficiently flexible to be usable for a variety of network architectures, including those based on randomly initialised and pre-trained networks.
Miyato et al. (2017) stated that the self-ensembling methods presented by Laine & Aila (2017) – on which our algorithm is based – operate by label propagation. This view is supported by our results, in particular our MNIST→ SVHN experiment. The latter requires additional intensity augmentation in order to sufficiently align the dataset distributions, after which good quality label predictions are propagated throughout the target dataset. In cases where data augmentation is insufficient to align the dataset distributions, a pre-trained network may be used to bridge the gap, as in our solution to the VisDA-17 challenge. This leads us to conclude that effective domain adaptation can be achieved by first aligning the distributions of the source and target datasets – the focus of much prior art in the field – and then refining their correspondence; a task to which self-ensembling is well suited.
A DATASETS AND DATA PREPARATION
A.1 SMALL IMAGE DATASETS
The datasets used in this paper are described in Table 3.
A.2 DATA PREPARATION
Some of the experiments that involved datasets described in Table 3 required additional data preparation in order to match the resolution and format of the input samples and match the classification target. These additional steps will now be described.
MNIST ↔ USPS The USPS images were up-scaled using bilinear interpolation from 16 × 16 to 28× 28 resolution to match that of MNIST. CIFAR-10 ↔ STL CIFAR-10 and STL are both 10-class image datasets. The STL images were down-scaled to 32 × 32 resolution to match that of CIFAR-10. The ‘frog’ class in CIFAR-10 and the ‘monkey’ class in STL were removed as they have no equivalent in the other dataset, resulting in a 9-class problem with 10% less samples in each dataset.
Syn-Signs→ GTSRB GTSRB is composed of images that vary in size and come with annotations that provide region of interest (bounding box around the sign) and ground truth classification. We extracted the region of interest from each image and scaled them to a resolution of 40× 40 to match those of Syn-Signs.
MNIST↔ SVHN The MNIST images were padded to 32 × 32 resolution and converted to RGB by replicating the greyscale channel into the three RGB channels to match the format of SVHN.
B SMALL IMAGE EXPERIMENT TRAINING
B.1 TRAINING PROCEDURE
Our networks were trained for 300 epochs. We used the Adam Kingma & Ba (2015) gradient descent algorithm with a learning rate of 0.001. We trained using mini-batches composed of 256 samples, except in the Syn-digits → SVHN and Syn-signs → GTSRB experiments where we used 128 in order to reduce memory usage. The self-ensembling loss was weighted by a factor of 3 and the class balancing loss was weighted by 0.005. Our teacher network weights ti were updated so as to be an exponential moving average of those of the student si using the formula ti = αti−1 + (1 − α)si, with a value of 0.99 for α. A complete pass over the target dataset was considered to be one epoch in all experiments except the MNIST→ USPS and CIFAR-10→ STL experiments due to the small size of the target datasets, in which case one epoch was considered to be a pass over the larger source dataset.
We found that the proportion of samples that pass the confidence threshold can be used to drive early stopping (Prechelt (1998)). The final score was the target test set performance at the epoch at which the highest confidence threshold pass rate was obtained.
C VISDA-17
C.1 HYPER-PARAMETERS
Our training procedure was the same as that used in the small image experiments, except that we used 160 × 160 images, a batch size of 56 (reduced from 64 to fit within the memory of an nVidia 1080-Ti), a self-ensembling weight of 10 (instead of 3), a confidence threshold of 0.9 (instead of 0.968) and a class balancing weight of 0.01. We used the Adam Kingma & Ba (2015) gradient descent algorithm with a learning rate of 10−5 for the final two randomly initialized layers and 10−6 for the pre-trained layers. The first convolutional layer and the first group of convolutional layers (with 64 feature channels) of the pre-trained ResNet were left unmodified during training.
Reduced data augmentation:
• scale image so that its smallest dimension is 176 pixels, then randomly crop a 160 × 160 section from the scaled image
• No random affine transformations as they increase confusion between the car and truck classes in the validation set
• random uniform scaling in the range [0.75, 1.333]
• horizontal flipping
Competition data augmentation adds the following in addition to the above:
• random intensity/brightness scaling in the range [0.75, 1.333]
• random rotations, normally distributed with a standard deviation of 0.2π
• random desaturation in which the colours in an image are randomly desaturated to greyscale by a factor between 0% and 100%
• rotations in colour space, around a randomly chosen axis with a standard deviation of 0.05π
• random offset in colour space, after standardisation using parameters specified by the PyTorch implementation of ResNet-152
D NETWORK ARCHITECTURES
Our network architectures are shown in Tables 6 - 8. | 1. What is the focus of the paper regarding domain adaptation?
2. What are the strengths and weaknesses of the proposed method?
3. How does the reviewer assess the significance and novelty of the paper's content?
4. Are there any concerns regarding the applicability and engineering efforts of the model?
5. How does the reviewer evaluate the clarity and quality of the writing? | Review | Review
This paper presents a domain adaptation algorithm based on the self-ensembling method proposed by [Tarvainen & Valpola, 2017]. The main idea is to enforce the agreement between the predictions of the teacher and the student classifiers on the target domain samples while training the student to perform well on the source domain. The teacher network is simply an exponential moving average of different versions of the student network over time.
Pros:
+ The paper is well-written and easy to read
+ The proposed method is a natural extension of the mean teacher semi-supervised learning model by [Tarvainen & Valpola, 2017]
+ The model achieves state-of-the-art results on a range of visual domain adaptation benchmarks (including top performance in the VisDA17 challenge)
Cons:
- The model is tailored to the image domain as it makes heavy use of the data augmentation. That restricts its applicability quite significantly. I’m also very interested to know how the proposed method works when no augmentation is employed (for fair comparison with some of the entries in Table 1).
- I’m not particularly fond of the engineering tricks like confidence thresholding and the class balance loss. They seem to be essential for good performance and thus, in my opinion, reduce the value of the main idea.
- Related to the previous point, the final VisDA17 model seems to be engineered too heavily to work well on a particular dataset. I’m not sure if it provides many interesting insights for the scientific community at large.
In my opinion, it’s a borderline paper. While the best reported quantitative results are quite good, it seems that achieving those requires a significant engineering effort beyond just applying the self-ensembling idea.
Notes:
* The paper somewhat breaks the anonymity of the authors by mentioning the “winning entry in the VISDA-2017”. Maybe it’s not a big issue but in my opinion it’s better to remove references to the competition entry.
* Page 2, 2.1, line 2, typo: “stanrdard” -> “standard”
Post-rebuttal revision:
After reading the authors' response to my review, I decided to increase the score by 2 points. I appreciate the improvements that were made to the paper but still feel that this work a bit too engineering-heavy, and the title does not fully reflect what's going on in the full pipeline. |
ICLR | Title
Self-ensembling for visual domain adaptation
Abstract
This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen & Valpola (2017)) of temporal ensembling (Laine & Aila (2017)), a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.
1 INTRODUCTION
The strong performance of deep learning in computer vision tasks comes at the cost of requiring large datasets with corresponding ground truth labels for training. Such datasets are often expensive to produce, owing to the cost of the human labour required to produce the ground truth labels.
Semi-supervised learning is an active area of research that aims to reduce the quantity of ground truth labels required for training. It is aimed at common practical scenarios in which only a small subset of a large dataset has corresponding ground truth labels. Unsupervised domain adaptation is a closely related problem in which one attempts to transfer knowledge gained from a labeled source dataset to a distinct unlabeled target dataset, within the constraint that the objective (e.g. digit classification) must remain the same. Domain adaptation offers the potential to train a model using labeled synthetic data – that is often abundantly available – and unlabeled real data. The scale of the problem can be seen in the VisDA-17 domain adaptation challenge images shown in Figure 1. We will present our winning solution in Section 4.2.
Recent work (Tarvainen & Valpola (2017)) has demonstrated the effectiveness of self-ensembling with random image augmentations to achieve state of the art performance in semi-supervised learning benchmarks.
We have developed the approach proposed by Tarvainen & Valpola (2017) to work in a domain adaptation scenario. We will show that this can achieve excellent results in specific small image domain adaptation benchmarks. More challenging scenarios, notably MNIST → SVHN and the VisDA-17 domain adaptation challenge, required further modifications. To this end, we developed confidence thresholding and class balancing that allowed us to achieve state of the art results in a variety of benchmarks, with some of our results coming close to those achieved by traditional supervised learning. Our approach is sufficiently flexible to be applicable to a variety of network architectures, both randomly initialized and pre-trained.
Our paper is organised as follows; in Section 2 we will discuss related work that provides context and forms the basis of our technique; our approach is described in Section 3 with our experiments and results in Section 4; and finally we present our conclusions in Section 5.
2 RELATED WORK
In this section we will cover self-ensembling based semi-supervised methods that form the basis of our approach and domain adaptation techniques to which our work can be compared.
2.1 SELF-ENSEMBLING FOR SEMI-SUPERVISED LEARNING
Recent work based on methods related to self-ensembling has achieved excellent results in semi-supervised learning scenarios. A neural network is trained to make consistent predictions for unsupervised samples under different augmentation Sajjadi et al. (2016), dropout and noise conditions or through the use of adversarial training Miyato et al. (2017). We will focus in particular on the self-ensembling based approaches of Laine & Aila (2017) and Tarvainen & Valpola (2017) as they form the basis of our approach.
Laine & Aila (2017) present two models; their Π-model and their temporal model. The Π-model passes each unlabeled sample through a classifier twice, each time with different dropout, noise and image translation parameters. Their unsupervised loss is the mean of the squared difference in class probability predictions resulting from the two presentations of each sample. Their temporal model maintains a per-sample moving average of the historical network predictions and encourages subsequent predictions to be consistent with the average. Their approach achieved state of the art results in the SVHN and CIFAR-10 semi-supervised classification benchmarks.
Tarvainen & Valpola (2017) further improved on the temporal model of Laine & Aila (2017) by using an exponential moving average of the network weights rather than of the class predictions. Their approach uses two networks; a student network and a teacher network, where the student is trained using gradient descent and the weights of the teacher are the exponential moving average of those of the student. The unsupervised loss used to train the student is the mean square difference between the predictions of the student and the teacher, under different dropout, noise and image translation parameters.
2.2 DOMAIN ADAPTATION
There is a rich body of literature tackling the problem of domain adaptation. We focus on deep learning based methods as these are most relevant to our work.
Auto-encoders are unsupervised neural network models that reconstruct their input samples by first encoding them into a latent space and then decoding and reconstructing them. Ghifary et al. (2016) describe an auto-encoder model that is trained to reconstruct samples from both the source and target domains, while a classifier is trained to predict labels from domain invariant features present in the latent representation using source domain labels. Bousmalis et al. (2016) recognised that samples from disparate domains have distinct domain specific characteristics that must be represented in the latent representation to support effective reconstruction. They developed a split model that separates the latent representation into shared domain invariant features and private features specific to the source and target domains. Their classifier operates on the domain invariant features only.
Ganin & Lempitsky (2015) propose a bifurcated classifier that splits into label classification and domain classification branches after common feature extraction layers. A gradient reversal layer is
placed between the common feature extraction layers and the domain classification branch; while the domain classification layers attempt to determine which domain a sample came from, the gradient reversal operation encourages the feature extraction layers to confuse the domain classifier by extracting domain invariant features. An alternative and simpler implementation described in their appendix minimises the label cross-entropy loss in the feature and label classification layers, minimises the domain cross-entropy in the domain classification layers but maximises it in the feature layers. The model of Tzeng et al. (2017) runs along similar lines but uses separate feature extraction sub-networks for source and target samples and trains the model in two distinct stages.
Saito et al. (2017a) use tri-training (Zhou & Li (2005)); feature extraction layers are used to drive three classifier sub-networks. The first two are trained on samples from the source domain, while a weight similarity penalty encourages them to learn different weights. Pseudo-labels generated for target domain samples by these source domain classifiers are used to train the final classifier to operate on the target domain.
Generative Adversarial Networks (GANs; Goodfellow et al. (2014)) are unsupervised models that consist of a generator network that is trained to generate samples that match the distribution of a dataset by fooling a discriminator network that is simultaneously trained to distinguish real samples from generated samples. Some GAN based models – such as that of Sankaranarayanan et al. (2017) – use a GAN to help learn a domain invariant embedding for samples. Many GAN based domain adaptation approaches use a generator that transforms samples from one domain to another.
Bousmalis et al. (2017) propose a GAN that adapts synthetic images to better match the characteristics of real images. Their generator takes a synthetic image and noise vector as input and produces an adapted image. They train a classifier to predict annotations for source and adapted samples alongside the GAN, while encouraging the generator to preserve aspects of the image important for annotation. The model of Shrivastava et al. (2017) consists of a refiner network (in the place of a generator) and discriminator that have a limited receptive field, limiting their model to making local changes while preserving ground truth annotations. The use of refined simulated images with corresponding ground truths resulted in improved performance in gaze and hand pose estimation.
Russo et al. (2017) present a bi-directional GAN composed of two generators that transform samples from the source to the target domain and vice versa. They transform labelled source samples to the target domain using one generator and back to the source domain with the other and encourage the network to learn label class consistency. This work bears similarities to CycleGAN, by Zhu et al. (2017).
A number of domain adaptation models maximise domain confusion by minimising the difference between the distributions of features extracted from source and target domains. Deep CORAL Sun & Saenko (2016) minimises the difference between the feature covariance matrices for a mini-batch of samples from the source and target domains. Tzeng et al. (2014) and Long et al. (2015) minimise the Maximum Mean Discrepancy metric Gretton et al. (2012). Li et al. (2016) described adaptive batch normalization, a variant of batch normalization (Ioffe & Szegedy (2015)) that learns separate batch normalization statistics for the source and target domains in a two-pass process, establishing new state-of-the-art results. In the first pass standard supervised learning is used to train a classifier for samples from the source domain. In the second pass, normalization statistics for target domain samples are computed for each batch normalization layer in the network, leaving the network weights as they are.
3 METHOD
Our model builds upon the mean teacher semi-supervised learning model of Tarvainen & Valpola (2017), which we will describe. Subsequently we will present our modifications that enable domain adaptation.
The structure of the mean teacher model of Tarvainen & Valpola (2017) – also discussed in section 2.1 – is shown in Figure 2a. The student network is trained using gradient descent, while the weights of the teacher network are an exponential moving average of those of the student. During training each input sample xi is passed through both the student and teacher networks, generating predicted class probability vectors zi (student) and z̃i (teacher). Different dropout, noise and image translation parameters are used for the student and teacher pathways.
During each training iteration a mini-batch of samples is drawn from the dataset, consisting of both labeled and unlabeled samples. The training loss is the sum of a supervised and an unsupervised component. The supervised loss is cross-entropy loss computed using zi (student prediction). It is masked to 0 for unlabeled samples for which no ground truth is available. The unsupervised component is the self-ensembling loss. It penalises the difference in class predictions between student (zi) and teacher (z̃i) networks for the same input sample. It is computed using the mean squared difference between the class probability predictions zi and z̃i.
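As a concrete illustration, the following is a minimal PyTorch-style sketch of one mean teacher training step and of the teacher weight update. It is an assumption-laden sketch rather than the released implementation: `augment` stands in for the stochastic translation/noise augmentation, `labeled_mask` marks which samples have ground truth (with placeholder labels in `y` for the rest), and `unsup_weight` is a fixed scalar here.

```python
import torch
import torch.nn.functional as F

def mean_teacher_step(student, teacher, x, y, labeled_mask, unsup_weight):
    # Two stochastic views of the same mini-batch (different augmentation / dropout / noise).
    student_logits = student(augment(x))
    with torch.no_grad():
        teacher_logits = teacher(augment(x))
    z = F.softmax(student_logits, dim=1)        # student class probabilities
    z_tilde = F.softmax(teacher_logits, dim=1)  # teacher class probabilities

    # Supervised cross-entropy, masked to 0 for samples without ground truth.
    ce = F.cross_entropy(student_logits, y, reduction='none')
    sup_loss = (ce * labeled_mask).mean()

    # Self-ensembling loss: mean squared difference between probability vectors.
    unsup_loss = ((z - z_tilde) ** 2).sum(dim=1).mean()
    return sup_loss + unsup_weight * unsup_loss

def ema_update(student, teacher, alpha=0.99):
    # Teacher weights are an exponential moving average of the student weights.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.data.mul_(alpha).add_(s_p.data, alpha=1 - alpha)
```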
Laine & Aila (2017) and Tarvainen & Valpola (2017) found that it was necessary to apply a time-dependent weighting to the unsupervised loss during training in order to prevent the network from getting stuck in a degenerate solution that gives poor classification performance. They used a function that follows a Gaussian curve from 0 to 1 during the first 80 epochs.
In the following subsections we will describe our contributions in detail along with the motivations for introducing them.
3.1 ADAPTING TO DOMAIN ADAPTATION
We minimise the same loss as in Tarvainen & Valpola (2017); we apply cross-entropy loss to labeled source samples and unsupervised self-ensembling loss to target samples. As in Tarvainen & Valpola (2017), self-ensembling loss is computed as the mean-squared difference between predictions produced by the student (zTi) and teacher (z̃Ti) networks with different augmentation, dropout and noise parameters.
The models of Tarvainen & Valpola (2017) and of Laine & Aila (2017) were designed for semi-supervised learning problems in which a subset of the samples in a single dataset have ground truth labels. During training both models mix labeled and unlabeled samples together in a mini-batch. In contrast, unsupervised domain adaptation problems use two distinct datasets with different underlying distributions; labeled source and unlabeled target. Our variant of the mean teacher model – shown in Figure 2b – has separate source (XSi) and target (XTi) paths. Inspired by the work of Li et al. (2016), we process mini-batches from the source and target datasets separately (per iteration) so that batch normalization uses different normalization statistics for each domain during training.1 We do not use the approach of Li et al. (2016) as-is, as they handle the source and target datasets separately in two distinct training phases, where our approach must train using both simultaneously. We also do not maintain separate exponential moving averages of the means and variances for each dataset for use at test time.
1This is simple to implement using most neural network toolkits; evaluate the network once for source samples and a second time for target samples, compute the supervised and unsupervised losses respectively and combine.
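A hedged sketch of this two-pass arrangement is shown below, reusing the `augment` placeholder and function names introduced earlier (ours, not the paper's). Evaluating the source and target mini-batches in separate forward passes is what gives batch normalization per-domain statistics.

```python
def domain_adaptation_step(student, teacher, x_src, y_src, x_tgt, unsup_weight):
    # Source pass: supervised cross-entropy on labeled source samples.
    # Batch norm sees only source statistics in this pass.
    sup_loss = F.cross_entropy(student(augment(x_src)), y_src)

    # Target pass: self-ensembling loss between student and teacher predictions,
    # evaluated in a second forward pass so batch norm sees target statistics.
    z = F.softmax(student(augment(x_tgt)), dim=1)
    with torch.no_grad():
        z_tilde = F.softmax(teacher(augment(x_tgt)), dim=1)
    unsup_loss = ((z - z_tilde) ** 2).sum(dim=1).mean()

    return sup_loss + unsup_weight * unsup_loss
```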
As seen in the ‘MT+TF’ row of Table 1, the model described thus far achieves state of the art results in 5 out of 8 small image benchmarks. The MNIST→ SVHN, STL→ CIFAR-10 and Syn-digits→ SVHN benchmarks however require additional modifications to achieve good performance.
3.2 CONFIDENCE THRESHOLDING
We found that replacing the Gaussian ramp-up factor that scales the unsupervised loss with confidence thresholding stabilized training in more challenging domain adaptation scenarios. For each unlabeled sample xTi the teacher network produces the predicted class probability vector z̃Tij – where j is the class index drawn from the set of classes C – from which we compute the confidence f̃Ti = maxj∈C(z̃Tij); the predicted probability of the predicted class of the sample. If f̃Ti is below the confidence threshold (a parameter search found 0.968 to be an effective value for small image benchmarks), the self-ensembling loss for the sample xTi is masked to 0.
Our working hypothesis is that confidence thresholding acts as a filter, shifting the balance in favour of the student learning correct labels from the teacher. While high network prediction confidence does not guarantee correctness, there is a positive correlation. Given the tolerance to incorrect labels reported by Laine & Aila (2017), we believe that the higher signal-to-noise ratio underlies the success of this component of our approach.
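A minimal sketch of the thresholded self-ensembling loss described above, operating on the student and teacher probability vectors (function and argument names are illustrative):

```python
def thresholded_self_ensembling_loss(z_student, z_teacher, threshold=0.968):
    # Confidence = predicted probability of the teacher's predicted class.
    confidence, _ = z_teacher.max(dim=1)
    mask = (confidence > threshold).float()
    per_sample = ((z_student - z_teacher) ** 2).sum(dim=1)
    # Samples whose teacher confidence falls below the threshold contribute 0.
    return (per_sample * mask).mean(), mask
```

The returned mask is also useful later, when the class balance loss is scaled by the fraction of samples that pass the threshold.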
The use of confidence thresholding achieves state of the art results in the STL→ CIFAR-10 and Syn-digits → SVHN benchmarks, as seen in the ‘MT+CT+TF’ row of Table 1. While confidence thresholding can result in very slight reductions in performance (see the MNIST↔USPS and SVHN →MNIST results), its ability to stabilise training in challenging scenarios leads us to recommend it as a replacement for the time-dependent Gaussian ramp-up used in Laine & Aila (2017).
3.3 DATA AUGMENTATION
We explored the effect of three data augmentation schemes in our small image benchmarks (section 4.1). Our minimal scheme (that should be applicable in non-visual domains) consists of Gaussian noise (with σ = 0.1) added to the pixel values. The standard scheme (indicated by ‘TF’ in Table 1) was used by Laine & Aila (2017) and adds translations in the interval [−2, 2] and horizontal flips for the CIFAR-10 ↔ STL experiments. The affine scheme (indicated by ‘TFA’) adds random affine transformations defined by the matrix in (1), where N(0, 0.1) denotes a real value drawn from a normal distribution with mean 0 and standard deviation 0.1.

$$\begin{bmatrix} 1 + \mathcal{N}(0, 0.1) & \mathcal{N}(0, 0.1) \\ \mathcal{N}(0, 0.1) & 1 + \mathcal{N}(0, 0.1) \end{bmatrix} \quad (1)$$
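A short sketch of sampling and applying such a matrix, assuming NumPy arrays and OpenCV's warpAffine; translation and centring of the transform on the image midpoint are omitted for brevity.

```python
import numpy as np
import cv2

def random_affine_matrix(sigma=0.1):
    # 2x2 matrix from equation (1): identity plus N(0, sigma) perturbations.
    return np.eye(2) + np.random.normal(0.0, sigma, size=(2, 2))

def random_affine_augment(image, sigma=0.1):
    h, w = image.shape[:2]
    # Extend to the 2x3 form expected by warpAffine; the translation column is zero.
    m = np.hstack([random_affine_matrix(sigma), np.zeros((2, 1))])
    return cv2.warpAffine(image, m, (w, h))
```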
The use of translations and horizontal flips has a significant impact in a number of our benchmarks. It is necessary in order to outpace prior art in the MNIST↔ USPS and SVHN→MNIST benchmarks and improves performance in the CIFAR-10 ↔ STL benchmarks. The use of affine augmentation can improve performance in experiments involving digit and traffic sign recognition datasets, as seen in the ‘MT+CT+TFA’ row of Table 1. In contrast it can impair performance when used with photographic datasets, as seen in the STL→ CIFAR-10 experiment. It also impaired performance in the VisDA-17 experiment (section 4.2).
3.4 CLASS BALANCE LOSS
With the adaptations made so far the challenging MNIST→ SVHN benchmark remains unsolved due to training instabilities. During training we noticed that the error rate on the SVHN test set decreases at first, then rises and reaches high values before training completes. We diagnosed the problem by recording the predictions for the SVHN target domain samples after each epoch. The rise in error rate correlated with the predictions evolving toward a condition in which most samples are predicted as belonging to the ‘1’ class; the most populous class in the SVHN dataset. We hypothesize that the class imbalance in the SVHN dataset caused the unsupervised loss to reinforce the ‘1’ class more often than the others, resulting in the network settling in a degenerate local minimum. Rather than distinguish between digit classes as intended, it separated MNIST from SVHN samples and assigned the latter to the ‘1’ class.
We addressed this problem by introducing a class balance loss term that penalises the network for making predictions that exhibit large class imbalance. For each target domain mini-batch we compute the mean of the predicted sample class probabilities over the sample dimension, resulting in the mini-batch mean per-class probability. The loss is computed as the binary cross entropy between the mean class probability vector and a uniform probability vector. We balance the strength of the class balance loss with that of the self-ensembling loss by multiplying the class balance loss by the average of the confidence threshold mask (e.g. if 75% of samples in a mini-batch pass the confidence threshold, then the class balance loss is multiplied by 0.75).2
We would like to note the similarity between our class balance loss and the entropy maximisation loss in the IMSAT clustering model of Hu et al. (2017); IMSAT employs entropy maximisation to encourage uniform cluster sizes and entropy minimisation to encourage unambiguous cluster assignments.
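The class balance loss described above can be written down directly from the target-batch probabilities and the confidence mask; the sketch below assumes the mask returned by the thresholding function shown earlier.

```python
def class_balance_loss(z_target, confidence_mask, num_classes):
    # Mini-batch mean of the predicted per-class probabilities.
    mean_probs = z_target.mean(dim=0)
    uniform = torch.full_like(mean_probs, 1.0 / num_classes)
    # Binary cross-entropy between the mean prediction and a uniform vector,
    # scaled by the fraction of samples that passed the confidence threshold.
    return F.binary_cross_entropy(mean_probs, uniform) * confidence_mask.mean()
```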
4 EXPERIMENTS
Our implementation was developed using PyTorch (Chintala et al.) and is publicly available at http://github.com/Britefury/self-ensemble-visual-domain-adapt.
4.1 SMALL IMAGE DATASETS
Our results can be seen in Table 1. The ‘train on source’ and ‘train on target’ results report the target domain performance of supervised training on the source and target domains. They represent the expected baseline and best achievable result. The ‘Specific aug.’ experiments used data augmentation specific to the MNIST→ SVHN adaptation path that is discussed further down. The small datasets and data preparation procedures are described in Appendix A. Our training procedure is described in Appendix B and our network architectures are described in Appendix D. The same network architectures and augmentation parameters were used for domain adaptation experiments and the supervised baselines discussed above. It is worth noting that only the training sets of the small image datasets were used during training; the test sets were used for reporting scores only.
MNIST↔ USPS (see Figure 3a). MNIST and USPS are both greyscale hand-written digit datasets. In both adaptation directions our approach not only demonstrates a significant improvement over prior art but nearly achieves the performance of supervised learning using the target domain ground truths. The strong performance of the base mean teacher model can be attributed to the similarity of the datasets to one another. It is worth noting that data augmentation allows our ‘train on source’ baseline to outpace prior domain adaptation methods.
CIFAR-10↔ STL (see Figure 3b). CIFAR-10 and STL are both 10-class image datasets, although we removed one class from each (see Appendix A.2). We obtained strong performance in the STL→ CIFAR-10 path, but only by using confidence thresholding. The CIFAR-10→ STL results are more interesting; the ‘train on source’ baseline performance outperforms that of a network trained on the STL target domain, most likely due to the small size of the STL training set. Our self-ensembling results outpace both the baseline performance and the ‘theoretical maximum’ of a network trained on the target domain, lending further evidence to the view of Sajjadi et al. (2016) and Laine & Aila (2017) that self-ensembling acts as an effective regulariser.
2We expect that class balance loss is likely to adversely affect performance on target datasets with large class imbalance.
Syn-Digits → SVHN (see Figure 3c). The Syn-Digits dataset is a synthetic dataset designed by Ganin & Lempitsky (2015) to be used as a source dataset in domain adaptation experiments with SVHN as the target dataset. Other approaches have achieved good scores on this benchmark, beating the baseline by a significant margin. Our result improves on them, reducing the error rate from 6.9% to 2.9%; even slightly outpacing the ‘train on target’ 3.4% error rate achieved using supervised learning.
Syn-Signs → GTSRB (see Figure 3d). Syn-Signs is another synthetic dataset designed by Ganin & Lempitsky (2015) to target the 43-class GTSRB (German Traffic Signs Recognition Benchmark; Stallkamp et al. (2011)) dataset. Our approach halved the best error rate of competing approaches. Once again, our approach slightly outpaces the ‘train on target’ supervised learning upper bound.
SVHN→MNIST (see Figure 3e). Google’s SVHN (Street View House Numbers) is a colour digits dataset of house number plates. Our approach significantly outpaces other techniques and achieves an accuracy close to that of supervised learning.
MNIST→ SVHN (see Figure 3f). This adaptation path is somewhat more challenging as MNIST digits are greyscale and uniform in terms of size, aspect ratio and intensity range, in contrast to the variably sized colour digits present in SVHN. As a consequence, adapting from MNIST to SVHN required additional work. Class balancing loss was necessary to ensure training stability and additional experiment specific data augmentation was required to achieve good accuracy. The use of translations and affine augmentation (see section 3.3) results in an accuracy score of 37%. Significant improvements resulted from additional augmentation in the form of random intensity flips (negative image), and random intensity scales and offsets drawn from the intervals [0.25, 1.5] and [−0.5, 0.5] respectively. These hyper-parameters were selected in order to augment MNIST samples to match the intensity variations present in SVHN, as illustrated in Figure 3f. With these additional modifications, we achieve a result that significantly outperforms prior art and nearly achieves the accuracy of a supervised classifier trained on the target dataset. We found that applying these additional augmentations to the source MNIST dataset only yielded good results; applying them to the target SVHN dataset as well yielded a small improvement but was not essential. It should also be noted that this augmentation scheme raises the performance of the ‘train on source’ baseline to just above that of much of the prior art.
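The MNIST-specific intensity augmentation described above is simple to sketch; the interval endpoints follow the text, while the function name and the 50% flip probability are illustrative assumptions.

```python
def mnist_intensity_augment(x):
    # x: image array with pixel values in [0, 1]; results may leave this range.
    if np.random.rand() < 0.5:                 # random intensity flip (negative image)
        x = 1.0 - x
    scale = np.random.uniform(0.25, 1.5)       # random intensity scale
    offset = np.random.uniform(-0.5, 0.5)      # random intensity offset
    return x * scale + offset
```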
4.2 VISDA-2017 VISUAL DOMAIN ADAPTATION CHALLENGE
The VisDA-2017 image classification challenge is a 12-class domain adaptation problem consisting of three datasets: a training set consisting of 3D renderings of sketchup models, and validation and test sets consisting of real images (see Figure 1) drawn from the COCO Lin et al. (2014) and YouTube BoundingBoxes Real et al. (2017) datasets respectively. The objective is to learn from labeled computer generated images and correctly predict the class of real images. Ground truth labels were made available for the training and validation sets only; test set scores were computed by a server operated by the competition organisers.
While the algorithm is the one presented above, we base our network on the pretrained ResNet-152 (He et al. (2016)) network provided by PyTorch (Chintala et al.), rather than using a randomly initialised network as before. The final 1000-class classification layer is removed and replaced with two fully-connected layers; the first has 512 units with a ReLU non-linearity while the final layer has 12 units with a softmax non-linearity. Results from our original competition submissions and newer results using two data augmentation schemes are presented in Table 2. Our reduced augmentation scheme consists of random crops, random horizontal flips and random uniform scaling. It is very similar to the scheme used for ImageNet image classification in He et al. (2016). Our competition configuration includes additional augmentation that was specifically designed for the VisDA dataset, although we subsequently found that it makes little difference. Our hyper-parameters and competition data augmentation scheme are described in Appendix C.1. It is worth noting that we applied test time augmentation (we averaged predictions from 16 differently augmented images) to achieve our competition results. We present results with and without test time augmentation in Table 2. Our VisDA competition test set score is also the result of ensembling the predictions of 5 different networks.
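The replaced classification head can be sketched with torchvision as below; this is an assumed reconstruction from the description above (the softmax over the 12 classes is applied in the loss / at prediction time, as is conventional in PyTorch).

```python
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet152(pretrained=True)
# Replace the 1000-class ImageNet classifier with the two new layers described above.
backbone.fc = nn.Sequential(
    nn.Linear(backbone.fc.in_features, 512),
    nn.ReLU(),
    nn.Linear(512, 12),
)
```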
5 CONCLUSIONS
We have presented an effective domain adaptation algorithm that has achieved state of the art results in a number of benchmarks and has achieved accuracies that are almost on par with traditional supervised learning on digit recognition benchmarks targeting the MNIST and SVHN datasets. The resulting networks will exhibit strong performance on samples from both the source and target domains. Our approach is sufficiently flexible to be usable for a variety of network architectures, including those based on randomly initialised and pre-trained networks.
Miyato et al. (2017) stated that the self-ensembling methods presented by Laine & Aila (2017) – on which our algorithm is based – operate by label propagation. This view is supported by our results, in particular our MNIST→ SVHN experiment. The latter requires additional intensity augmentation in order to sufficiently align the dataset distributions, after which good quality label predictions are propagated throughout the target dataset. In cases where data augmentation is insufficient to align the dataset distributions, a pre-trained network may be used to bridge the gap, as in our solution to the VisDA-17 challenge. This leads us to conclude that effective domain adaptation can be achieved by first aligning the distributions of the source and target datasets – the focus of much prior art in the field – and then refining their correspondence; a task to which self-ensembling is well suited.
A DATASETS AND DATA PREPARATION
A.1 SMALL IMAGE DATASETS
The datasets used in this paper are described in Table 3.
A.2 DATA PREPARATION
Some of the experiments that involved datasets described in Table 3 required additional data preparation in order to match the resolution and format of the input samples and match the classification target. These additional steps will now be described.
MNIST ↔ USPS The USPS images were up-scaled using bilinear interpolation from 16 × 16 to 28 × 28 resolution to match that of MNIST.
CIFAR-10 ↔ STL CIFAR-10 and STL are both 10-class image datasets. The STL images were down-scaled to 32 × 32 resolution to match that of CIFAR-10. The ‘frog’ class in CIFAR-10 and the ‘monkey’ class in STL were removed as they have no equivalent in the other dataset, resulting in a 9-class problem with 10% fewer samples in each dataset.
Syn-Signs→ GTSRB GTSRB is composed of images that vary in size and come with annotations that provide region of interest (bounding box around the sign) and ground truth classification. We extracted the region of interest from each image and scaled them to a resolution of 40× 40 to match those of Syn-Signs.
MNIST↔ SVHN The MNIST images were padded to 32 × 32 resolution and converted to RGB by replicating the greyscale channel into the three RGB channels to match the format of SVHN.
B SMALL IMAGE EXPERIMENT TRAINING
B.1 TRAINING PROCEDURE
Our networks were trained for 300 epochs. We used the Adam Kingma & Ba (2015) gradient descent algorithm with a learning rate of 0.001. We trained using mini-batches composed of 256 samples, except in the Syn-digits → SVHN and Syn-signs → GTSRB experiments where we used 128 in order to reduce memory usage. The self-ensembling loss was weighted by a factor of 3 and the class balancing loss was weighted by 0.005. Our teacher network weights ti were updated so as to be an exponential moving average of those of the student si using the formula ti = αti−1 + (1 − α)si, with a value of 0.99 for α. A complete pass over the target dataset was considered to be one epoch in all experiments except the MNIST→ USPS and CIFAR-10→ STL experiments due to the small size of the target datasets, in which case one epoch was considered to be a pass over the larger source dataset.
We found that the proportion of samples that pass the confidence threshold can be used to drive early stopping (Prechelt (1998)). The final score was the target test set performance at the epoch at which the highest confidence threshold pass rate was obtained.
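A minimal sketch of this selection rule; `train_one_epoch` and `evaluate_target_test` are assumed helpers, the former returning the epoch's confidence-threshold pass rate.

```python
best_pass_rate, reported_score = -1.0, None
for epoch in range(300):
    pass_rate = train_one_epoch()
    score = evaluate_target_test()
    if pass_rate > best_pass_rate:
        # Report the target test score from the epoch with the highest pass rate.
        best_pass_rate, reported_score = pass_rate, score
```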
C VISDA-17
C.1 HYPER-PARAMETERS
Our training procedure was the same as that used in the small image experiments, except that we used 160 × 160 images, a batch size of 56 (reduced from 64 to fit within the memory of an nVidia 1080-Ti), a self-ensembling weight of 10 (instead of 3), a confidence threshold of 0.9 (instead of 0.968) and a class balancing weight of 0.01. We used the Adam Kingma & Ba (2015) gradient descent algorithm with a learning rate of 10−5 for the final two randomly initialized layers and 10−6 for the pre-trained layers. The first convolutional layer and the first group of convolutional layers (with 64 feature channels) of the pre-trained ResNet were left unmodified during training.
Reduced data augmentation:
• scale image so that its smallest dimension is 176 pixels, then randomly crop a 160 × 160 section from the scaled image
• No random affine transformations as they increase confusion between the car and truck classes in the validation set
• random uniform scaling in the range [0.75, 1.333]
• horizontal flipping
Competition data augmentation adds the following in addition to the above:
• random intensity/brightness scaling in the range [0.75, 1.333]
• random rotations, normally distributed with a standard deviation of 0.2π
• random desaturation in which the colours in an image are randomly desaturated to greyscale by a factor between 0% and 100%
• rotations in colour space, around a randomly chosen axis with a standard deviation of 0.05π
• random offset in colour space, after standardisation using parameters specified by the PyTorch implementation of ResNet-152
D NETWORK ARCHITECTURES
Our network architectures are shown in Tables 6 - 8. | 1. What is the main problem addressed in the paper?
2. What is the proposed approach to addressing the problem, particularly regarding the use of two parallel networks?
3. What is the purpose of the additional loss term coming from the teacher network?
4. Can you explain why the weight of the loss-term associated with the unsupervised learning part follows a Gaussian curve?
5. How do the experimental results demonstrate the effectiveness of the proposed methodology?
6. Can you provide more details about the specific datasets used in the experiments and how they were processed? | Review | Review
The paper addresses the problem of domain adaptation: Say you have a source dataset S of labeled examples and you have a target dataset T of unlabeled examples and you want to label examples from the target dataset.
The main idea in the paper is to train two parallel networks, a 'teacher network' and a 'student network', where the student network has a loss term that takes into account labeled examples and there is an additional loss term coming from the teacher network that compares the probabilities placed by the two networks on the outputs. This is motivated by a similar network introduced in the context of semi-supervised learning by Tarvainen and Valpola (2017). The parameters are then optimized by gradient descent where the weight of the loss-term associated with the unsupervised learning part follows a Gaussian curve (with time). No clear explanation is provided for why this may be a good thing to try. The authors also use other techniques like data augmentation to enhance their algorithms.
The experimental results in the paper are quite nice. They apply the methodology to various standard vision datasets with noticeable improvements/gains and in one case by including additional tricks manage to better than other methods for VISDA-2017 domain adaptation challenge. In the latter, the challenge is to use computer-generated labeled examples and use this information to label real photographic images. The present paper does substantially better than the competition for this challenge. |
ICLR | Title
Self-ensembling for visual domain adaptation
Abstract
This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen & Valpola (2017)) of temporal ensembling (Laine & Aila (2017)), a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.
N/A
This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen & Valpola (2017)) of temporal ensembling (Laine & Aila (2017)), a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.
1 INTRODUCTION
The strong performance of deep learning in computer vision tasks comes at the cost of requiring large datasets with corresponding ground truth labels for training. Such datasets are often expensive to produce, owing to the cost of the human labour required to produce the ground truth labels.
Semi-supervised learning is an active area of research that aims to reduce the quantity of ground truth labels required for training. It is aimed at common practical scenarios in which only a small subset of a large dataset has corresponding ground truth labels. Unsupervised domain adaptation is a closely related problem in which one attempts to transfer knowledge gained from a labeled source dataset to a distinct unlabeled target dataset, within the constraint that the objective (e.g. digit classification) must remain the same. Domain adaptation offers the potential to train a model using labeled synthetic data – that is often abundantly available – and unlabeled real data. The scale of the problem can be seen in the VisDA-17 domain adaptation challenge images shown in Figure 1. We will present our winning solution in Section 4.2.
Recent work (Tarvainen & Valpola (2017)) has demonstrated the effectiveness of self-ensembling with random image augmentations to achieve state of the art performance in semi-supervised learning benchmarks.
We have developed the approach proposed by Tarvainen & Valpola (2017) to work in a domain adaptation scenario. We will show that this can achieve excellent results in specific small image domain adaptation benchmarks. More challenging scenarios, notably MNIST → SVHN and the VisDA-17 domain adaptation challenge required further modifications. To this end, we developed confidence thresholding and class balancing that allowed us to achieve state of the art results in a variety of benchmarks, with some of our results coming close to those achieved by traditional supervised learning. Our approach is sufficiently flexible to be applicable to a variety of network architectures, both randomly initialized and pre-trained.
Our paper is organised as follows: in Section 2 we will discuss related work that provides context and forms the basis of our technique; our approach is described in Section 3 with our experiments and results in Section 4; and finally we present our conclusions in Section 5.
2 RELATED WORK
In this section we will cover self-ensembling based semi-supervised methods that form the basis of our approach and domain adaptation techniques to which our work can be compared.
2.1 SELF-ENSEMBLING FOR SEMI-SUPERVISED LEARNING
Recent work based on methods related to self-ensembling has achieved excellent results in semi-supervised learning scenarios. A neural network is trained to make consistent predictions for unsupervised samples under different augmentation (Sajjadi et al. (2016)), dropout and noise conditions, or through the use of adversarial training (Miyato et al. (2017)). We will focus in particular on the self-ensembling based approaches of Laine & Aila (2017) and Tarvainen & Valpola (2017) as they form the basis of our approach.
Laine & Aila (2017) present two models; their Π-model and their temporal model. The Π-model passes each unlabeled sample through a classifier twice, each time with different dropout, noise and image translation parameters. Their unsupervised loss is the mean of the squared difference in class probability predictions resulting from the two presentations of each sample. Their temporal model maintains a per-sample moving average of the historical network predictions and encourages subsequent predictions to be consistent with the average. Their approach achieved state of the art results in the SVHN and CIFAR-10 semi-supervised classification benchmarks.
Tarvainen & Valpola (2017) further improved on the temporal model of Laine & Aila (2017) by using an exponential moving average of the network weights rather than of the class predictions. Their approach uses two networks; a student network and a teacher network, where the student is trained using gradient descent and the weights of the teacher are the exponential moving average of those of the student. The unsupervised loss used to train the student is the mean square difference between the predictions of the student and the teacher, under different dropout, noise and image translation parameters.
2.2 DOMAIN ADAPTATION
There is a rich body of literature tackling the problem of domain adaptation. We focus on deep learning based methods as these are most relevant to our work.
Auto-encoders are unsupervised neural network models that reconstruct their input samples by first encoding them into a latent space and then decoding and reconstructing them. Ghifary et al. (2016) describe an auto-encoder model that is trained to reconstruct samples from both the source and target domains, while a classifier is trained to predict labels from domain invariant features present in the latent representation using source domain labels. Bousmalis et al. (2016) recognised that samples from disparate domains have distinct domain specific characteristics that must be represented in the latent representation to support effective reconstruction. They developed a split model that separates the latent representation into shared domain invariant features and private features specific to the source and target domains. Their classifier operates on the domain invariant features only.
Ganin & Lempitsky (2015) propose a bifurcated classifier that splits into label classification and domain classification branches after common feature extraction layers. A gradient reversal layer is placed between the common feature extraction layers and the domain classification branch; while the domain classification layers attempt to determine which domain a sample came from, the gradient reversal operation encourages the feature extraction layers to confuse the domain classifier by extracting domain invariant features. An alternative and simpler implementation described in their appendix minimises the label cross-entropy loss in the feature and label classification layers, minimises the domain cross-entropy in the domain classification layers but maximises it in the feature layers. The model of Tzeng et al. (2017) runs along similar lines but uses separate feature extraction sub-networks for source and target samples and trains the model in two distinct stages.
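A gradient reversal layer is straightforward to express as a custom autograd function; the sketch below is a standard, hedged PyTorch rendering rather than Ganin & Lempitsky's original code.

```python
import torch

class GradReverse(torch.autograd.Function):
    # Forward pass is the identity; the backward pass negates (and scales) gradients,
    # pushing the feature layers to confuse the downstream domain classifier.
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

# usage: domain_logits = domain_classifier(GradReverse.apply(features, 1.0))
```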
Saito et al. (2017a) use tri-training (Zhou & Li (2005)); feature extraction layers are used to drive three classifier sub-networks. The first two are trained on samples from the source domain, while a weight similarity penalty encourages them to learn different weights. Pseudo-labels generated for target domain samples by these source domain classifiers are used to train the final classifier to operate on the target domain.
Generative Adversarial Networks (GANs; Goodfellow et al. (2014)) are unsupervised models that consist of a generator network that is trained to generate samples that match the distribution of a dataset by fooling a discriminator network that is simultaneously trained to distinguish real samples from generated samples. Some GAN based models – such as that of Sankaranarayanan et al. (2017) – use a GAN to help learn a domain invariant embedding for samples. Many GAN based domain adaptation approaches use a generator that transforms samples from one domain to another.
Bousmalis et al. (2017) propose a GAN that adapts synthetic images to better match the characteristics of real images. Their generator takes a synthetic image and noise vector as input and produces an adapted image. They train a classifier to predict annotations for source and adapted samples alongside the GAN, while encouraging the generator to preserve aspects of the image important for annotation. The model of Shrivastava et al. (2017) consists of a refiner network (in the place of a generator) and discriminator that have a limited receptive field, limiting their model to making local changes while preserving ground truth annotations. The use of refined simulated images with corresponding ground truths resulted in improved performance in gaze and hand pose estimation.
Russo et al. (2017) present a bi-directional GAN composed of two generators that transform samples from the source to the target domain and vice versa. They transform labelled source samples to the target domain using one generator and back to the source domain with the other and encourage the network to learn label class consistency. This work bears similarities to CycleGAN, by Zhu et al. (2017).
A number of domain adaptation models maximise domain confusion by minimising the difference between the distributions of features extracted from source and target domains. Deep CORAL Sun & Saenko (2016) minimises the difference between the feature covariance matrices for a mini-batch of samples from the source and target domains. Tzeng et al. (2014) and Long et al. (2015) minimise the Maximum Mean Discrepancy metric Gretton et al. (2012). Li et al. (2016) described adaptive batch normalization, a variant of batch normalization (Ioffe & Szegedy (2015)) that learns separate batch normalization statistics for the source and target domains in a two-pass process, establishing new state-of-the-art results. In the first pass standard supervised learning is used to train a classifier for samples from the source domain. In the second pass, normalization statistics for target domain samples are computed for each batch normalization layer in the network, leaving the network weights as they are.
3 METHOD
Our model builds upon the mean teacher semi-supervised learning model of Tarvainen & Valpola (2017), which we will describe. Subsequently we will present our modifications that enable domain adaptation.
The structure of the mean teacher model of Tarvainen & Valpola (2017) – also discussed in section 2.1 – is shown in Figure 2a. The student network is trained using gradient descent, while the weights of the teacher network are an exponential moving average of those of the student. During training each input sample xi is passed through both the student and teacher networks, generating predicted class probability vectors zi (student) and z̃i (teacher). Different dropout, noise and image translation parameters are used for the student and teacher pathways.
During each training iteration a mini-batch of samples is drawn from the dataset, consisting of both labeled and unlabeled samples. The training loss is the sum of a supervised and an unsupervised component. The supervised loss is cross-entropy loss computed using zi (student prediction). It is masked to 0 for unlabeled samples for which no ground truth is available. The unsupervised component is the self-ensembling loss. It penalises the difference in class predictions between student (zi) and teacher (z̃i) networks for the same input sample. It is computed using the mean squared difference between the class probability predictions zi and z̃i.
Laine & Aila (2017) and Tarvainen & Valpola (2017) found that it was necessary to apply a time-dependent weighting to the unsupervised loss during training in order to prevent the network from getting stuck in a degenerate solution that gives poor classification performance. They used a function that follows a Gaussian curve from 0 to 1 during the first 80 epochs.
In the following subsections we will describe our contributions in detail along with the motivations for introducing them.
3.1 ADAPTING TO DOMAIN ADAPTATION
We minimise the same loss as in Tarvainen & Valpola (2017); we apply cross-entropy loss to labeled source samples and unsupervised self-ensembling loss to target samples. As in Tarvainen & Valpola (2017), self-ensembling loss is computed as the mean-squared difference between predictions produced by the student (zTi) and teacher (z̃Ti) networks with different augmentation, dropout and noise parameters.
The models of Tarvainen & Valpola (2017) and of Laine & Aila (2017) were designed for semi-supervised learning problems in which a subset of the samples in a single dataset have ground truth labels. During training both models mix labeled and unlabeled samples together in a mini-batch. In contrast, unsupervised domain adaptation problems use two distinct datasets with different underlying distributions; labeled source and unlabeled target. Our variant of the mean teacher model – shown in Figure 2b – has separate source (XSi) and target (XTi) paths. Inspired by the work of Li et al. (2016), we process mini-batches from the source and target datasets separately (per iteration) so that batch normalization uses different normalization statistics for each domain during training.1 We do not use the approach of Li et al. (2016) as-is, as they handle the source and target datasets separately in two distinct training phases, where our approach must train using both simultaneously. We also do not maintain separate exponential moving averages of the means and variances for each dataset for use at test time.
1This is simple to implement using most neural network toolkits; evaluate the network once for source samples and a second time for target samples, compute the supervised and unsupervised losses respectively and combine.
As seen in the ‘MT+TF’ row of Table 1, the model described thus far achieves state of the art results in 5 out of 8 small image benchmarks. The MNIST→ SVHN, STL→ CIFAR-10 and Syn-digits→ SVHN benchmarks however require additional modifications to achieve good performance.
3.2 CONFIDENCE THRESHOLDING
We found that replacing the Gaussian ramp-up factor that scales the unsupervised loss with confidence thresholding stabilized training in more challenging domain adaptation scenarios. For each unlabeled sample xTi the teacher network produces the predicted class probability vector z̃Tij – where j is the class index drawn from the set of classes C – from which we compute the confidence f̃Ti = maxj∈C(z̃Tij); the predicted probability of the predicted class of the sample. If f̃Ti is below the confidence threshold (a parameter search found 0.968 to be an effective value for small image benchmarks), the self-ensembling loss for the sample xTi is masked to 0.
Our working hypothesis is that confidence thresholding acts as a filter, shifting the balance in favour of the student learning correct labels from the teacher. While high network prediction confidence does not guarantee correctness, there is a positive correlation. Given the tolerance to incorrect labels reported by Laine & Aila (2017), we believe that the higher signal-to-noise ratio underlies the success of this component of our approach.
The use of confidence thresholding achieves state of the art results in the STL→ CIFAR-10 and Syn-digits → SVHN benchmarks, as seen in the ‘MT+CT+TF’ row of Table 1. While confidence thresholding can result in very slight reductions in performance (see the MNIST↔USPS and SVHN →MNIST results), its ability to stabilise training in challenging scenarios leads us to recommend it as a replacement for the time-dependent Gaussian ramp-up used in Laine & Aila (2017).
3.3 DATA AUGMENTATION
We explored the effect of three data augmentation schemes in our small image benchmarks (section 4.1). Our minimal scheme (that should be applicable in non-visual domains) consists of Gaussian noise (with σ = 0.1) added to the pixel values. The standard scheme (indicated by ‘TF’ in Table 1) was used by Laine & Aila (2017) and adds translations in the interval [−2, 2] and horizontal flips for the CIFAR-10 ↔ STL experiments. The affine scheme (indicated by ‘TFA’) adds random affine transformations defined by the matrix in (1), where N(0, 0.1) denotes a real value drawn from a normal distribution with mean 0 and standard deviation 0.1.

$$\begin{bmatrix} 1 + \mathcal{N}(0, 0.1) & \mathcal{N}(0, 0.1) \\ \mathcal{N}(0, 0.1) & 1 + \mathcal{N}(0, 0.1) \end{bmatrix} \quad (1)$$
The use of translations and horizontal flips has a significant impact in a number of our benchmarks. It is necessary in order to outpace prior art in the MNIST↔ USPS and SVHN→MNIST benchmarks and improves performance in the CIFAR-10 ↔ STL benchmarks. The use of affine augmentation can improve performance in experiments involving digit and traffic sign recognition datasets, as seen in the ‘MT+CT+TFA’ row of Table 1. In contrast it can impair performance when used with photographic datasets, as seen in the STL→ CIFAR-10 experiment. It also impaired performance in the VisDA-17 experiment (section 4.2).
3.4 CLASS BALANCE LOSS
With the adaptations made so far the challenging MNIST→ SVHN benchmark remains unsolved due to training instabilities. During training we noticed that the error rate on the SVHN test set decreases at first, then rises and reaches high values before training completes. We diagnosed the problem by recording the predictions for the SVHN target domain samples after each epoch. The rise in error rate correlated with the predictions evolving toward a condition in which most samples are predicted as belonging to the ‘1’ class; the most populous class in the SVHN dataset. We hypothesize that the class imbalance in the SVHN dataset caused the unsupervised loss to reinforce the ‘1’ class more often than the others, resulting in the network settling in a degenerate local minimum. Rather than distinguish between digit classes as intended, it separated MNIST from SVHN samples and assigned the latter to the ‘1’ class.
We addressed this problem by introducing a class balance loss term that penalises the network for making predictions that exhibit large class imbalance. For each target domain mini-batch we compute the mean of the predicted sample class probabilities over the sample dimension, resulting in the mini-batch mean per-class probability. The loss is computed as the binary cross entropy between the mean class probability vector and a uniform probability vector. We balance the strength of the class balance loss with that of the self-ensembling loss by multiplying the class balance loss by the average of the confidence threshold mask (e.g. if 75% of samples in a mini-batch pass the confidence threshold, then the class balance loss is multiplied by 0.75).2
We would like to note the similarity between our class balance loss and the entropy maximisation loss in the IMSAT clustering model of Hu et al. (2017); IMSAT employs entropy maximisation to encourage uniform cluster sizes and entropy minimisation to encourage unambiguous cluster assignments.
4 EXPERIMENTS
Our implementation was developed using PyTorch (Chintala et al.) and is publicly available at http://github.com/Britefury/self-ensemble-visual-domain-adapt.
4.1 SMALL IMAGE DATASETS
Our results can be seen in Table 1. The ‘train on source’ and ‘train on target’ results report the target domain performance of supervised training on the source and target domains. They represent the expected baseline and best achievable result. The ‘Specific aug.’ experiments used data augmentation specific to the MNIST→ SVHN adaptation path that is discussed further down. The small datasets and data preparation procedures are described in Appendix A. Our training procedure is described in Appendix B and our network architectures are described in Appendix D. The same network architectures and augmentation parameters were used for domain adaptation experiments and the supervised baselines discussed above. It is worth noting that only the training sets of the small image datasets were used during training; the test sets were used for reporting scores only.
MNIST↔ USPS (see Figure 3a). MNIST and USPS are both greyscale hand-written digit datasets. In both adaptation directions our approach not only demonstrates a significant improvement over prior art but nearly achieves the performance of supervised learning using the target domain ground truths. The strong performance of the base mean teacher model can be attributed to the similarity of the datasets to one another. It is worth noting that data augmentation allows our ‘train on source’ baseline to outpace prior domain adaptation methods.
CIFAR-10↔ STL (see Figure 3b). CIFAR-10 and STL are both 10-class image datasets, although we removed one class from each (see Appendix A.2). We obtained strong performance in the STL→ CIFAR-10 path, but only by using confidence thresholding. The CIFAR-10→ STL results are more interesting; the ‘train on source’ baseline performance outperforms that of a network trained on the STL target domain, most likely due to the small size of the STL training set. Our self-ensembling results outpace both the baseline performance and the ‘theoretical maximum’ of a network trained on the target domain, lending further evidence to the view of Sajjadi et al. (2016) and Laine & Aila (2017) that self-ensembling acts as an effective regulariser.
2We expect that class balance loss is likely to adversely affect performance on target datasets with large class imbalance.
Syn-Digits → SVHN (see Figure 3c). The Syn-Digits dataset is a synthetic dataset designed by Ganin & Lempitsky (2015) to be used as a source dataset in domain adaptation experiments with SVHN as the target dataset. Other approaches have achieved good scores on this benchmark, beating the baseline by a significant margin. Our result improves on them, reducing the error rate from 6.9% to 2.9%; even slightly outpacing the ‘train on target’ 3.4% error rate achieved using supervised learning.
Syn-Signs → GTSRB (see Figure 3d). Syn-Signs is another synthetic dataset designed by Ganin & Lempitsky (2015) to target the 43-class GTSRB (German Traffic Signs Recognition Benchmark; Stallkamp et al. (2011)) dataset. Our approach halved the best error rate of competing approaches. Once again, our approach slightly outpaces the ‘train on target’ supervised learning upper bound.
SVHN→MNIST (see Figure 3e). Google’s SVHN (Street View House Numbers) is a colour digits dataset of house number plates. Our approach significantly outpaces other techniques and achieves an accuracy close to that of supervised learning.
MNIST→ SVHN (see Figure 3f). This adaptation path is somewhat more challenging as MNIST digits are greyscale and uniform in terms of size, aspect ratio and intensity range, in contrast to the variably sized colour digits present in SVHN. As a consequence, adapting from MNIST to SVHN required additional work. Class balancing loss was necessary to ensure training stability and additional experiment specific data augmentation was required to achieve good accuracy. The use of translations and affine augmentation (see section 3.3) results in an accuracy score of 37%. Significant improvements resulted from additional augmentation in the form of random intensity flips (negative image), and random intensity scales and offsets drawn from the intervals [0.25, 1.5] and [−0.5, 0.5] respectively. These hyper-parameters were selected in order to augment MNIST samples to match the intensity variations present in SVHN, as illustrated in Figure 3f. With these additional modifications, we achieve a result that significantly outperforms prior art and nearly achieves the accuracy of a supervised classifier trained on the target dataset. We found that applying these additional augmentations to the source MNIST dataset only yielded good results; applying them to the target SVHN dataset as well yielded a small improvement but was not essential. It should also be noted that this augmentation scheme raises the performance of the ‘train on source’ baseline to just above that of much of the prior art.
4.2 VISDA-2017 VISUAL DOMAIN ADAPTATION CHALLENGE
The VisDA-2017 image classification challenge is a 12-class domain adaptation problem consisting of three datasets: a training set consisting of 3D renderings of sketchup models, and validation and test sets consisting of real images (see Figure 1) drawn from the COCO Lin et al. (2014) and YouTube BoundingBoxes Real et al. (2017) datasets respectively. The objective is to learn from labeled computer generated images and correctly predict the class of real images. Ground truth labels were made available for the training and validation sets only; test set scores were computed by a server operated by the competition organisers.
While the algorithm is the one presented above, we base our network on the pretrained ResNet-152 (He et al. (2016)) network provided by PyTorch (Chintala et al.), rather than using a randomly initialised network as before. The final 1000-class classification layer is removed and replaced with two fully-connected layers; the first has 512 units with a ReLU non-linearity while the final layer has 12 units with a softmax non-linearity. Results from our original competition submissions and newer results using two data augmentation schemes are presented in Table 2. Our reduced augmentation scheme consists of random crops, random horizontal flips and random uniform scaling. It is very similar to the scheme used for ImageNet image classification in He et al. (2016). Our competition configuration includes additional augmentation that was specifically designed for the VisDA dataset, although we subsequently found that it makes little difference. Our hyper-parameters and competition data augmentation scheme are described in Appendix C.1. It is worth noting that we applied test time augmentation (we averaged predictions from 16 differently augmented images) to achieve our competition results. We present results with and without test time augmentation in Table 2. Our VisDA competition test set score is also the result of ensembling the predictions of 5 different networks.
5 CONCLUSIONS
We have presented an effective domain adaptation algorithm that has achieved state of the art results in a number of benchmarks and has achieved accuracies that are almost on par with traditional supervised learning on digit recognition benchmarks targeting the MNIST and SVHN datasets. The resulting networks will exhibit strong performance on samples from both the source and target domains. Our approach is sufficiently flexible to be usable for a variety of network architectures, including those based on randomly initialised and pre-trained networks.
Miyato et al. (2017) stated that the self-ensembling methods presented by Laine & Aila (2017) – on which our algorithm is based – operate by label propagation. This view is supported by our results, in particular our MNIST→ SVHN experiment. The latter requires additional intensity augmentation in order to sufficiently align the dataset distributions, after which good quality label predictions are propagated throughout the target dataset. In cases where data augmentation is insufficient to align the dataset distributions, a pre-trained network may be used to bridge the gap, as in our solution to the VisDA-17 challenge. This leads us to conclude that effective domain adaptation can be achieved by first aligning the distributions of the source and target datasets – the focus of much prior art in the field – and then refining their correspondence; a task to which self-ensembling is well suited.
A DATASETS AND DATA PREPARATION
A.1 SMALL IMAGE DATASETS
The datasets used in this paper are described in Table 3.
A.2 DATA PREPARATION
Some of the experiments that involved datasets described in Table 3 required additional data preparation in order to match the resolution and format of the input samples and match the classification target. These additional steps will now be described.
MNIST ↔ USPS The USPS images were up-scaled using bilinear interpolation from 16 × 16 to 28 × 28 resolution to match that of MNIST.
CIFAR-10 ↔ STL CIFAR-10 and STL are both 10-class image datasets. The STL images were down-scaled to 32 × 32 resolution to match that of CIFAR-10. The ‘frog’ class in CIFAR-10 and the ‘monkey’ class in STL were removed as they have no equivalent in the other dataset, resulting in a 9-class problem with 10% fewer samples in each dataset.
Syn-Signs→ GTSRB GTSRB is composed of images that vary in size and come with annotations that provide region of interest (bounding box around the sign) and ground truth classification. We extracted the region of interest from each image and scaled them to a resolution of 40× 40 to match those of Syn-Signs.
MNIST↔ SVHN The MNIST images were padded to 32 × 32 resolution and converted to RGB by replicating the greyscale channel into the three RGB channels to match the format of SVHN.
B SMALL IMAGE EXPERIMENT TRAINING
B.1 TRAINING PROCEDURE
Our networks were trained for 300 epochs. We used the Adam Kingma & Ba (2015) gradient descent algorithm with a learning rate of 0.001. We trained using mini-batches composed of 256 samples, except in the Syn-digits → SVHN and Syn-signs → GTSRB experiments where we used 128 in order to reduce memory usage. The self-ensembling loss was weighted by a factor of 3 and the class balancing loss was weighted by 0.005. Our teacher network weights ti were updated so as to be an exponential moving average of those of the student si using the formula ti = αti−1 + (1 − α)si, with a value of 0.99 for α. A complete pass over the target dataset was considered to be one epoch in all experiments except the MNIST→ USPS and CIFAR-10→ STL experiments due to the small size of the target datasets, in which case one epoch was considered to be a pass over the larger source dataset.
We found that the proportion of samples that pass the confidence threshold can be used to drive early stopping (Prechelt (1998)). The final score was the target test set performance at the epoch at which the highest confidence threshold pass rate was obtained.
C VISDA-17
C.1 HYPER-PARAMETERS
Our training procedure was the same as that used in the small image experiments, except that we used 160 × 160 images, a batch size of 56 (reduced from 64 to fit within the memory of an nVidia 1080-Ti), a self-ensembling weight of 10 (instead of 3), a confidence threshold of 0.9 (instead of 0.968) and a class balancing weight of 0.01. We used the Adam Kingma & Ba (2015) gradient descent algorithm with a learning rate of 10−5 for the final two randomly initialized layers and 10−6 for the pre-trained layers. The first convolutional layer and the first group of convolutional layers (with 64 feature channels) of the pre-trained ResNet were left unmodified during training.
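The per-layer learning rates and frozen early layers described above can be sketched with optimizer parameter groups; this assumes a torchvision-style ResNet-152 named `model` whose `fc` head has already been replaced as in section 4.2, and the attribute names follow torchvision conventions.

```python
import torch

frozen = list(model.conv1.parameters()) + list(model.bn1.parameters()) \
         + list(model.layer1.parameters())
for p in frozen:
    p.requires_grad = False          # leave the earliest pre-trained layers unmodified

new_params = list(model.fc.parameters())
new_ids = {id(p) for p in new_params}
pretrained_params = [p for p in model.parameters()
                     if p.requires_grad and id(p) not in new_ids]

optimizer = torch.optim.Adam([
    {'params': new_params, 'lr': 1e-5},         # randomly initialized layers
    {'params': pretrained_params, 'lr': 1e-6},  # remaining pre-trained layers
])
```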
Reduced data augmentation:
• scale image so that its smallest dimension is 176 pixels, then randomly crop a 160 × 160 section from the scaled image
• No random affine transformations as they increase confusion between the car and truck classes in the validation set
• random uniform scaling in the range [0.75, 1.333]
• horizontal flipping
Competition data augmentation adds the following in addition to the above:
• random intensity/brightness scaling in the range [0.75, 1.333]
• random rotations, normally distributed with a standard deviation of 0.2π
• random desaturation in which the colours in an image are randomly desaturated to greyscale by a factor between 0% and 100%
• rotations in colour space, around a randomly chosen axis with a standard deviation of 0.05π
• random offset in colour space, after standardisation using parameters specified by the PyTorch implementation of ResNet-152
D NETWORK ARCHITECTURES
Our network architectures are shown in Tables 6 - 8. | 1. What is the main contribution of the paper, and how does it relate to previous works in domain adaptation?
2. What are the strengths and weaknesses of the proposed method, particularly in its application to multiple domain adaptation tasks?
3. How does the reviewer assess the clarity and completeness of the paper's content, including its literature review and explanations of certain techniques?
4. Are there any specific points in the paper that the reviewer found confusing or unclear, such as the distinction between semi-supervised tasks and domain adaptation or the meaning of a particular footnote?
5. Does the reviewer have any suggestions for improving the paper, either in terms of its technical content or its presentation and organization? | Review | Review
The paper was very well-written, and mostly clear, making it easy to follow. The originality of the main method was not immediately apparent to me. However, the authors clearly outline the tricks they had to do to achieve good performance on multiple domain adaptation tasks: confidence thresholding, particular data augmentation, and a loss to deal with imbalanced target datasets, all of which seem like good tricks-of-the-trade for future work. The experimentation was extensive and convincing.
Pros:
* Winning entry to the VISDA 2017 visual domain adaptation challenge competition.
* Extensive experimentation on established toy datasets (USPS<>MNIST, SVHN<>MNIST, SVHN, GTSRB) and other more real-world datasets (including the VISDA one)
Cons:
* Literature review on domain adaptation was lacking. Recent CVPR papers on transforming samples from source to target should be referred to: one of them is by Shrivastava et al., Learning from Simulated and Unsupervised Images through Adversarial Training, and another by Bousmalis et al., Unsupervised Pixel-level Domain Adaptation with GANs. Also, you might want to mention Domain Separation Networks, which uses gradient reversal (Ganin et al.) and autoencoders (Ghifary et al.). There was no mention of MMD-based methods, on which there are a few papers. The authors might also want to mention non-deep-learning methods, or clarify that this review is restricted to neural networks.
* On p. 4 it wasn't clear to me how the semi-supervised tasks of Tarvainen and Laine differ from domain adaptation. Did you want to say that the data distributions are different? How does this make the task different? Having source and target come in different minibatches is purely an implementation decision.
* It was unclear to me what footnote a. on p. 6 means. Why would you combine results from Ganin et al. and Ghifary et al.?
* To preserve anonymity keep acknowledgements out of blind submissions. (although not a big deal with your acknowledgements) |
ICLR | Title
Replicable Bandits
Abstract
In this paper, we introduce the notion of replicable policies in the context of stochastic bandits, one of the canonical problems in interactive learning. A policy in the bandit environment is called replicable if it pulls, with high probability, the exact same sequence of arms in two different and independent executions (i.e., under independent reward realizations). We show that not only do replicable policies exist, but also they achieve almost the same optimal (non-replicable) regret bounds in terms of the time horizon. More specifically, in the stochastic multi-armed bandits setting, we develop a policy with an optimal problem-dependent regret bound whose dependence on the replicability parameter is also optimal. Similarly, for stochastic linear bandits (with finitely and infinitely many arms) we develop replicable policies that achieve the best-known problem-independent regret bounds with an optimal dependency on the replicability parameter. Our results show that even though randomization is crucial for the exploration-exploitation trade-off, an optimal balance can still be achieved while pulling the exact same arms in two different rounds of executions.
1 INTRODUCTION
In order for scientific findings to be valid and reliable, the experimental process must be repeatable, and must provide coherent results and conclusions across these repetitions. In fact, lack of reproducibility has been a major issue in many scientific areas; a 2016 survey that appeared in Nature (Baker, 2016a) revealed that more than 70% of researchers failed in their attempt to reproduce another researcher’s experiments. What is even more concerning is that over 50% of them failed to reproduce their own findings. Similar concerns have been raised by the machine learning community, e.g., the ICLR 2019 Reproducibility Challenge (Pineau et al., 2019) and NeurIPS 2019 Reproducibility Program (Pineau et al., 2021), due to the exponential increase in the number of publications and concerns about the reliability of the findings.
The aforementioned empirical evidence has recently led to theoretical studies and rigorous definitions of replicability. In particular, the works of Impagliazzo et al. (2022) and Ahn et al. (2022) considered replicability as an algorithmic property through the lens of (offline) learning and convex optimization, respectively. In a similar vein, in the current work, we introduce the notion of replicability in the context of interactive learning and decision making. In particular, we study replicable policy design for the fundamental setting of stochastic bandits.
A multi-armed bandit (MAB) is a one-player game that is played over T rounds where there is a set of different arms/actions A of size |A| = K (in the more general case of linear bandits, we can consider even an infinite number of arms). In each round t = 1, 2, . . . , T , the player pulls an arm at ∈ A and receives a corresponding reward rt. In the stochastic setting, the rewards of each
arm are sampled in each round independently from some fixed but unknown distribution supported on [0, 1]. Crucially, each arm has a potentially different reward distribution, but the distribution of each arm is fixed over time. A bandit algorithm A at every round t takes as input the sequence of arm-reward pairs that it has seen so far, i.e., (a_1, r_1), . . . , (a_{t−1}, r_{t−1}), then uses (potentially) some internal randomness ξ to pull an arm a_t ∈ A and, finally, observes the associated reward r_t ∼ D_{a_t}. We propose the following natural notion of a replicable bandit algorithm, which is inspired by the definition of Impagliazzo et al. (2022). Intuitively, a bandit algorithm is replicable if two distinct executions of the algorithm, with internal randomness fixed between both runs but with independent reward realizations, give the exact same sequence of played arms, with high probability. More formally, we have the following definition. Definition 1 (Replicable Bandit Algorithm). Let ρ ∈ [0, 1]. We call a bandit algorithm A ρ-replicable in the stochastic setting if for any distribution D_{a_j} over [0, 1] of the rewards of the j-th arm a_j ∈ A, and for any two executions of A, where the internal randomness ξ is shared across the executions, it holds that
Pr_{ξ, r^{(1)}, r^{(2)}} [ (a_1^{(1)}, . . . , a_T^{(1)}) = (a_1^{(2)}, . . . , a_T^{(2)}) ] ≥ 1 − ρ .
Here, a_t^{(i)} = A(a_1^{(i)}, r_1^{(i)}, . . . , a_{t−1}^{(i)}, r_{t−1}^{(i)}; ξ) is the t-th action taken by the algorithm A in execution i ∈ {1, 2}.
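To make the definition concrete, here is an illustrative Python sketch (ours, not from the paper) of how one could empirically estimate ρ for a given policy: run it twice with the same internal seed ξ but independent reward realizations and compare the arm sequences.

import numpy as np

def run_policy(policy, means, T, internal_seed, reward_seed):
    xi = np.random.default_rng(internal_seed)   # shared internal randomness
    env = np.random.default_rng(reward_seed)    # independent reward realizations
    history, arms = [], []
    for t in range(T):
        a = policy(history, xi)
        r = env.binomial(1, means[a])           # Bernoulli rewards supported on [0, 1]
        history.append((a, r))
        arms.append(a)
    return arms

def replicability_failure_rate(policy, means, T, trials=100):
    fails = 0
    for s in range(trials):
        run1 = run_policy(policy, means, T, internal_seed=s, reward_seed=2 * s)
        run2 = run_policy(policy, means, T, internal_seed=s, reward_seed=2 * s + 1)
        fails += int(run1 != run2)
    return fails / trials                        # empirical estimate of rho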
The reason why we allow for some fixed internal randomness is that the algorithm designer has control over it, e.g., they can use the same seed for their (pseudo)random generator between two executions. Clearly, naively designing a replicable bandit algorithm is not particularly challenging. For instance, an algorithm that always pulls the same arm or an algorithm that plays the arms in a particular random sequence determined by the shared random seed ξ are both replicable. The caveat is that the performance of these algorithms in terms of expected regret will be quite poor. In this work, we aim to design bandit algorithms which are replicable and enjoy small expected regret. In the stochastic setting, the (expected) regret after T rounds is defined as
E[R_T] = T · max_{a∈A} µ_a − E[ Σ_{t=1}^{T} µ_{a_t} ] ,
where µ_a = E_{r∼D_a}[r] is the mean reward for arm a ∈ A. In a similar manner, we can define the regret in the more general setting of linear bandits (see Section 5). Hence, the overarching question in this work is the following:
Is it possible to design replicable bandit algorithms with small expected regret?
At a first glance, one might think that this is not possible, since it looks like replicability contradicts the exploratory behavior that a bandit algorithm should possess. However, our main results answer this question in the affirmative and can be summarized in Table 1.
1.1 RELATED WORK
Reproducibility/Replicability. In this work, we introduce the notion of replicability in the context of interactive learning and, in particular, in the fundamental setting of stochastic bandits. Close to our work, the notion of a replicable algorithm in the context of learning was proposed by Impagliazzo et al. (2022), where it is shown how any statistical query algorithm can be made replicable with a moderate increase in its sample complexity. Using this result, they provide replicable algorithms for finding approximate heavy-hitters, medians, and the learning of half-spaces. Reproducibility has been also considered in the context of optimization by Ahn et al. (2022). We mention that in Ahn et al. (2022) the notion of a replicable algorithm is different from our work and that of Impagliazzo et al. (2022), in the sense that the outputs of two different executions of the algorithm do not need to be exactly the same. From a more application-oriented perspective, Shamir & Lin (2022) study irreproducibility in recommendation systems and propose the use of smooth activations (instead of ReLUs) to improve recommendation reproducibility. In general, the reproducibility crisis is reported in various scientific disciplines Ioannidis (2005); McNutt (2014); Baker (2016b); Goodman et al. (2016); Lucic et al. (2018); Henderson et al. (2018). For more details we refer to the report of the NeurIPS 2019 Reproducibility Program Pineau et al. (2021) and the ICLR 2019 Reproducibility Challenge Pineau et al. (2019).
Bandit Algorithms. Stochastic multi-armed bandits for the general setting without structure have been studied extensively Slivkins (2019); Lattimore & Szepesvári (2020); Bubeck et al. (2012b); Auer et al. (2002); Cesa-Bianchi & Fischer (1998); Kaufmann et al. (2012a); Audibert et al. (2010); Agrawal & Goyal (2012); Kaufmann et al. (2012b). In this setting, the optimum regret achievable is O( log(T) Σ_{i:∆_i>0} ∆_i^{−1} ); this is achieved, e.g., by the upper confidence bound (UCB) algorithm of Auer et al. (2002). The setting of d-dimensional linear stochastic bandits is also well-explored Dani et al. (2008); Abbasi-Yadkori et al. (2011) under the well-specified linear reward model, achieving (near) optimal problem-independent regret of O(d√T log(T)) Lattimore & Szepesvári (2020). Note that the best-known lower bound is Ω(d√T) Dani et al. (2008) and that the number of arms can, in principle, be unbounded. For a finite number of arms K, the best known upper bound is O(√(dT log(K))) Bubeck et al. (2012a). Our work focuses on the design of replicable bandit algorithms and we hence consider only stochastic environments. In general, there is also extensive work in adversarial bandits and we refer the interested reader to Lattimore & Szepesvári (2020).
Batched Bandits. While sequential bandit problems have been studied for almost a century, there is much interest in the batched setting too. In many settings, like medical trials, one has to take many actions in parallel and observe their rewards later. The works of Auer & Ortner (2010) and Cesa-Bianchi et al. (2013) provided sequential bandit algorithms which can easily work in the batched setting. The works of Gao et al. (2019) and Esfandiari et al. (2021) focus exclusively on the batched setting. Our work on replicable bandits builds upon some of the techniques from these two lines of work.
2 STOCHASTIC BANDITS AND REPLICABILITY
In this section, we first highlight the main challenges in order to guarantee replicability and then discuss how the results of Impagliazzo et al. (2022) can be applied in our setting.
2.1 WARM-UP I: NAIVE REPLICABILITY AND CHALLENGES
Let us consider the stochastic two-arm setting (K = 2) and a bandit algorithm A with two independent executions, A_1 and A_2. The algorithm A_i plays the sequence 1, 2, 1, 2, . . . until some, potentially random, round T_i ∈ N, after which one of the two arms is eliminated and, from that point on, the algorithm picks the winning arm j_i ∈ {1, 2}. The algorithm A is ρ-replicable if and only if T_1 = T_2 and j_1 = j_2 with probability 1 − ρ. Assume that |µ_1 − µ_2| = ∆, where µ_i is the mean of the distribution of the i-th arm. If we assume that ∆ is known, then we can run the algorithm for T_1 = T_2 = C∆^{−2} log(1/ρ) rounds, for some universal constant C > 0, and obtain that, with probability 1 − ρ, it will hold that µ̂_1^{(j)} ≈ µ_1 and µ̂_2^{(j)} ≈ µ_2
for j ∈ {1, 2}, where µ̂_i^{(j)} is the estimate of arm i's mean during execution j. Hence, knowing ∆ implies that the stopping criterion of the algorithm A is deterministic and that, with high probability, the winning arm will be detected at time T_1 = T_2. This will make the algorithm ρ-replicable.
Observe that when K = 2, the only obstacle to replicability is that the algorithm should decide at the same time to select the winning arm and the selection must be the same in the two execution threads. In the presence of multiple arms, there exists the additional constraint that the above conditions must be satisfied during, potentially, multiple arm eliminations. Hence, the two questions arising from the above discussion are (i) how to modify the above approach when ∆ is unknown and (ii) how to deal with K > 2 arms.
A potential solution to the second question (on handling K > 2 arms) is the Explore-Then-Commit (ETC) strategy. Consider the stochastic K-arm bandit setting. For any ρ ∈ (0, 1), the ETC algorithm with known ∆ = min_i ∆_i and horizon T that uses m = 4∆^{−2} log(1/ρ) deterministic exploration phases before commitment is ρ-replicable. The intuition is exactly the same as in the K = 2 case. The caveats of this approach are that it assumes that ∆ is known and that the obtained regret is quite unsatisfying. In particular, it achieves regret bounded by m Σ_{i∈[K]} ∆_i + ρ · (T − mK) Σ_{i∈[K]} ∆_i.
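An illustrative sketch (ours, not from the paper) of this Explore-Then-Commit warm-up with a known gap ∆: because the exploration length m is deterministic, two executions commit at the same round and, with probability at least 1 − ρ, to the same arm. Here pull(a) is assumed to return a fresh reward for arm a.

import numpy as np

def etc_known_gap(pull, K, T, gap, rho):
    m = int(np.ceil(4 * np.log(1 / rho) / gap**2))   # deterministic pulls per arm
    means = np.zeros(K)
    played = []
    for a in range(K):
        for _ in range(m):
            means[a] += pull(a) / m
            played.append(a)
    best = int(np.argmax(means))                      # commit to the empirically best arm
    played.extend([best] * (T - m * K))
    return played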
Next, we discuss how to improve the regret bound without knowing the gaps ∆i. Before designing new algorithms, we will inspect the guarantees that can be obtained by combining ideas from previous results in the bandits literature and the recent work in replicable learning of Impagliazzo et al. (2022).
2.2 WARM-UP II: BANDIT ALGORITHMS AND REPLICABLE MEAN ESTIMATION
First, we remark that we work in the stochastic setting and the distributions of the rewards of the arms are subgaussian. Thus, the problem of estimating their means is an instance of a statistical query, for which we can use the algorithm of Impagliazzo et al. (2022) to get a replicable mean estimator for the distributions of the rewards of the arms. Proposition 2 (Replicable Mean Estimation (Impagliazzo et al., 2022)). Let τ, δ, ρ ∈ [0, 1]. There exists a ρ-replicable algorithm ReprMeanEstimation that draws Ω( log(1/δ) / (τ^2 (ρ − δ)^2) ) samples from a distribution with mean µ and computes an estimate µ̂ that satisfies |µ̂ − µ| ≤ τ with probability at least 1 − δ.
Notice that we are working in the regime where δ ≪ ρ, so the sample complexity is Ω( log(1/δ) / (τ^2 ρ^2) ).
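For intuition, the following Python sketch shows one standard way to obtain such a replicable estimator via random-offset rounding, in the spirit of Impagliazzo et al. (2022); the exact procedure and constants of ReprMeanEstimation may differ, so treat this as an assumption-laden illustration rather than the paper's subroutine.

import numpy as np

def repr_mean_estimation(samples, tau, shared_rng):
    # Round the empirical mean to a grid of width ~2*tau whose offset is drawn from the
    # shared internal randomness; two executions whose empirical means are within tau of
    # each other then snap to the same grid point with high probability.
    width = 2 * tau
    offset = shared_rng.uniform(0, width)          # identical in both executions
    mu_hat = float(np.mean(samples))
    return offset + width * np.round((mu_hat - offset) / width)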
The straightforward approach is to try to use an optimal multi-armed bandit algorithm for the stochastic setting, such as UCB or arm-elimination (Even-Dar et al., 2006), combined with the replicable mean estimator. However, it is not hard to see that this approach does not give meaningful results: if we want to achieve replicability ρ we need to call the replicable mean estimator routine with parameter ρ/(KT), due to the union bound that we need to take. This means that we need to pull every arm at least K^2 T^2 times, so the regret guarantee becomes vacuous. This gives us the first key insight to tackle the problem: we need to reduce the number of calls to the mean estimator. Hence, we will draw inspiration from the line of work in stochastic batched bandits (Gao et al., 2019; Esfandiari et al., 2021) to derive replicable bandit algorithms.
3 REPLICABLE MEAN ESTIMATION FOR BATCHED BANDITS
As a first step, we would like to show how one could combine the existing replicable algorithms of Impagliazzo et al. (2022) with the batched bandits approach of Esfandiari et al. (2021) to get some preliminary non-trivial results. We build an algorithm for the K-arm setting, where the gaps ∆j are unknown to the learner. Let δ be the confidence parameter of the arm elimination algorithm and ρ be the replicability guarantee we want to achieve. Our approach is the following: let us, deterministically, split the time interval into sub-intervals of increasing length. We treat each subinterval as a batch of samples where we pull each active arm the same number of times and use the replicable mean estimation algorithm to, empirically, compute the true mean. At the end of each batch, we decide to eliminate some arm j using the standard UCB estimate. Crucially, if we condition on the event that all the calls to the replicable mean estimator return the same number, then the algorithm we propose is replicable.
Algorithm 1 Mean-Estimation Based Replicable Algorithm for Stochastic MAB (Theorem 3)
1: Input: time horizon T, number of arms K, replicability ρ
2: Initialization: B ← log(T), q ← T^{1/B}, c_0 ← 0, A ← [K], r ← T, µ̂_a ← 0 for all a ∈ A
3: for i = 1 to B − 1 do
4:   if ⌊q^i⌋ · |A| > r then
5:     break
6:   c_i ← c_{i−1} + ⌊q^i⌋
7:   Pull every arm a ∈ A for ⌊q^i⌋ times
8:   for a ∈ A do
9:     µ̂_a ← ReprMeanEstimation(δ = 1/(2KTB), τ = √(log(2KTB)/c_i), ρ′ = ρ/(KB))   ▷ Proposition 2
10:  r ← r − |A| · ⌊q^i⌋
11:  for a ∈ A do
12:    if µ̂_a < max_{a′∈A} µ̂_{a′} − 2τ then
13:      Remove a from A
14: In the last batch, play the arm from A with the smallest index
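A runnable Python sketch of this batched elimination scheme (ours, for illustration only); pull(a) is assumed to return a fresh reward for arm a, and repr_mean_estimation stands in for ReprMeanEstimation (e.g., the rounding-based sketch given earlier), with the replicability parameter folded into that stub.

import numpy as np

def algorithm1(pull, K, T, repr_mean_estimation, shared_rng):
    B = max(int(np.log(T)), 1)
    q = T ** (1.0 / B)
    active, remaining = list(range(K)), T
    samples = {a: [] for a in range(K)}
    mu_hat = np.zeros(K)
    for i in range(1, B):
        n_i = int(q ** i)
        if n_i * len(active) > remaining:
            break
        for a in active:
            samples[a].extend(pull(a) for _ in range(n_i))
        remaining -= len(active) * n_i
        c_i = len(samples[active[0]])
        tau = np.sqrt(np.log(2 * K * T * B) / c_i)
        for a in active:
            mu_hat[a] = repr_mean_estimation(samples[a], tau, shared_rng)
        best = max(mu_hat[a] for a in active)
        active = [a for a in active if mu_hat[a] >= best - 2 * tau]
    return min(active)   # the arm played for all remaining rounds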
Theorem 3. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 1) for the stochastic bandit problem with K arms and gaps (∆_j)_{j∈[K]} whose expected regret is
E[R_T] ≤ C · (K^2 log^2(T) / ρ^2) · Σ_{j:∆_j>0} ( ∆_j + log(KT log(T)) / ∆_j ) ,
where C > 0 is an absolute numerical constant, and its running time is polynomial in K, T and 1/ρ.
The above result, whose proof can be found in Appendix A, states that, by combining the tools from Impagliazzo et al. (2022) and Esfandiari et al. (2021), we can design a replicable bandit algorithm with (instance-dependent) expected regret O(K^2 log^3(T)/ρ^2). Notice that the regret guarantee has an extra K^2 log^2(T)/ρ^2 factor compared to its non-replicable counterpart in Esfandiari et al. (2021) (Theorem 5.1). This is because, due to a union bound over the rounds and the arms, we need to call the replicable mean estimator with parameter ρ/(K log(T)). In the next section, we show how to get rid of the log^2(T) factor by designing a new algorithm.
4 IMPROVED ALGORITHMS FOR REPLICABLE STOCHASTIC BANDITS
While the previous result provides a non-trivial regret bound, it is not optimal with respect to the time horizon T. In this section, we show how to improve it by designing a new algorithm, presented in Algorithm 2, which satisfies the guarantees of Theorem 4 and, essentially, decreases the dependence on the time horizon T from log^3(T) to log(T). Our main result for replicable stochastic multi-armed bandits with K arms follows. Theorem 4. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 2) for the stochastic bandit problem with K arms and gaps (∆_j)_{j∈[K]} whose expected regret is
E[R_T] ≤ C · (K^2 / ρ^2) · Σ_{j:∆_j>0} ( ∆_j + log(KT log(T)) / ∆_j ) ,
where C > 0 is an absolute numerical constant, and its running time is polynomial in K, T and 1/ρ.
Note that, compared to the non-replicable setting, we incur an extra factor of K^2/ρ^2 in the regret. The proof can be found in Appendix B. Let us now describe how Algorithm 2 works. We decompose the time horizon into B = log(T) batches. Without the replicability constraint, one could draw q^i samples in batch i from each arm and estimate the mean reward. With the replicability constraint, we have to boost this: in each batch i, we pull each active arm O(βq^i) times, for some q to be determined, where β = O(K^2/ρ^2) is the replicability blow-up. Using these samples, we compute
Algorithm 2 Replicable Algorithm for Stochastic Multi-Armed Bandits (Theorem 4)
1: Input: time horizon T, number of arms K, replicability ρ
2: Initialization: B ← log(T), q ← T^{1/B}, c_0 ← 0, A_0 ← [K], r ← T, µ̂_a ← 0 for all a ∈ A_0
3: β ← ⌊max{K^2/ρ^2, 2304}⌋
4: for i = 1 to B − 1 do
5:   if β⌊q^i⌋ · |A_i| > r then
6:     break
7:   A_i ← A_{i−1}
8:   for a ∈ A_i do
9:     Pull arm a for β⌊q^i⌋ times
10:    Compute the empirical mean µ̂_a^{(i)}
11:  c_i ← c_{i−1} + ⌊q^i⌋
12:  c̃_i ← βc_i
13:  Ũ_i ← √(2 ln(2KTB)/c̃_i)
14:  U_i ← √(2 ln(2KTB)/c_i)
15:  Ū_i ← Uni[U_i/2, U_i]
16:  r ← r − β · |A_i| · ⌊q^i⌋
17:  for a ∈ A_i do
18:    if µ̂_a^{(i)} + Ũ_i < max_{a′∈A_i} µ̂_{a′}^{(i)} − Ū_i then
19:      Remove a from A_i
20: In the last batch, play the arm from A_{B−1} with the smallest index
the empirical mean µ̂_a^{(i)} for any active arm a. Note that Ũ_i in Algorithm 2 corresponds to the size of the actual confidence interval of the estimation and U_i corresponds to the confidence interval of an algorithm that does not use the β-blow-up in the number of samples. The novelty of our approach comes from the choice of the interval around the mean of the maximum arm: we pick a threshold Ū_i uniformly at random from an interval of size U_i/2 around the maximum mean. Then, the algorithm checks whether µ̂_a^{(i)} + Ũ_i < max µ̂_{a′}^{(i)} − Ū_i, where the max runs over the active arms a′ in batch i, and eliminates arms accordingly. To prove the result, we show that there are three regions in which some arm j can lie relative to the confidence interval of the best arm in batch i (cf. Appendix B). If it lies in two of these regions, then the decision of whether to keep it or discard it is the same in both executions of the algorithm. However, if it is in the third region, the decision could differ between parallel executions, and since it relies on some external and unknown randomness, it is not clear how to reason about it. To overcome this issue, we use the random threshold to argue about the probability that the decisions of two executions differ. The crucial observation that allows us to get rid of the extra log^2(T) factor is that there are correlations between consecutive batches: we prove that if some arm j lies in this “bad” region in some batch i, then it will be outside this region after a constant number of batches.
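A minimal Python sketch (ours) of the elimination step with the shared random threshold; the key point is that shared_rng is seeded identically in both executions, so the draw Ū_i is the same in both.

import numpy as np

def eliminate(mu_hat, active, U_tilde, U, shared_rng):
    U_bar = shared_rng.uniform(U / 2, U)           # identical threshold in both executions
    best = max(mu_hat[a] for a in active)
    return [a for a in active if mu_hat[a] + U_tilde >= best - U_bar]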
5 REPLICABLE STOCHASTIC LINEAR BANDITS
We now investigate replicability in the more general setting of stochastic linear bandits. In this setting, each arm is a vector a ∈ R^d belonging to some action set A ⊆ R^d, and there is a parameter θ⋆ ∈ R^d unknown to the player. In round t, the player chooses some action a_t ∈ A and receives a reward r_t = ⟨θ⋆, a_t⟩ + η_t, where η_t is a zero-mean 1-subgaussian random variable independent of any other source of randomness. This means that E[η_t] = 0 and E[exp(λη_t)] ≤ exp(λ^2/2) for any λ ∈ R. For normalization purposes, it is standard to assume that ∥θ⋆∥_2 ≤ 1 and sup_{a∈A} ∥a∥_2 ≤ 1. In the linear setting, the expected regret after T pulls a_1, . . . , a_T can be written as
E[R_T] = T · sup_{a∈A} ⟨θ⋆, a⟩ − E[ Σ_{t=1}^{T} ⟨θ⋆, a_t⟩ ] .
In Section 5.1 we provide results for the finite action space case, i.e., when |A| = K. Next, in Section 5.2, we study replicable linear bandit algorithms when dealing with infinite action spaces. In the following, we work in the regime where T ≫ d. We underline that our approach leverages connections of stochastic linear bandits with G-optimal experiment design, core set constructions, and least-squares estimators. Roughly speaking, the goal of G-optimal design is to find a (small) subset of arms A′, which is called the core set, and define a distribution π over them with the following property: for any ε > 0, δ > 0, pulling only these arms for an appropriate number of times and computing the least-squares estimate θ̂ guarantees that sup_{a∈A} ⟨a, θ⋆ − θ̂⟩ ≤ ε, with probability 1 − δ. For an extensive discussion, we refer to Chapters 21 and 22 of Lattimore & Szepesvári (2020).
5.1 FINITE ACTION SET
We first introduce a lemma that allows us to reduce the size of the action set that our algorithm has to search over.
Lemma 5 (See Chapters 21 and 22 in Lattimore & Szepesvári (2020)). For any finite action set A that spans R^d and any δ, ε > 0, there exists an algorithm that, in time polynomial in d, computes a multi-set of Θ(d log(1/δ)/ε^2 + d log log d) actions (possibly with repetitions) such that (i) they span R^d and (ii) if we perform these actions in a batched stochastic d-dimensional linear bandit setting with true parameter θ⋆ ∈ R^d and let θ̂ be the least-squares estimate for θ⋆, then, for any a ∈ A, with probability at least 1 − δ, we have
|⟨a, θ⋆ − θ̂⟩| ≤ ε .
Essentially, the multi-set in Lemma 5 is obtained using an approximate G-optimal design algorithm. Thus, it is crucial to check whether this can be done in a replicable manner. Recall that the above set of distinct actions is called the core set and is the solution of an (approximate) G-optimal design problem. To be more specific, consider a distribution π : A → [0, 1] and define V(π) = Σ_{a∈A} π(a) a a^⊤ ∈ R^{d×d} and g(π) = sup_{a∈A} ∥a∥^2_{V(π)^{−1}}. The distribution π is called a design, and the goal of G-optimal design is to find a design that minimizes g. Since the number of actions is finite, this problem reduces to an optimization problem which can be solved efficiently using standard optimization methods (e.g., the Frank-Wolfe method). Since the initialization is the same, the algorithm that finds the optimal (or an approximately optimal) design is replicable under the assumption that the gradients and the projections do not have numerical errors. This perspective is orthogonal to the work of Ahn et al. (2022), which defines reproducibility from a different viewpoint.
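For concreteness, here is a hedged numpy sketch (ours, not the paper's implementation) of a Frank-Wolfe style iteration for an approximate G-optimal design over a finite action set, where the rows of the matrix A are the arms.

import numpy as np

def g_optimal_design(A, n_iters=1000, reg=1e-9):
    K, d = A.shape
    pi = np.full(K, 1.0 / K)                              # deterministic initialization
    for t in range(n_iters):
        V = A.T @ (A * pi[:, None]) + reg * np.eye(d)     # V(pi) = sum_a pi(a) a a^T
        V_inv = np.linalg.inv(V)
        g = np.einsum('ij,jk,ik->i', A, V_inv, A)         # ||a||^2_{V(pi)^{-1}} per arm
        a_star = int(np.argmax(g))
        gamma = 1.0 / (t + 2)                             # standard Frank-Wolfe step size
        pi = (1 - gamma) * pi
        pi[a_star] += gamma
    return pi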
Algorithm 3 Replicable Algorithm for Stochastic Linear Bandits (Theorem 6)
1: Input: number of arms K, time horizon T, replicability ρ
2: Initialization: B ← log(T), q ← (T/c)^{1/B}, A ← [K], r ← T
3: β ← ⌊max{K^2/ρ^2, 2304}⌋
4: for i = 1 to B − 1 do
5:   ε̃_i ← √(d log(KT^2)/(βq^i))
6:   ε_i ← √(d log(KT^2)/q^i)
7:   n_i ← 10d log(KT^2)/ε_i^2
8:   a_1, . . . , a_{n_i} ← multi-set given by Lemma 5 with parameters δ = 1/(KT^2) and ε = ε̃_i
9:   if n_i > r then
10:    break
11:  Pull every arm a_1, . . . , a_{n_i} and receive rewards r_1, . . . , r_{n_i}
12:  Compute the LSE θ̂_i ← ( Σ_{j=1}^{n_i} a_j a_j^⊤ )^{−1} ( Σ_{j=1}^{n_i} a_j r_j )
13:  ε̄_i ← Uni[ε_i/2, ε_i]
14:  r ← r − n_i
15:  for a ∈ A do
16:    if ⟨a, θ̂_i⟩ + ε̃_i < max_{a′∈A} ⟨a′, θ̂_i⟩ − ε̄_i then
17:      Remove a from A
18: In the last batch play argmax_{a∈A} ⟨a, θ̂_{B−1}⟩
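The least-squares estimate on line 12 is a standard computation; a minimal numpy sketch (with a small regularizer added for numerical invertibility, which the analysis itself does not need) is the following.

import numpy as np

def least_squares_estimate(pulled_arms, rewards, reg=1e-9):
    # pulled_arms: (n_i x d) matrix of pulled arms; rewards: length-n_i vector
    d = pulled_arms.shape[1]
    V = pulled_arms.T @ pulled_arms + reg * np.eye(d)   # sum_j a_j a_j^T
    return np.linalg.solve(V, pulled_arms.T @ rewards)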
In our batched bandit algorithm (Algorithm 3), the multi-set of arms a1, . . . , ani computed in each batch is obtained via a deterministic algorithm with runtime poly(K, d), where |A| = K. Hence, the
multi-set will be the same in two different executions of the algorithm. On the other hand, the LSE will not be, since it depends on the stochastic rewards. We apply the techniques that we developed in the replicable stochastic MAB setting in order to design our algorithm. Our main result for replicable d-dimensional stochastic linear bandits with K arms follows. For the proof, we refer to Appendix C. Theorem 6. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm for the stochastic d-dimensional linear bandit problem with K arms whose expected regret is
E[R_T] ≤ C · (K^2 / ρ^2) · √(dT log(KT)) ,
where C > 0 is an absolute numerical constant, and its running time is polynomial in d, K, T and 1/ρ.
Note that the best known non-replicable algorithm achieves an upper bound of Õ(√(dT log(K))) and, hence, our algorithm incurs a replicability overhead of order K^2/ρ^2. The intuition behind the proof is similar to the multi-armed bandit setting in Section 4.
5.2 INFINITE ACTION SET
Let us proceed to the setting where the action set A is unbounded. Unfortunately, even when d = 1, we cannot directly get an algorithm that has satisfactory regret guarantees by discretizing the space and using Algorithm 3. The approach of Esfandiari et al. (2021) is to discretize the action space and use a 1/T-net to cover it, i.e., a set A′ ⊆ A such that for all a ∈ A there exists some a′ ∈ A′ with ∥a − a′∥_2 ≤ 1/T. It is known that there exists such a net of size at most (3T)^d (Vershynin, 2018, Corollary 4.2.13). Then, they apply the algorithm for the finite-arm setting, increasing their regret guarantee by a factor of √d. However, our replicable algorithm for this setting contains an additional factor of K^2 in the regret bound. Thus, even when d = 1, our regret guarantee is greater than T, so the bound is vacuous. One way to fix this issue and get a sublinear regret guarantee is to use a smaller net. We use a 1/T^{1/(4d+2)}-net that has size at most (3T)^{d/(4d+2)}, and this yields an expected regret of order O( T^{(4d+1)/(4d+2)} √(d log(T)) / ρ^2 ). For further details, we refer to Appendix D.
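As a quick sanity check on the exponent (our own arithmetic, not taken from the paper), plugging K = (3T)^{d/(4d+2)} into the O( K^2 √(dT log(KT)) / ρ^2 ) bound of Theorem 6 gives, up to logarithmic factors,
K^2 √T = (3T)^{2d/(4d+2)} · T^{1/2} = O( T^{2d/(4d+2) + (2d+1)/(4d+2)} ) = O( T^{(4d+1)/(4d+2)} ),
while the discretization error contributes T · T^{−1/(4d+2)} = T^{(4d+1)/(4d+2)}, which is of the same order.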
Even though the regret guarantee we managed to get using the smaller net of Appendix D is sublinear in T , it is not a satisfactory bound. The next step is to provide an algorithm for the infinite action setting using a replicable LSE subroutine combined with the batching approach of Esfandiari et al. (2021). We will make use of the next lemma. Lemma 7 (Section 21.2 Note 3 of Lattimore & Szepesvári (2020)). There exists a deterministic algorithm that, given an action space A ⊆ Rd, computes a 2-approximate G-optimal design π with a core set of size O(d log log(d)).
We additionally prove the next useful lemma, which, essentially, states that we can assume without loss of generality that every arm in the support of π has mass at least Ω(1/(d log(d))). We refer to Appendix F.1 for the proof. Lemma 8 (Effective Support). Let π be the distribution that corresponds to the 2-approximate optimal G-design of Lemma 7 with input A. Assume that π(a) ≤ c/(d log(d)), where c > 0 is some absolute numerical constant, for some arm a in the core set. Then, we can construct a distribution π̂ such that, for any arm a in the core set, π̂(a) ≥ C/(d log(d)), where C > 0 is an absolute constant, so that it holds
sup_{a′∈A} ∥a′∥^2_{V(π̂)^{−1}} ≤ 4d .
The upcoming lemma is a replicable algorithm for the least-squares estimator and, essentially, builds upon Lemma 7 and Lemma 8. Its proof can be found in Appendix F.2. Lemma 9 (Replicable LSE). Let ρ, ε ∈ (0, 1] and 0 < δ ≤ min{ρ, 1/d}¹. Consider an environment of d-dimensional stochastic linear bandits with infinite action space A. Assume that π is a 4-approximate optimal design with associated core set C as computed by Lemma 7 with input A. There exists a ρ-replicable algorithm that pulls each arm a ∈ C a total of
Ω( d^4 log(d/δ) log^2 log(d) log log log(d) / (ε^2 ρ^2) )
times and outputs θ_SQ that satisfies sup_{a∈A} |⟨a, θ_SQ − θ⋆⟩| ≤ ε , with probability at least 1 − δ.
¹We can handle the case of 0 < δ ≤ d by paying an extra log d factor in the sample complexity.
Algorithm 4 Replicable LSE Algorithm for Stochastic Infinite Action Set (Theorem 10)
1: Input: time horizon T, action set A ⊆ R^d, replicability ρ
2: A′ ← 1/T-net of A
3: Initialization: r ← T, B ← log(T), q ← (T/c)^{1/B}
4: for i = 1 to B − 1 do
5:   ▷ q^i denotes the number of pulls of all arms before the replicability blow-up
6:   ε_i ← c · d √(log(T)/q^i)
7:   M_i ← q^i · d^3 log(d) log^2 log(d) log log log(d) log^2(T)/ρ^2   ▷ the replicability blow-up
8:   a_1, . . . , a_{|C_i|} ← core set C_i of the design given by Lemma 7 with parameter A′
9:   if ⌈M_i⌉ > r then
10:    break
11:  Pull every arm a_j for N_i = ⌈M_i⌉/|C_i| rounds and receive rewards r_1^{(j)}, . . . , r_{N_i}^{(j)} for j ∈ [|C_i|]
12:  S_i ← {(a_j, r_t^{(j)}) : t ∈ [N_i], j ∈ [|C_i|]}
13:  θ̂_i ← ReplicableLSE(S_i, ρ′ = ρ/(dB), δ = 1/(2|A′|T^2), τ = min{ε_i, 1})
14:  r ← r − ⌈M_i⌉
15:  for a ∈ A′ do
16:    if ⟨a, θ̂_i⟩ < max_{a′∈A′} ⟨a′, θ̂_i⟩ − 2ε_i then
17:      Remove a from A′
18: In the last batch play argmax_{a∈A′} ⟨a, θ̂_{B−1}⟩
19:
20: ReplicableLSE(S, ρ, δ, τ)
21: for a ∈ C do
22:   v(a) ← ReplicableSQ(ϕ : x ∈ R ↦ x ∈ R, S, ρ, δ, τ)   ▷ Impagliazzo et al. (2022)
23: return ( Σ_{j∈[|S|]} a_j a_j^⊤ )^{−1} · ( Σ_{a∈C} a · n_a · v(a) )
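A hedged Python sketch (ours) of the ReplicableLSE idea: replace each core-set arm's average reward with a replicable per-arm estimate and then solve the normal equations. Here repr_mean_estimation refers to the rounding-based sketch given earlier and is only a stand-in for the replicable statistical-query routine of Impagliazzo et al. (2022).

import numpy as np

def replicable_lse(samples_per_arm, tau, shared_rng, reg=1e-9):
    # samples_per_arm: dict mapping an arm (a tuple of d floats) to its list of observed rewards
    d = len(next(iter(samples_per_arm)))
    V = reg * np.eye(d)
    b = np.zeros(d)
    for arm, rewards in samples_per_arm.items():
        a = np.asarray(arm, dtype=float)
        n_a = len(rewards)
        V += n_a * np.outer(a, a)                              # sum_j a_j a_j^T
        v_a = repr_mean_estimation(rewards, tau, shared_rng)   # replicable per-arm estimate
        b += n_a * v_a * a
    return np.linalg.solve(V, b)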
The main result for the infinite-action case, obtained by Algorithm 4, follows. Its proof can be found in Appendix E. Theorem 10. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (Algorithm 4) for the stochastic d-dimensional linear bandit problem with infinite action set whose expected regret is
E[R_T] ≤ C · ( d^4 log(d) log^2 log(d) log log log(d) / ρ^2 ) · √T log^{3/2}(T) ,
where C > 0 is an absolute numerical constant, and its running time is polynomial in T^d and 1/ρ.
Our algorithm for the infinite-arm linear bandit case enjoys an expected regret of order Õ(poly(d)√T). We underline that the dependence of the regret on the time horizon is (almost) optimal, and we incur an extra d^3 factor in the regret guarantee compared to the non-replicable algorithm of Esfandiari et al. (2021). We now comment on the time complexity of our algorithm. Remark 11. The current implementation of our algorithm requires time exponential in d. However, for a general convex set A, given access to a separation oracle for it and an oracle that computes an (approximate) G-optimal design, we can execute it in polynomial time and with polynomially many calls to the oracle. Notably, when A is a polytope such oracles exist. We underline that computational complexity issues also arise in the traditional setting of linear bandits with an infinite number of arms, and the computational overhead that the replicability requirement adds is minimal. For further details, we refer to Appendix G.
6 CONCLUSION AND FUTURE DIRECTIONS
In this paper, we have provided a formal notion of reproducibility/replicability for stochastic bandits and we have developed algorithms for the multi-armed bandit and the linear bandit settings that satisfy this notion and enjoy a small regret decay compared to their non-replicable counterparts. We hope and believe that our paper will inspire future works in replicable algorithms for more complicated interactive learning settings such as reinforcement learning. We also provide experimental evaluation in Appendix H.
7 ACKNOWLEDGEMENTS
Alkis Kalavasis was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the “First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant”, project BALSAM, HFRIFM17-1424. Amin Karbasi acknowledges funding in direct support of this work from NSF (IIS-1845032), ONR (N00014-19-1-2406), and the AI Institute for Learning-Enabled Optimization at Scale (TILOS). Andreas Krause was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program grant agreement no. 815943 and the Swiss National Science Foundation under NCCR Automation, grant agreement 51NF40 180545. Grigoris Velegkas was supported by NSF (IIS-1845032), an Onassis Foundation PhD Fellowship and a Bodossaki Foundation PhD Fellowship.
A THE PROOF OF THEOREM 3
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 1) for the stochastic bandit problem with K arms and gaps (∆_j)_{j∈[K]} whose expected regret is
E[R_T] ≤ C · (K^2 log^2(T) / ρ^2) · Σ_{j:∆_j>0} ( ∆_j + log(2KT log(T)) / ∆_j ) ,
where C > 0 is an absolute numerical constant, and its running time is polynomial in K,T and 1/ρ.
Proof. First, we claim that the algorithm is ρ-replicable: since the elimination decisions are taken in the same iterates and are based solely on the mean estimations, the replicability of the algorithm of Proposition 2 implies the replicability of the whole algorithm. In particular,
Pr[(a_1, . . . , a_T) ≠ (a′_1, . . . , a′_T)] = Pr[∃i ∈ [B], ∃j ∈ [K] : µ̂_j^{(i)} was not replicable] ≤ ρ .
During each batch i, we draw ⌊q^i⌋ fresh samples for every active arm, for a total of c_i samples per arm, and use the replicable mean estimation algorithm to estimate its mean. For an active arm, at the end of some batch i ∈ [B], we say that its estimation is “correct” if the estimate of its mean is within √(log(2KTB)/c_i) of the true mean. Using Proposition 2, the estimation of any active arm at the end of any batch (except possibly the last batch) is correct with probability at least 1 − 1/(2KTB) and so, by the union bound, the probability that the estimation is incorrect for some arm at the end of some batch is bounded by 1/T. We remark that when δ < ρ, the sample complexity of Proposition 2 reduces to O(log(1/δ)/(τ^2 ρ^2)). Let E denote the event that our estimates are correct. The total expected regret can be bounded as
E[R_T] ≤ T · 1/T + E[R_T | E] .
It suffices to bound the second term of the RHS and hence we can assume that each gap is correctly estimated within an additive factor of √(log(2KTB)/c_i) after batch i. First, due to the elimination condition, we get that the best arm is never eliminated. Next, we have that
E[R_T | E] = Σ_{j:∆_j>0} ∆_j E[T_j | E] ,
where T_j is the total number of pulls of arm j. Fix a sub-optimal arm j and assume that i + 1 was the last batch in which it was active. Since this arm is not eliminated at the end of batch i, and the estimations are correct, we have that
∆_j ≤ √(log(2KTB)/c_i) ,
and so c_i ≤ log(2KTB)/∆_j^2. Hence, the number of pulls needed to get the desired bound due to Proposition 2 is (since we need to pull an arm c_i/ρ_1^2 times in order to get an estimate at distance √(log(1/δ)/c_i) with probability 1 − δ in a ρ_1-replicable manner when δ < ρ_1)
T_j ≤ c_{i+1}/ρ_1^2 = (q/ρ_1^2)(1 + c_i) ≤ (q/ρ_1^2) · (1 + log(2KTB)/∆_j^2) .
This implies that the total regret is bounded by
E[R_T] ≤ 1 + (q/ρ_1^2) · Σ_{j:∆_j>0} ( ∆_j + log(2KTB)/∆_j ) .
We finally set q = T^{1/B} and B = log(T). Moreover, we have that ρ_1 = ρ/(KB). These yield
E[R_T] ≤ (K^2 log^2(T)/ρ^2) · Σ_{j:∆_j>0} ( ∆_j + log(2KT log(T))/∆_j ) .
This completes the proof.
B THE PROOF OF THEOREM 4
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 2) for the stochastic bandit problem with K arms and gaps (∆_j)_{j∈[K]} whose expected regret is
E[R_T] ≤ C · (K^2/ρ^2) · Σ_{j:∆_j>0} ( ∆_j + log(KT log(T))/∆_j ) ,
for some absolute numerical constant C > 0, and its running time is polynomial in K, T and 1/ρ.
To give some intuition, we begin with a non-tight analysis which, however, provides the main ideas behind the actual proof.
Non-Tight Analysis. Assume that the environment has K arms with unknown means µ_i and let T be the number of rounds. Let B be the total number of batches and β > 1. We set q = T^{1/B}. In each batch i ∈ [B], we pull each arm β⌊q^i⌋ times. Hence, after the i-th batch, we will have drawn c̃_i = Σ_{1≤j≤i} β⌊q^j⌋ independent and identically distributed samples from each arm. Let us also set c_i = Σ_{1≤j≤i} ⌊q^j⌋.
Let us fix i ∈ [B]. Using Hoeffding's bound for subgaussian concentration, the length of the confidence bound for arm j ∈ [K] that guarantees 1 − δ probability of success (in the sense that the empirical estimate µ̂_j will be close to the true µ_j) is equal to
Ũ_i = √(2 log(1/δ)/c̃_i) ,
when the estimator uses c̃_i samples. Also, let U_i = √(2 log(1/δ)/c_i).
Assume that the active arms at batch iteration i lie in the set A_i. Consider the estimates {µ̂_j^{(i)}}_{i∈[B], j∈A_i}, where µ̂_j^{(i)} is the empirical mean of arm j using c̃_i samples. We will eliminate an arm j at the end of batch iteration i if
µ̂_j^{(i)} + Ũ_i ≤ max_{t∈A_i} µ̂_t^{(i)} − Ū_i ,
where Ū_i ∼ Uni[U_i/2, U_i]. For the remainder of the proof, we condition on the event E that for every arm j ∈ [K] and every batch i ∈ [B] the true mean is within Ũ_i of the empirical one. We first argue about the replicability of our algorithm. Consider a fixed round i (end of the i-th batch) and a fixed arm j. Let i⋆ be the optimal empirical arm after the i-th batch.
Let µ̂_j^{(i)′}, µ̂_{i⋆}^{(i)′} be the empirical estimates of arms j, i⋆ after the i-th batch under some other execution of the algorithm. We condition on the event E′ for the other execution as well. Notice that |µ̂_j^{(i)′} − µ̂_j^{(i)}| ≤ 2Ũ_i and |µ̂_{i⋆}^{(i)′} − µ̂_{i⋆}^{(i)}| ≤ 2Ũ_i. Notice that, since the randomness of Ū_i is shared, if µ̂_j^{(i)} + Ũ_i ≥ µ̂_{i⋆}^{(i)} − Ū_i + 4Ũ_i, then the arm j will not be eliminated after the i-th batch in some other execution of the algorithm as well. Similarly, if µ̂_j^{(i)} + Ũ_i < µ̂_{i⋆}^{(i)} − Ū_i − 4Ũ_i, then the arm j will get eliminated after the i-th batch in some other execution of the algorithm as well. In particular, this means that if µ̂_j^{(i)} − 2Ũ_i > µ̂_{i⋆}^{(i)} + Ũ_i − U_i/2 then the arm j will not get eliminated in some other execution of the algorithm, and if µ̂_j^{(i)} + 5Ũ_i < µ̂_{i⋆}^{(i)} − U_i then the arm j will also get eliminated in some other execution of the algorithm with probability 1 under the event E ∩ E′. We call the above two cases good since they preserve replicability. Thus, it suffices to bound the probability that the decision about arm j will be different between the two executions when we are in neither of these cases. Then, the worst case bound due to the mass of the uniform probability measure is
16 √(2 log(1/δ)/c̃_i) / √(2 log(1/δ)/c_i) .
This implies that the probability mass of the bad event is at most 16 √(c_i/c̃_i) = 16 √(1/β). A union bound over all arms and batches yields that the probability that two distinct executions differ in at least one pull is
Pr[(a_1, . . . , a_T) ≠ (a′_1, . . . , a′_T)] ≤ 16KB √(1/β) + 2δ ,
and since δ ≤ ρ it suffices to pick β = 768 K^2 B^2/ρ^2. We now focus on the regret of our algorithm. Let us set δ = 1/(KTB). Fix a sub-optimal arm j and assume that batch i + 1 was the last batch in which it was active. We obtain that the total number of pulls of this arm is
T_j ≤ c̃_{i+1} ≤ βq(1 + c_i) ≤ βq(1 + 8 log(1/δ)/∆_j^2) .
From the replicability analysis, it suffices to take β of order K^2 log^2(T)/ρ^2 and so
E[R_T] ≤ T · 1/T + E[R_T | E] = 1 + Σ_{j:∆_j>0} ∆_j E[T_j | E] ≤ C · (K^2 log^2(T)/ρ^2) · Σ_{j:∆_j>0} ( ∆_j + log(KT log(T))/∆_j ) ,
for some absolute constant C > 0.
Notice that the above analysis, which uses a naive union bound, does not yield the desired regret bound. We next provide a tighter analysis of the same algorithm that achieves the regret bound of Theorem 4.
Improved Analysis (The Proof of Theorem 4). In the previous analysis, we used a union bound over all arms and all batches in order to control the probability of the bad event. However, we can obtain an improved regret bound as follows. Fix a sub-optimal arm i ∈ [K] and let t be the first round in which it appears in the bad event. We claim that after a constant number of rounds, this arm will be eliminated. This will shave the O(log^2(T)) factor from the regret bound. Essentially, as indicated in the previous proof, the bad event corresponds to the case where the randomness of the cut-off threshold Ū can influence the decision of whether the algorithm eliminates an arm or not. The intuition is that during rounds t and t + 1, given that the two intervals intersected at round t, we know that the probability that they intersect again is quite small, since the interval of the optimal mean is moving upwards, the interval of the sub-optimal mean is concentrating around the guess, and the two estimations have moved by at most a constant times the interval's length.
Since the bad event occurs at round t, we know that
µ̂_j^{(t)} ∈ [ µ̂_{t⋆}^{(t)} − U_t − 5Ũ_t , µ̂_{t⋆}^{(t)} − U_t/2 + 3Ũ_t ] .
In the above, µ̂_{t⋆}^{(t)} is the estimate of the optimal mean at round t, whose index is denoted by t⋆. Now assume that the bad event for arm j also occurs at round t + k. Then, we have that
µ̂_j^{(t+k)} ∈ [ µ̂_{(t+k)⋆}^{(t+k)} − U_{t+k} − 5Ũ_{t+k} , µ̂_{(t+k)⋆}^{(t+k)} − U_{t+k}/2 + 3Ũ_{t+k} ] .
First, notice that since the concentration inequality under event E holds for rounds t, t + k, we have that µ̂_j^{(t+k)} ≤ µ̂_j^{(t)} + Ũ_t + Ũ_{t+k}. Thus, combining it with the above inequalities gives us
µ̂_{(t+k)⋆}^{(t+k)} − U_{t+k} − 5Ũ_{t+k} ≤ µ̂_j^{(t+k)} ≤ µ̂_j^{(t)} + Ũ_t + Ũ_{t+k} ≤ µ̂_{t⋆}^{(t)} − U_t/2 + 4Ũ_t + Ũ_{t+k} .
We now compare µ̂_{t⋆}^{(t)} and µ̂_{(t+k)⋆}^{(t+k)}. Let o denote the optimal arm. We have that
µ̂_{(t+k)⋆}^{(t+k)} ≥ µ̂_o^{(t+k)} ≥ µ_o − Ũ_{t+k} ≥ µ_{t⋆} − Ũ_{t+k} ≥ µ̂_{t⋆}^{(t)} − Ũ_t − Ũ_{t+k} .
This gives us that
µ̂_{t⋆}^{(t)} − U_{t+k} − 6Ũ_{t+k} − Ũ_t ≤ µ̂_{(t+k)⋆}^{(t+k)} − U_{t+k} − 5Ũ_{t+k} .
Thus, we have established that
µ̂_{t⋆}^{(t)} − U_{t+k} − 6Ũ_{t+k} − Ũ_t ≤ µ̂_{t⋆}^{(t)} − U_t/2 + 4Ũ_t + Ũ_{t+k} =⇒ U_{t+k} ≥ U_t/2 − 7Ũ_{t+k} − 5Ũ_t ≥ U_t/2 − 12Ũ_t .
Since β ≥ 2304, we get that 12Ũ_t ≤ U_t/4. Thus, we get that
U_{t+k} ≥ U_t/4 .
Notice that
U_{t+k}/U_t = √(c_t/c_{t+k}) ,
thus it immediately follows that
c_t/c_{t+k} ≥ 1/16 =⇒ (q^{t+1} − 1)/(q^{t+k+1} − 1) ≥ 1/16 =⇒ 16(1 − 1/q^{t+1}) ≥ q^k − 1/q^{t+1} =⇒ q^k ≤ 16 + 1/q^{t+1} ≤ 17 =⇒ k log q ≤ log 17 =⇒ k ≤ 5 ,
when we pick B = log(T) batches. Thus, for every arm the bad event can happen at most 6 times; by taking a union bound over the K arms, we see that the probability that our algorithm is not replicable is at most O(K √(1/β)), so picking β = Θ(K^2/ρ^2) suffices to get the result.
C THE PROOF OF THEOREM 6
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 3) for the stochastic d-dimensional linear bandit problem with K arms whose expected regret is
E[R_T] ≤ C · (K^2/ρ^2) · √(dT log(KT)) ,
for some absolute numerical constant C > 0, and its running time is polynomial in d, K, T and 1/ρ.
Proof. Let c, C be the numerical constants hidden in Lemma 5, i.e., the size of the multi-set is in the interval [c d log(1/δ)/ε^2, C d log(1/δ)/ε^2]. We know that the size of each batch satisfies n_i ∈ [cq^i, Cq^i] (see Lemma 5), so by the end of the (B − 1)-th batch we will have fewer than n_B pulls left. Hence, the number of batches is at most B.
We first define the event E that the estimates of all arms after the end of each batch are accurate, i.e., for every active arm a at the beginning of the i-th batch, at the end of the batch we have that |⟨a, θ̂_i − θ⋆⟩| ≤ ε̃_i. Since δ = 1/(KT^2) and there are at most T batches and K active arms in each batch, a simple union bound shows that E happens with probability at least 1 − 1/T. We condition on the event E throughout the rest of the proof. We now argue about the regret bound of our algorithm. We first show that any optimal arm a∗ will not get eliminated. Indeed, consider any sub-optimal arm a ∈ [K] and any batch i ∈ [B]. Under the event E we have that
⟨a, θ̂_i⟩ − ⟨a∗, θ̂_i⟩ ≤ (⟨a, θ∗⟩ + ε̃_i) − (⟨a∗, θ∗⟩ − ε̃_i) < 2ε̃_i < ε̃_i + ε̄_i .
Next, we need to bound the number of times we pull some fixed suboptimal arm a ∈ [K]. We let ∆ = ⟨a∗ − a, θ∗⟩ denote the gap and we let i be the smallest integer such that ε_i < ∆/4. We claim that this arm will get eliminated by the end of batch i. Indeed,
⟨a∗, θ̂_i⟩ − ⟨a, θ̂_i⟩ ≥ (⟨a∗, θ∗⟩ − ε̃_i) − (⟨a, θ∗⟩ + ε̃_i) = ∆ − 2ε̃_i > 4ε_i − 2ε̃_i > ε̃_i + ε_i .
This shows that during any batch i, all the active arms have gap at most 4ε_{i−1}. Thus, the regret of the algorithm conditioned on the event E is at most
Σ_{i=1}^{B} 4 n_i ε_{i−1} ≤ 4βC Σ_{i=1}^{B} q^i √(d log(KT^2)/q^{i−1}) ≤ 6βCq √(d log(KT)) Σ_{i=0}^{B−1} q^{i/2} ≤ O( βq^{B/2+1} √(d log(KT)) ) = O( (K^2/ρ^2) q^{B/2+1} √(d log(KT)) ) = O( (K^2/ρ^2) q √(dT log(KT)) ) .
Thus, the overall regret is bounded by δ · T + (1 − δ) · O( (K^2/ρ^2) q √(dT log(KT)) ) = O( (K^2/ρ^2) q √(dT log(KT)) ) .
We now argue about the replicability of our algorithm. The analysis follows in a similar fashion as in Theorem 4. Let θ̂_i, θ̂′_i be the LSEs after the i-th batch under two different executions of the algorithm. We condition on the event E′ for the other execution as well. Assume that the set of active arms is the same under both executions at the beginning of batch i. Notice that since the set that is guaranteed by Lemma 5 is computed by a deterministic algorithm, both executions will pull the same arms in batch i. Consider a suboptimal arm a and let a_{i∗} = argmax_{a∈A} ⟨θ̂_i, a⟩ and a′_{i∗} = argmax_{a∈A} ⟨θ̂′_i, a⟩. Under the event E ∩ E′ we have that |⟨a, θ̂_i − θ̂′_i⟩| ≤ 2ε̃_i, |⟨a_{i∗}, θ̂_i − θ̂′_i⟩| ≤ 2ε̃_i, and |⟨a′_{i∗}, θ̂′_i⟩ − ⟨a_{i∗}, θ̂_i⟩| ≤ 2ε̃_i. Notice that, since the randomness of ε̄_i is shared, if ⟨a, θ̂_i⟩ + ε̃_i ≥ ⟨a_{i∗}, θ̂_i⟩ − ε̄_i + 4ε̃_i, then the arm a will not be eliminated after the i-th batch in some other execution of the algorithm as well. Similarly, if ⟨a, θ̂_i⟩ + ε̃_i < ⟨a_{i∗}, θ̂_i⟩ − ε̄_i − 4ε̃_i, then the arm a will get eliminated after the i-th batch in some other execution of the algorithm as well. In particular, this means that if ⟨a, θ̂_i⟩ − 2ε̃_i > ⟨a_{i∗}, θ̂_i⟩ + ε̃_i − ε_i/2 then the arm a will not get eliminated in some other execution of the algorithm, and if ⟨a, θ̂_i⟩ + 5ε̃_i < ⟨a_{i∗}, θ̂_i⟩ − ε_i then the arm a will also get eliminated in some other execution of the algorithm with probability 1 under the event E ∩ E′. Thus, it suffices to bound the probability that the decision about arm a will be different between the two executions when we are in neither of these cases. Then, the worst case bound due to the mass of the uniform probability measure is
16 √(d log(1/δ)/c̃_i) / √(d log(1/δ)/c_i) .
This implies that the probability mass of the bad event is at most 16 √(c_i/c̃_i) = 16 √(1/β). A naive union bound would require us to pick β = Θ(K^2 log^2(T)/ρ^2). We next show how to avoid the log^2(T) factor. Fix a sub-optimal arm a ∈ [K] and let t be the first round in which it appears in the bad event. Since the bad event occurs at round t, we know that
⟨a, θ̂_t⟩ ∈ [ ⟨a_{t∗}, θ̂_t⟩ − ε_t − 5ε̃_t , ⟨a_{t∗}, θ̂_t⟩ − ε_t/2 + 3ε̃_t ] .
In the above, a_{t∗} is the optimal arm at round t w.r.t. the LSE. Now assume that the bad event for arm a also occurs at round t + k. Then, we have that
⟨a, θ̂_{t+k}⟩ ∈ [ ⟨a_{(t+k)∗}, θ̂_{t+k}⟩ − ε_{t+k} − 5ε̃_{t+k} , ⟨a_{(t+k)∗}, θ̂_{t+k}⟩ − ε_{t+k}/2 + 3ε̃_{t+k} ] .
First, notice that since the concentration inequality under event E holds for rounds t, t + k, we have that ⟨a, θ̂_{t+k}⟩ ≤ ⟨a, θ̂_t⟩ + ε̃_t + ε̃_{t+k}. Thus, combining it with the above inequalities gives us
⟨a_{(t+k)∗}, θ̂_{t+k}⟩ − ε_{t+k} − 5ε̃_{t+k} ≤ ⟨a, θ̂_{t+k}⟩ ≤ ⟨a, θ̂_t⟩ + ε̃_t + ε̃_{t+k} ≤ ⟨a_{t∗}, θ̂_t⟩ − ε_t/2 + 4ε̃_t + ε̃_{t+k} .
We now compare ⟨a_{t∗}, θ̂_t⟩ and ⟨a_{(t+k)∗}, θ̂_{t+k}⟩. Let a∗ denote the optimal arm. We have that
⟨a_{(t+k)∗}, θ̂_{t+k}⟩ ≥ ⟨a∗, θ̂_{t+k}⟩ ≥ ⟨a∗, θ∗⟩ − ε̃_{t+k} ≥ ⟨a_{t∗}, θ∗⟩ − ε̃_{t+k} ≥ ⟨a_{t∗}, θ̂_t⟩ − ε̃_{t+k} − ε̃_t .
This gives us that
⟨a_{t∗}, θ̂_t⟩ − ε_{t+k} − 6ε̃_{t+k} − ε̃_t ≤ ⟨a_{(t+k)∗}, θ̂_{t+k}⟩ − ε_{t+k} − 5ε̃_{t+k} .
Thus, we have established that
⟨a_{t∗}, θ̂_t⟩ − ε_{t+k} − 6ε̃_{t+k} − ε̃_t ≤ ⟨a_{t∗}, θ̂_t⟩ − ε_t/2 + 4ε̃_t + ε̃_{t+k} =⇒ ε_{t+k} ≥ ε_t/2 − 7ε̃_{t+k} − 5ε̃_t ≥ ε_t/2 − 12ε̃_t .
Since β ≥ 2304, we get that 12ε̃_t ≤ ε_t/4. Thus, we get that ε_{t+k} ≥ ε_t/4.
Notice that
ε_{t+k}/ε_t = √(q^t/q^{t+k}) ,
thus it immediately follows that
q^t/q^{t+k} ≥ 1/16 =⇒ q^k ≤ 16 =⇒ k log q ≤ log 16 =⇒ k ≤ 4 ,
when we pick B = log(T) batches. Thus, for every arm the bad event can happen at most 5 times; by taking a union bound over the K arms, we see that the probability that our algorithm is not replicable is at most O(K √(1/β)), so picking β = Θ(K^2/ρ^2) suffices to get the result.
D NAIVE APPLICATION OF ALGORITHM 3 WITH INFINITE ACTION SPACE
We use a 1/T^{1/(4d+2)}-net that has size at most (3T)^{d/(4d+2)}. Let A′ be the new set of arms. We then run Algorithm 3 using A′. This gives us the following result, which is proved right after. Corollary 12. Let T ∈ N, ρ ∈ (0, 1]. There is a ρ-replicable algorithm for the stochastic d-dimensional linear bandit problem with infinitely many arms whose expected regret is at most
E[R_T] ≤ C · (T^{(4d+1)/(4d+2)}/ρ^2) · √(d log(T)) ,
where C > 0 is an absolute numerical constant.
Proof. Since K ≤ (3T)^{d/(4d+2)}, we have that
T sup_{a∈A′} ⟨a, θ∗⟩ − E[ Σ_{t=1}^{T} ⟨a_t, θ∗⟩ ] ≤ O( ((3T)^{2d/(4d+2)}/ρ^2) √(dT log( T (3T)^{d/(4d+2)} )) ) = O( (T^{(4d+1)/(4d+2)}/ρ^2) √(d log(T)) ) .
Comparing to the best arm in A, we have that:
T sup_{a∈A} ⟨a, θ∗⟩ − E[ Σ_{t=1}^{T} ⟨a_t, θ∗⟩ ] = ( T sup_{a∈A} ⟨a, θ∗⟩ − T sup_{a∈A′} ⟨a, θ∗⟩ ) + ( T sup_{a∈A′} ⟨a, θ∗⟩ − E[ Σ_{t=1}^{T} ⟨a_t, θ∗⟩ ] ) .
Our choice of the 1/T^{1/(4d+2)}-net implies that for every a ∈ A there exists some a′ ∈ A′ such that ∥a − a′∥_2 ≤ 1/T^{1/(4d+2)}. Thus, sup_{a∈A} ⟨a, θ∗⟩ − sup_{a′∈A′} ⟨a′, θ∗⟩ ≤ ∥a − a′∥_2 ∥θ∗∥_2 ≤ 1/T^{1/(4d+2)}. Thus, the total regret is at most
T · 1/T^{1/(4d+2)} + O( (T^{(4d+1)/(4d+2)}/ρ^2) √(d log(T)) ) = O( (T^{(4d+1)/(4d+2)}/ρ^2) √(d log(T)) ) .
E THE PROOF OF THEOREM 10
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 4) for the stochastic d-dimensional linear bandit problem with infinite action set whose expected regret is
E[R_T] ≤ C · ( d^4 log(d) log^2 log(d) log log log(d) / ρ^2 ) · √T log^{3/2}(T) ,
for some absolute numerical constant C > 0, and its running time is polynomial in T^d and 1/ρ.
Proof. First, the algorithm is ρ-replicable since in each batch we use a replicable LSE sub-routine with parameter ρ′ = ρ/B. This implies that
Pr[(a_1, . . . , a_T) ≠ (a′_1, . . . , a′_T)] = Pr[∃i ∈ [B] : θ̂_i was not replicable] ≤ ρ .
Let us fix a batch iteration i ∈ [B − 1]. Let C_i be the core set computed by Lemma 7. The algorithm first pulls each one of the arms of the i-th core set C_i
n_i = C d^4 log(d/δ) log^2 log(d) log log log(d) / (ε_i^2 ρ′^2)
times, as indicated by Lemma 9, and computes the LSE θ̂_i in a replicable way using the algorithm of Lemma 9. Let E be the event that over all batches the estimations are correct. We pick δ = 1/(2|A′|T^2) so that this good event holds with probability at least 1 − 1/T. Our goal is to control the expected regret, which can be written as
E[R_T] = T sup_{a∈A} ⟨a, θ⋆⟩ − E[ Σ_{t=1}^{T} ⟨a_t, θ⋆⟩ ] .
We have that
T sup_{a∈A} ⟨a, θ⋆⟩ − T sup_{a′∈A′} ⟨a′, θ⋆⟩ ≤ 1 ,
since A′ is a deterministic 1/T-net of A. Also, let us define the expected regret of the bounded action sub-problem as
E[R′_T] = T sup_{a′∈A′} ⟨a′, θ⋆⟩ − E[ Σ_{t=1}^{T} ⟨a_t, θ⋆⟩ ] .
We can now employ the analysis of the finite-arm case. During batch i, any active arm has gap at most 4ε_{i−1}, so the instantaneous regret in any round is not more than 4ε_{i−1}. The expected regret conditional on the good event E is upper bounded by
E[R′_T | E] ≤ Σ_{i=1}^{B} 4 M_i ε_{i−1} ,
where M_i is the total number of pulls in batch i (using the replicability blow-up) and ε_{i−1} is the error one would achieve by drawing q^i samples (ignoring the blow-up). Then, for some absolute constant C > 0, we have that
E[R′_T | E] ≤ Σ_{i=1}^{B} 4 ( q^i d^3 log(d) log^2 log(d) log log log(d) log^2(T) / ρ^2 ) · √(d^2 log(T)/q^{i−1}) ,
which yields that
E[R′_T | E] ≤ C ( d^4 log(d) log^2 log(d) log log log(d) log(T) √(log(T)) / ρ^2 ) · S ,
where we set
S := Σ_{i=1}^{B} q^i / q^{(i−1)/2} = q^{1/2} Σ_{i=1}^{B} q^{i/2} = q^{(1+B)/2} .
We pick B = log(T) and get that, if q = T^{1/B}, then S = Θ(√T). We remark that this choice of q is valid since
Σ_{i=1}^{B} q^i = (q^{B+1} − q)/(q − 1) = Θ(q^B) ≥ Tρ^2 / (d^3 log(d) log^2 log(d) log log log(d)) .
Hence, we have that
E[R′_T | E] ≤ O( ( d^4 log(d) log^2 log(d) log log log(d) / ρ^2 ) √T log^{3/2}(T) ) .
Note that when E does not hold, we can bound the expected regret by 1/T · T = 1. This implies that the overall regret E[RT ] ≤ 2 + E[R′T |E ] and so it satisfies the desired bound and the proof is complete.
F DEFERRED LEMMATA
F.1 THE PROOF OF LEMMA 8
Proof. Consider the distribution π that is a 2-approximation to the optimal G-design and has support of size |C| = O(d log log d). Let C′ be the set of arms in the support such that π(a) ≤ c/(d log d). We consider π̃ = (1 − x)π + x δ_a, where δ_a is the point mass on an arm a ∈ C′ and x will be specified later. Consider now the matrix V(π̃). Using the Sherman-Morrison formula, we have that
V(π̃)^{−1} = (1/(1 − x)) V(π)^{−1} − ( x V(π)^{−1} a a^⊤ V(π)^{−1} ) / ( (1 − x)^2 ( 1 + (x/(1 − x)) ∥a∥^2_{V(π)^{−1}} ) ) = (1/(1 − x)) ( V(π)^{−1} − ( x V(π)^{−1} a a^⊤ V(π)^{−1} ) / ( 1 − x + x∥a∥^2_{V(π)^{−1}} ) ) .
Consider any arm a′. Then,
∥a′∥^2_{V(π̃)^{−1}} = (1/(1 − x)) ∥a′∥^2_{V(π)^{−1}} − (x/(1 − x)) · ( (a^⊤ V(π)^{−1} a′)^2 / ( 1 − x + x∥a∥^2_{V(π)^{−1}} ) ) ≤ (1/(1 − x)) ∥a′∥^2_{V(π)^{−1}} .
Note that we apply this transformation at most O(d log log d) times. Let π̂ be the distribution we end up with. We see that
∥a′∥^2_{V(π̂)^{−1}} ≤ ( 1/(1 − x) )^{c d log log d} ∥a′∥^2_{V(π)^{−1}} ≤ 2 ( 1/(1 − x) )^{c d log log d} d .
Notice that there is a constant c′ such that when x = c′/(d log d) we have that ( 1/(1 − x) )^{c d log log d} ≤ 2. Moreover, notice that the mass of every arm is at least x(1 − x)^{|C|} ≥ x − |C|x^2 = c′/(d log(d)) − c′′ d log log d/(d^2 log^2(d)) ≥ c/(d log(d)), for some absolute numerical constant c > 0. This concludes the claim.
F.2 THE PROOF OF LEMMA 9
Proof. The proof works when we can treat Ω(⌈d log(1/δ)π(a)/ε2⌉) as Ω(d log(1/δ)π(a)/ε2), i.e., as long as π(a) = Ω(ε2/d log(1/δ)). In the regime we are in, this point is handled thanks to Lemma 8. Combining the following proof with Lemma 8, we can obtain the desired result.
We underline that we work in the fixed design setting: the arms ai are deterministically chosen independently of the rewards ri. Assume that the core set of Lemma 7 is the set C. Fix the multi-set S = {(ai, ri) : i ∈ [M ]}, where each arm a lies in the core set and is pulled na = Θ(π(a)d log(d) log(|C|/δ)/ε2) times2. Hence, we have that
M = ∑ a∈C na = Θ ( d log(d) log(|C|/δ)/ε2 ) .
Let also V = ∑
i∈[M ] aia ⊤ i . The least-squares estimator can be written as
θ (ε) LSE = V −1 ∑ i∈[M ] airi = V −1 ∑ a∈C a ∑ i∈[na] ri(a) ,
where each a lies in the core set (deterministically) and ri(a) is the i-th reward generated independently by the linear regression process ⟨θ⋆, a⟩+ξ, where ξ is a fresh zero mean sub-gaussian random variable. Our goal is to reproducibly estimate the value ∑ i∈[na] ri(a) for any a. This is sufficient since two independent executions of the algorithm share the set C and na for any a. Note that the above sum is a random variable. In the following, we condition on the high-probability event that the average reward of the arm a is ε-close to the expected one, i.e., the value ⟨θ⋆, a⟩. This happens with probability at least 1− δ/(2|C|), given Ω(π(a)d log(d) log(|C|/δ)/ε2) samples from arm a ∈ C. In order to guarantee replicability, we will apply a result from Impagliazzo et al. (2022). Since we will union bound over all arms in the core set and |C| = O(d log log(d)) (via Lemma 7), we will make use of a (ρ/|C|)-replicable algorithm that gives an estimate v(a) ∈ R such that
|⟨θ⋆, a⟩ − v(a)| ≤ τ ,
with probability at least 1 − δ/(2|C|). For δ < ρ, the algorithm uses S_a = Ω( d^2 log(d/δ) log^2 log(d) log log log(d)/(ρ^2 τ^2) ) many samples from the linear regression with fixed arm a ∈ C. Since we have conditioned on the randomness of r_i(a) for any i, we get

| (1/n_a) ∑_{i∈[n_a]} r_i(a) − v(a) | ≤ | (1/n_a) ∑_{i∈[n_a]} r_i(a) − ⟨θ^⋆, a⟩ | + | ⟨θ^⋆, a⟩ − v(a) | ≤ ε + τ ,

with probability at least 1 − δ/(2|C|). Hence, by repeating this approach for all arms in the core set, we set θ_{SQ} = V^{-1} ∑_{a∈C} a n_a v(a). Let us condition on the randomness of the estimate θ^{(ε)}_{LSE}. We have that

sup_{a′∈A} |⟨a′, θ_{SQ} − θ^⋆⟩| ≤ sup_{a′∈A} |⟨a′, θ_{SQ} − θ^{(ε)}_{LSE}⟩| + sup_{a′∈A} |⟨a′, θ^{(ε)}_{LSE} − θ^⋆⟩| .
² Recall that π(a) ≥ c/(d log(d)), for some constant c > 0, so the previous expression is Ω(log(|C|/δ)/ε^2).
Note that the second term is ε with probability at least 1− δ via Lemma 5. Our next goal is to tune the accuracy τ ∈ (0, 1) so that the first term yields another ε error. For the first term, we have that
sup_{a′∈A} |⟨a′, θ_{SQ} − θ^{(ε)}_{LSE}⟩| ≤ sup_{a′∈A} | ⟨a′, V^{-1} ∑_{a∈C} a n_a (ε + τ)⟩ | .

Note that V = (C d log(d) log(|C|/δ)/ε^2) ∑_{a∈C} π(a) a a^⊤ and so V^{-1} = (ε^2/(C d log(d) log(|C|/δ))) V(π)^{-1}, for some absolute constant C > 0. This implies that

sup_{a′∈A} |⟨a′, θ_{SQ} − θ^{(ε)}_{LSE}⟩| ≤ (ε + τ) sup_{a′∈A} | ⟨ a′, (ε^2/(C d log(d) log(|C|/δ))) V(π)^{-1} ∑_{a∈C} (C d log(d) log(|C|/δ) π(a)/ε^2) a ⟩ | .

Hence, we get that

sup_{a′∈A} |⟨a′, θ_{SQ} − θ^{(ε)}_{LSE}⟩| ≤ (ε + τ) sup_{a′∈A} | ⟨ a′, V(π)^{-1} ∑_{a∈C} π(a) a ⟩ | .

Consider a fixed arm a′ ∈ A. Then,

| ⟨ a′, V(π)^{-1} ∑_{a∈C} π(a) a ⟩ | ≤ ∑_{a∈C} π(a) | ⟨a′, V(π)^{-1} a⟩ | ≤ ∑_{a∈C} π(a) ( 1 + | ⟨a′, V(π)^{-1} a⟩ |^2 ) = 1 + ∑_{a∈C} π(a) | ⟨a′, V(π)^{-1} a⟩ |^2

1. What is the focus and contribution of the paper regarding the stochastic multi-armed bandit problem?
2. What are the strengths of the proposed approach, particularly in terms of ensuring reproducibility?
3. Do you have any concerns or questions about the technical analysis, especially regarding the upper bound scaling?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper studies the stochastic multi-armed bandit problem (K-armed as well as the linear version) under "reproducibility constraints," i.e., the policy should play the same sequence of arms in any two i.i.d. instances of the problem (using the same algorithmic random seed) with probability at least 1 − ρ. The authors propose ρ-reproducible policies with rate-optimal regret (w.r.t. T).
Strengths And Weaknesses
The paper is technically sound. It is insightful to see that while standard bandit algorithms are not reproducible in general, one can with only a slight multiplicative increase in sample complexity ensure reproducibility.
Clarity, Quality, Novelty And Reproducibility
It certainly is an interesting problem to study from a mathematical perspective; the authors point to antecedents in the literature that underscore the importance of reproducibility. The main technical contribution (in my opinion) is showing that one can get rate-optimality of regret w.r.t. T while guaranteeing reproducibility simultaneously. The paper is well written and appears comprehensive in fleshing out connections to extant literature.
Question: Is the 1/ρ² scaling of the upper bounds best possible w.r.t. ρ? Can you please elaborate on this in the paper?
ICLR | Title
Replicable Bandits
Abstract
In this paper, we introduce the notion of replicable policies in the context of stochastic bandits, one of the canonical problems in interactive learning. A policy in the bandit environment is called replicable if it pulls, with high probability, the exact same sequence of arms in two different and independent executions (i.e., under independent reward realizations). We show that not only do replicable policies exist, but also they achieve almost the same optimal (non-replicable) regret bounds in terms of the time horizon. More specifically, in the stochastic multi-armed bandits setting, we develop a policy with an optimal problem-dependent regret bound whose dependence on the replicability parameter is also optimal. Similarly, for stochastic linear bandits (with finitely and infinitely many arms) we develop replicable policies that achieve the best-known problem-independent regret bounds with an optimal dependency on the replicability parameter. Our results show that even though randomization is crucial for the exploration-exploitation trade-off, an optimal balance can still be achieved while pulling the exact same arms in two different rounds of executions.
1 INTRODUCTION
In order for scientific findings to be valid and reliable, the experimental process must be repeatable, and must provide coherent results and conclusions across these repetitions. In fact, lack of reproducibility has been a major issue in many scientific areas; a 2016 survey that appeared in Nature (Baker, 2016a) revealed that more than 70% of researchers failed in their attempt to reproduce another researcher’s experiments. What is even more concerning is that over 50% of them failed to reproduce their own findings. Similar concerns have been raised by the machine learning community, e.g., the ICLR 2019 Reproducibility Challenge (Pineau et al., 2019) and NeurIPS 2019 Reproducibility Program (Pineau et al., 2021), due to the exponential increase in the number of publications and the reliability of the findings.
The aforementioned empirical evidence has recently led to theoretical studies and rigorous definitions of replicability. In particular, the works of Impagliazzo et al. (2022) and Ahn et al. (2022) considered replicability as an algorithmic property through the lens of (offline) learning and convex optimization, respectively. In a similar vein, in the current work, we introduce the notion of replicability in the context of interactive learning and decision making. In particular, we study replicable policy design for the fundamental setting of stochastic bandits.
A multi-armed bandit (MAB) is a one-player game that is played over T rounds where there is a set of different arms/actions A of size |A| = K (in the more general case of linear bandits, we can consider even an infinite number of arms). In each round t = 1, 2, . . . , T , the player pulls an arm at ∈ A and receives a corresponding reward rt. In the stochastic setting, the rewards of each
arm are sampled in each round independently, from some fixed but unknown, distribution supported on [0, 1]. Crucially, each arm has a potentially different reward distribution, but the distribution of each arm is fixed over time. A bandit algorithm A at every round t takes as input the sequence of arm-reward pairs that it has seen so far, i.e., (a1, r1), . . . , (at−1, rt−1), then uses (potentially) some internal randomness ξ to pull an arm at ∈ A and, finally, observes the associated reward rt ∼ Dat . We propose the following natural notion of a replicable bandit algorithm, which is inspired by the definition of Impagliazzo et al. (2022). Intuitively, a bandit algorithm is replicable if two distinct executions of the algorithm, with internal randomness fixed between both runs, but with independent reward realizations, give the exact same sequence of played arms, with high probability. More formally, we have the following definition. Definition 1 (Replicable Bandit Algorithm). Let ρ ∈ [0, 1]. We call a bandit algorithm A ρreplicable in the stochastic setting if for any distribution Daj over [0, 1] of the rewards of the j-th arm aj ∈ A, and for any two executions of A, where the internal randomness ξ is shared across the executions, it holds that
Pr_{ξ, r^{(1)}, r^{(2)}} [ ( a^{(1)}_1 , . . . , a^{(1)}_T ) = ( a^{(2)}_1 , . . . , a^{(2)}_T ) ] ≥ 1 − ρ .

Here, a^{(i)}_t = A(a^{(i)}_1, r^{(i)}_1, ..., a^{(i)}_{t−1}, r^{(i)}_{t−1}; ξ) is the t-th action taken by the algorithm A in execution i ∈ {1, 2}.
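For concreteness, the following Python harness (our illustration; the greedy policy and Bernoulli rewards are placeholder choices, not algorithms from this paper) runs a policy twice with the same internal seed ξ but fresh reward realizations and estimates how often the two arm sequences coincide, i.e., the probability appearing in Definition 1.

import numpy as np

def run(policy, means, T, xi_seed, reward_seed):
    xi = np.random.default_rng(xi_seed)           # shared internal randomness
    rewards = np.random.default_rng(reward_seed)  # fresh reward realizations
    history, arms = [], []
    for t in range(T):
        a = policy(history, xi)
        r = float(rewards.random() < means[a])    # Bernoulli reward in [0, 1]
        history.append((a, r))
        arms.append(a)
    return arms

def greedy(history, xi):
    # round-robin for the first 2K pulls, then play the empirical best arm
    K = 3
    if len(history) < 2 * K:
        return len(history) % K
    sums, counts = np.zeros(K), np.zeros(K)
    for a, r in history:
        sums[a] += r
        counts[a] += 1
    return int(np.argmax(sums / np.maximum(counts, 1)))

means, T = [0.50, 0.52, 0.48], 200
agree = sum(
    run(greedy, means, T, xi_seed=7, reward_seed=2 * s)
    == run(greedy, means, T, xi_seed=7, reward_seed=2 * s + 1)
    for s in range(100)
)
print(f"identical arm sequences in {agree}/100 paired runs")

With arms this close, the naive greedy policy disagrees between paired runs quite often, which illustrates why replicability does not come for free.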
The reason why we allow for some fixed internal randomness is that the algorithm designer has control over it, e.g., they can use the same seed for their (pseudo)random generator between two executions. Clearly, naively designing a replicable bandit algorithm is not quite challenging. For instance, an algorithm that always pulls the same arm or an algorithm that plays the arms in a particular random sequence determined by the shared random seed ξ are both replicable. The caveat is that the performance of these algorithms in terms of expected regret will be quite poor. In this work, we aim to design bandit algorithms which are replicable and enjoy small expected regret. In the stochastic setting, the (expected) regret after T rounds is defined as
E[R_T] = T max_{a∈A} µ_a − E[ ∑_{t=1}^{T} µ_{a_t} ] ,
where µ_a = E_{r∼D_a}[r] is the mean reward for arm a ∈ A. In a similar manner, we can define the regret in the more general setting of linear bandits (see Section 5). Hence, the overarching question in this work is the following:
Is it possible to design replicable bandit algorithms with small expected regret?
At a first glance, one might think that this is not possible, since it looks like replicability contradicts the exploratory behavior that a bandit algorithm should possess. However, our main results answer this question in the affirmative and can be summarized in Table 1.
1.1 RELATED WORK
Reproducibility/Replicability. In this work, we introduce the notion of replicability in the context of interactive learning and, in particular, in the fundamental setting of stochastic bandits. Close to our work, the notion of a replicable algorithm in the context of learning was proposed by Impagliazzo et al. (2022), where it is shown how any statistical query algorithm can be made replicable with a moderate increase in its sample complexity. Using this result, they provide replicable algorithms for finding approximate heavy-hitters, medians, and the learning of half-spaces. Reproducibility has been also considered in the context of optimization by Ahn et al. (2022). We mention that in Ahn et al. (2022) the notion of a replicable algorithm is different from our work and that of Impagliazzo et al. (2022), in the sense that the outputs of two different executions of the algorithm do not need to be exactly the same. From a more application-oriented perspective, Shamir & Lin (2022) study irreproducibility in recommendation systems and propose the use of smooth activations (instead of ReLUs) to improve recommendation reproducibility. In general, the reproducibility crisis is reported in various scientific disciplines Ioannidis (2005); McNutt (2014); Baker (2016b); Goodman et al. (2016); Lucic et al. (2018); Henderson et al. (2018). For more details we refer to the report of the NeurIPS 2019 Reproducibility Program Pineau et al. (2021) and the ICLR 2019 Reproducibility Challenge Pineau et al. (2019).
Bandit Algorithms. Stochastic multi-armed bandits for the general setting without structure have been studied extensively Slivkins (2019); Lattimore & Szepesvári (2020); Bubeck et al. (2012b); Auer et al. (2002); Cesa-Bianchi & Fischer (1998); Kaufmann et al. (2012a); Audibert et al. (2010); Agrawal & Goyal (2012); Kaufmann et al. (2012b). In this setting, the optimum regret achievable is O ( log(T ) ∑ i:∆i>0 ∆−1 ) ; this is achieved, e.g., by the upper confidence bound (UCB) algorithm of Auer et al. (2002). The setting of d-dimensional linear stochastic bandits is also well-explored Dani et al. (2008); Abbasi-Yadkori et al. (2011) under the well-specified linear reward model, achieving (near) optimal problem-independent regret of O(d √ T log(T )) Lattimore & Szepesvári (2020). Note that the best-known lower bound is Ω(d √ T ) Dani et al. (2008) and that the number of arms can, in principle, be unbounded. For a finite number of arms K, the best known upper bound is O( √ dT log(K)) Bubeck et al. (2012a). Our work focuses on the design of replicable bandit algorithms and we hence consider only stochastic environments. In general, there is also extensive work in adversarial bandits and we refer the interested reader to Lattimore & Szepesvári (2020).
Batched Bandits. While sequential bandit problems have been studied for almost a century, there is much interest in the batched setting too. In many settings, like medical trials, one has to take a lot of actions in parallel and observe their rewards later. The works of Auer & Ortner (2010) and CesaBianchi et al. (2013) provided sequential bandit algorithms which can easily work in the batched setting. The works of Gao et al. (2019) and Esfandiari et al. (2021) are focusing exclusively on the batched setting. Our work on replicable bandits builds upon some of the techniques from these two lines of work.
2 STOCHASTIC BANDITS AND REPLICABILITY
In this section, we first highlight the main challenges in order to guarantee replicability and then discuss how the results of Impagliazzo et al. (2022) can be applied in our setting.
2.1 WARM-UP I: NAIVE REPLICABILITY AND CHALLENGES
Let us consider the stochastic two-arm setting (K = 2) and a bandit algorithm A with two independent executions, A1 and A2. The algorithm Ai plays the sequence 1, 2, 1, 2, . . . until some, potentially random, round Ti ∈ N after which one of the two arms is eliminated and, from that point, the algorithm picks the winning arm ji ∈ {1, 2}. The algorithm A is ρ-replicable if and only if T1 = T2 and j1 = j2 with probability 1 − ρ. Assume that |µ1 − µ2| = ∆, where µi is the mean of the distribution of the i-th arm. If we assume that ∆ is known, then we can run the algorithm for T1 = T2 = C∆^{-2} log(1/ρ) rounds, for some universal constant C > 0, and obtain that, with probability 1 − ρ, it will hold that µ̂^{(j)}_1 ≈ µ1 and µ̂^{(j)}_2 ≈ µ2 for j ∈ {1, 2}, where µ̂^{(j)}_i is the estimate of arm i's mean during execution j. Hence, knowing ∆ implies that the stopping criterion of the algorithm A is deterministic and that, with high probability, the winning arm will be detected at time T1 = T2. This will make the algorithm ρ-replicable.
Observe that when K = 2, the only obstacle to replicability is that the algorithm should decide at the same time to select the winning arm and the selection must be the same in the two execution threads. In the presence of multiple arms, there exists the additional constraint that the above conditions must be satisfied during, potentially, multiple arm eliminations. Hence, the two questions arising from the above discussion are (i) how to modify the above approach when ∆ is unknown and (ii) how to deal with K > 2 arms.
A potential solution to the second question (on handling K > 2 arms) is the Execute-Then-Commit (ETC) strategy. Consider the stochastic K-arm bandit setting. For any ρ ∈ (0, 1), the ETC algorithm with known ∆ = min_i ∆_i and horizon T that uses m = 4∆^{-2} log(1/ρ) deterministic exploration phases before commitment is ρ-replicable. The intuition is exactly the same as in the K = 2 case. The caveats of this approach are that it assumes that ∆ is known and that the obtained regret is quite unsatisfying. In particular, it achieves regret bounded by m ∑_{i∈[K]} ∆_i + ρ · (T − mK) ∑_{i∈[K]} ∆_i.
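A minimal Python sketch of this ETC strategy follows, assuming the gap ∆ is known; the pull callback and the constant in m are illustrative rather than taken verbatim from the analysis.

import numpy as np

def replicable_etc(pull, K, T, gap, rho):
    # exploration length per arm; the (1/gap^2) * log(1/rho) scaling follows the
    # discussion above, the constant 4 is illustrative
    m = int(np.ceil(4.0 * np.log(1.0 / rho) / gap ** 2))
    sums = np.zeros(K)
    played = []
    for a in range(K):                     # deterministic exploration order
        for _ in range(m):
            sums[a] += pull(a)
            played.append(a)
    best = int(np.argmax(sums))            # commit to the empirical best arm
    played += [best] * max(T - len(played), 0)
    return played[:T]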
Next, we discuss how to improve the regret bound without knowing the gaps ∆i. Before designing new algorithms, we will inspect the guarantees that can be obtained by combining ideas from previous results in the bandits literature and the recent work in replicable learning of Impagliazzo et al. (2022).
2.2 WARM-UP II: BANDIT ALGORITHMS AND REPLICABLE MEAN ESTIMATION
First, we remark that we work in the stochastic setting and the distributions of the rewards of the two arms are subgaussian. Thus, the problem of estimating their mean is an instance of a statistical query for which we can use the algorithm of Impagliazzo et al. (2022) to get a replicable mean estimator for the distributions of the rewards of the arms. Proposition 2 (Replicable Mean Estimation (Impagliazzo et al., 2022)). Let τ, δ, ρ ∈ [0, 1]. There exists a ρ-replicable algorithm ReprMeanEstimation that draws Ω( log(1/δ)/(τ^2 (ρ − δ)^2) ) samples from a distribution with mean µ and computes an estimate µ̂ that satisfies |µ̂ − µ| ≤ τ with probability at least 1 − δ.

Notice that we are working in the regime where δ ≪ ρ, so the sample complexity is Ω( log(1/δ)/(τ^2 ρ^2) ).
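Proposition 2 is used as a black box throughout. As a rough illustration of how such a guarantee can be obtained, the sketch below rounds the empirical mean to a grid whose offset is drawn from the shared internal randomness, so two executions whose estimates are both accurate typically snap to the same value; this is a simplified stand-in, not the exact estimator or constants of Impagliazzo et al. (2022).

import numpy as np

def repr_mean_estimation(samples, tau, shared_rng):
    # empirical mean rounded to a grid of width ~ tau whose offset is drawn
    # from the shared internal randomness; two executions whose empirical
    # means are both tau-accurate usually land on the same grid point
    mu_hat = float(np.mean(samples))
    offset = shared_rng.uniform(0.0, tau)
    return offset + tau * round((mu_hat - offset) / tau)

# two runs: shared internal seed, independent samples
xi1, xi2 = np.random.default_rng(123), np.random.default_rng(123)
data = np.random.default_rng(0)
s1 = data.normal(0.7, 1.0, size=20000)
s2 = data.normal(0.7, 1.0, size=20000)
print(repr_mean_estimation(s1, tau=0.05, shared_rng=xi1),
      repr_mean_estimation(s2, tau=0.05, shared_rng=xi2))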
The straightforward approach is to try to use an optimal multi-armed algorithm for the stochastic setting, such as UCB or arm-elimination (Even-Dar et al., 2006), combined with the replicable mean estimator. However, it is not hard to see that this approach does not give meaningful results: if we want to achieve replicability ρ we need to call the replicable mean estimator routine with parameter ρ/(KT ), due to the union bound that we need to take. This means that we need to pull every arm at least K2T 2 times, so the regret guarantee becomes vacuous. This gives us the first key insight to tackle the problem: we need to reduce the number of calls to the mean estimator. Hence, we will draw inspiration from the line of work in stochastic batched bandits (Gao et al., 2019; Esfandiari et al., 2021) to derive replicable bandit algorithms.
3 REPLICABLE MEAN ESTIMATION FOR BATCHED BANDITS
As a first step, we would like to show how one could combine the existing replicable algorithms of Impagliazzo et al. (2022) with the batched bandits approach of Esfandiari et al. (2021) to get some preliminary non-trivial results. We build an algorithm for the K-arm setting, where the gaps ∆j are unknown to the learner. Let δ be the confidence parameter of the arm elimination algorithm and ρ be the replicability guarantee we want to achieve. Our approach is the following: let us, deterministically, split the time interval into sub-intervals of increasing length. We treat each subinterval as a batch of samples where we pull each active arm the same number of times and use the replicable mean estimation algorithm to, empirically, compute the true mean. At the end of each batch, we decide to eliminate some arm j using the standard UCB estimate. Crucially, if we condition on the event that all the calls to the replicable mean estimator return the same number, then the algorithm we propose is replicable.
Algorithm 1 Mean-Estimation Based Replicable Algorithm for Stochastic MAB (Theorem 3)
1: Input: time horizon T, number of arms K, replicability ρ
2: Initialization: B ← log(T), q ← T^{1/B}, c_0 ← 0, A ← [K], r ← T, µ̂_a ← 0, ∀a ∈ A
3: for i = 1 to B − 1 do
4:   if ⌊q^i⌋ · |A| > r then
5:     break
6:   c_i = c_{i−1} + ⌊q^i⌋
7:   Pull every arm a ∈ A for ⌊q^i⌋ times
8:   for a ∈ A do
9:     µ̂_a ← ReprMeanEstimation(δ = 1/(2KTB), τ = √(log(2KTB)/c_i), ρ′ = ρ/(KB))   ▷ Proposition 2
10:  r ← r − |A| · ⌊q^i⌋
11:  for a ∈ A do
12:    if µ̂_a < max_{a∈A} µ̂_a − 2τ then
13:      Remove a from A
14: In the last batch play the arm from A with the smallest index
Theorem 3. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 1) for the stochastic bandit problem with K arms and gaps (∆j)j∈[K] whose expected regret is
E[R_T] ≤ C · (K^2 log^2(T)/ρ^2) ∑_{j:∆_j>0} ( ∆_j + log(KT log(T))/∆_j ) ,
where C > 0 is an absolute numerical constant, and its running time is polynomial in K,T and 1/ρ.
The above result, whose proof can be found in Appendix A, states that, by combining the tools from Impagliazzo et al. (2022) and Esfandiari et al. (2021), we can design a replicable bandit algorithm with (instance-dependent) expected regret O(K2 log3(T )/ρ2). Notice that the regret guarantee has an extra K2 log2(T )/ρ2 factor compared to its non-replicable counterpart in Esfandiari et al. (2021) (Theorem 5.1). This is because, due to a union bound over the rounds and the arms, we need to call the replicable mean estimator with parameter ρ/(K log(T )). In the next section, we show how to get rid of the log2(T ) by designing a new algorithm.
4 IMPROVED ALGORITHMS FOR REPLICABLE STOCHASTIC BANDITS
While the previous result provides a non-trivial regret bound, it is not optimal with respect to the time horizon T . In this section, we show how to improve it by designing a new algorithm, presented in Algorithm 2, which satisfies the guarantees of Theorem 4 and, essentially, decreases the dependence on the time horizon T from log3(T ) to log(T ). Our main result for replicable stochastic multi-armed bandits with K arms follows. Theorem 4. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 2) for the stochastic bandit problem with K arms and gaps (∆j)j∈[K] whose expected regret is
E[R_T] ≤ C · (K^2/ρ^2) ∑_{j:∆_j>0} ( ∆_j + log(KT log(T))/∆_j ) ,
where C > 0 is an absolute numerical constant, and its running time is polynomial in K,T and 1/ρ.
Note that, compared to the non-replicable setting, we incur an extra factor of K2/ρ2 in the regret. The proof can be found in Appendix B. Let us now describe how Algorithm 2 works. We decompose the time horizon into B = log(T ) batches. Without the replicability constraint, one could draw qi samples in batch i from each arm and estimate the mean reward. With the replicability constraint, we have to boost this: in each batch i, we pull each active arm O(βqi) times, for some q to be determined, where β = O(K2/ρ2) is the replicability blow-up. Using these samples, we compute
Algorithm 2 Replicable Algorithm for Stochastic Multi-Armed Bandits (Theorem 4)
1: Input: time horizon T, number of arms K, replicability ρ
2: Initialization: B ← log(T), q ← T^{1/B}, c_0 ← 0, A_0 ← [K], r ← T, µ̂_a ← 0, ∀a ∈ A_0
3: β ← ⌊max{K^2/ρ^2, 2304}⌋
4: for i = 1 to B − 1 do
5:   if β⌊q^i⌋ · |A_i| > r then
6:     break
7:   A_i ← A_{i−1}
8:   for a ∈ A_i do
9:     Pull arm a for β⌊q^i⌋ times
10:    Compute the empirical mean µ̂^{(i)}_a
11:  c_i ← c_{i−1} + ⌊q^i⌋
12:  c̃_i ← βc_i
13:  Ũ_i ← √(2 ln(2KTB)/c̃_i)
14:  U_i ← √(2 ln(2KTB)/c_i)
15:  Ū_i ← Uni[U_i/2, U_i]
16:  r ← r − β · |A_i| · ⌊q^i⌋
17:  for a ∈ A_i do
18:    if µ̂^{(i)}_a + Ũ_i < max_{a∈A_i} µ̂^{(i)}_a − Ū_i then
19:      Remove a from A_i
20: In the last batch play the arm from A_{B−1} with the smallest index
the empirical mean µ̂(i)α for any active arm α. Note that Ũi in Algorithm 2 corresponds to the size of the actual confidence interval of the estimation and Ui corresponds to the confidence interval of an algorithm that does not use the β-blow-up in the number of samples. The novelty of our approach comes from the choice of the interval around the mean of the maximum arm: we pick a threshold uniformly at random from an interval of size Ui/2 around the maximum mean. Then, the algorithm checks whether µ̂(i)a + Ũi < max µ̂ (i) a′ − U i, where max runs over the active arms a′ in batch i, and eliminates arms accordingly. To prove the result we show that there are three regions that some arm j can be in relative to the confidence interval of the best arm in batch i (cf. Appendix B). If it lies in two of these regions, then the decision of whether to keep it or discard it is the same in both executions of the algorithm. However, if it is in the third region, the decision could be different between parallel executions, and since it relies on some external and unknown randomness, it is not clear how to reason about it. To overcome this issue, we use the random threshold to argue about the probability that the decision between two executions differs. The crucial observation that allows us to get rid of the extra log2(T ) factor is that there are correlations between consecutive batches: we prove that if some arm j lies in this “bad” region in some batch i, then it will be outside this region after a constant number of batches.
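A compact Python rendering of Algorithm 2 is given below (an illustration we provide; rounding and the handling of the final batch are simplified, and the empirical means are cumulative as in the analysis).

import numpy as np

def replicable_mab(pull, K, T, rho, seed):
    xi = np.random.default_rng(seed)              # shared internal randomness
    B = max(int(np.log(T)), 2)
    q = T ** (1.0 / B)
    beta = int(max(K ** 2 / rho ** 2, 2304))
    active = list(range(K))
    sums, n_pulls = np.zeros(K), np.zeros(K)
    c, remaining, played = 0, T, []
    for i in range(1, B):
        batch = int(q ** i)
        if beta * batch * len(active) > remaining:
            break
        for a in active:                          # pull each active arm beta * floor(q^i) times
            for _ in range(beta * batch):
                sums[a] += pull(a)
                n_pulls[a] += 1
                played.append(a)
        c += batch
        remaining -= beta * batch * len(active)
        U_tilde = np.sqrt(2 * np.log(2 * K * T * B) / (beta * c))  # actual CI width
        U = np.sqrt(2 * np.log(2 * K * T * B) / c)                 # width without the blow-up
        U_bar = xi.uniform(U / 2, U)              # randomized elimination threshold (shared)
        mu_hat = sums / np.maximum(n_pulls, 1)
        best = max(mu_hat[a] for a in active)
        active = [a for a in active if mu_hat[a] + U_tilde >= best - U_bar]
    winner = min(active)                          # smallest surviving index
    played += [winner] * (T - len(played))
    return played[:T]

Note that for moderate horizons the β = Θ(K²/ρ²) blow-up exhausts the budget after very few batches, which reflects the fact that the guarantees are meaningful in the large-T regime.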
5 REPLICABLE STOCHASTIC LINEAR BANDITS
We now investigate replicability in the more general setting of stochastic linear bandits. In this setting, each arm is a vector a ∈ Rd belonging to some action set A ⊆ Rd, and there is a parameter θ⋆ ∈ Rd unknown to the player. In round t, the player chooses some action at ∈ A and receives a reward rt = ⟨θ⋆, at⟩ + ηt, where ηt is a zero-mean 1-subgaussian random variable independent of any other source of randomness. This means that E[ηt] = 0 and satisfies E[exp(ληt)] ≤ exp(λ2/2) for any λ ∈ R. For normalization purposes, it is standard to assume that ∥θ⋆∥2 ≤ 1 and supa∈A ∥a∥2 ≤ 1. In the linear setting, the expected regret after T pulls a1, . . . , aT can be written as
E[R_T] = T sup_{a∈A} ⟨θ^⋆, a⟩ − E[ ∑_{t=1}^{T} ⟨θ^⋆, a_t⟩ ] .
In Section 5.1 we provide results for the finite action space case, i.e., when |A| = K. Next, in Section 5.2, we study replicable linear bandit algorithms when dealing with infinite action spaces. In the following, we work in the regime where T ≫ d. We underline that our approach leverages connections of stochastic linear bandits with G-optimal experiment design, core sets constructions, and least-squares estimators. Roughly speaking, the goal of G-optimal design is to find a (small) subset of arms A′, which is called the core set, and define a distribution π over them with the following property: for any ε > 0, δ > 0 pulling only these arms for an appropriate number of times and computing the least-squares estimate θ̂ guarantees that supa∈A⟨a, θ∗− θ̂⟩ ≤ ε, with probability 1−δ. For an extensive discussion, we refer to Chapters 21 and 22 of Lattimore & Szepesvári (2020).
5.1 FINITE ACTION SET
We first introduce a lemma that allows us to reduce the size of the action set that our algorithm has to search over.
Lemma 5 (See Chapters 21 and 22 in Lattimore & Szepesvári (2020)). For any finite action set A that spans Rd and any δ, ε > 0, there exists an algorithm that, in time polynomial in d, computes a multi-set of Θ(d log(1/δ)/ε2+d log log d) actions (possibly with repetitions) such that (i) they span Rd and (ii) if we perform these actions in a batched stochastic d-dimensional linear bandits setting with true parameter θ⋆ ∈ Rd and let θ̂ be the least-squares estimate for θ⋆, then, for any a ∈ A, with probability at least 1− δ, we have
|⟨a, θ^⋆ − θ̂⟩| ≤ ε .

Essentially, the multi-set in Lemma 5 is obtained using an approximate G-optimal design algorithm. Thus, it is crucial to check whether this can be done in a replicable manner. Recall that the above set of distinct actions is called the core set and is the solution of an (approximate) G-optimal design problem. To be more specific, consider a distribution π : A → [0, 1] and define V(π) = ∑_{a∈A} π(a) a a^⊤ ∈ R^{d×d} and g(π) = sup_{a∈A} ∥a∥^2_{V(π)^{-1}}. The distribution π is called a design and the goal of G-optimal design is to find a design that minimizes g. Since the number of actions is finite, this problem reduces to an optimization problem which can be solved efficiently using standard optimization methods (e.g., the Frank-Wolfe method). Since the initialization is the same, the algorithm that finds the optimal (or an approximately optimal) design is replicable under the assumption that the gradients and the projections do not have numerical errors. This perspective is orthogonal to the work of Ahn et al. (2022), that defines reproducibility from a different viewpoint.
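For concreteness, a minimal Frank–Wolfe loop for approximate G-optimal design over a finite arm set is sketched below (our illustration; the step size is the classical Fedorov–Wynn choice and the iteration count is a heuristic, neither is prescribed by the paper).

import numpy as np

def g_optimal_design(arms, iters=200):
    # Frank-Wolfe on the log-det objective; drives g(pi) = max_a ||a||^2_{V(pi)^{-1}}
    # towards d, the Kiefer-Wolfowitz optimum
    n, d = arms.shape
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        V_inv = np.linalg.inv((arms.T * pi) @ arms)
        norms = np.einsum("nd,de,ne->n", arms, V_inv, arms)
        j = int(np.argmax(norms))                        # most "uncertain" arm
        gamma = (norms[j] / d - 1.0) / (norms[j] - 1.0)  # Fedorov-Wynn line-search step
        pi = (1 - gamma) * pi + gamma * np.eye(n)[j]
    return pi

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 4))
pi = g_optimal_design(A)
V_inv = np.linalg.inv((A.T * pi) @ A)
print(np.max(np.einsum("nd,de,ne->n", A, V_inv, A)))     # close to d = 4

Since the loop is deterministic given its initialization, two executions compute the same design, which is exactly the property used above.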
Algorithm 3 Replicable Algorithm for Stochastic Linear Bandits (Theorem 6)
1: Input: number of arms K, time horizon T, replicability ρ
2: Initialization: B ← log(T), q ← (T/c)^{1/B}, A ← [K], r ← T
3: β ← ⌊max{K^2/ρ^2, 2304}⌋
4: for i = 1 to B − 1 do
5:   ε̃_i = √(d log(KT^2)/(βq^i))
6:   ε_i = √(d log(KT^2)/q^i)
7:   n_i = 10 d log(KT^2)/ε_i^2
8:   a_1, . . . , a_{n_i} ← multi-set given by Lemma 5 with parameters δ = 1/(KT^2) and ε = ε̃_i
9:   if n_i > r then
10:    break
11:  Pull every arm a_1, . . . , a_{n_i} and receive rewards r_1, . . . , r_{n_i}
12:  Compute the LSE θ̂_i ← ( ∑_{j=1}^{n_i} a_j a_j^⊤ )^{-1} ( ∑_{j=1}^{n_i} a_j r_j )
13:  ε̄_i ← Uni[ε_i/2, ε_i]
14:  r ← r − n_i
15:  for a ∈ A do
16:    if ⟨a, θ̂_i⟩ + ε̃_i < max_{a∈A} ⟨a, θ̂_i⟩ − ε̄_i then
17:      Remove a from A
18: In the last batch play argmax_{a∈A} ⟨a, θ̂_{B−1}⟩
In our batched bandit algorithm (Algorithm 3), the multi-set of arms a1, . . . , ani computed in each batch is obtained via a deterministic algorithm with runtime poly(K, d), where |A| = K. Hence, the
multi-set will be the same in two different executions of the algorithm. On the other hand, the LSE will not be since it depends on the stochastic rewards. We apply the techniques that we developed in the replicable stochastic MAB setting in order to design our algorithm. Our main result for replicable d-dimensional stochastic linear bandits with K arms follows. For the proof, we refer to Appendix C. Theorem 6. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm for the stochastic ddimensional linear bandit problem with K arms whose expected regret is
E[R_T] ≤ C · (K^2/ρ^2) √(dT log(KT)) ,
where C > 0 is an absolute numerical constant, and its running time is polynomial in d,K, T and 1/ρ.
Note that the best known non-replicable algorithm achieves an upper bound of Õ( √ dT log(K)) and, hence, our algorithm incurs a replicability overhead of order K2/ρ2. The intuition behind the proof is similar to the multi-armed bandit setting in Section 4.
5.2 INFINITE ACTION SET
Let us proceed to the setting where the action set A is unbounded. Unfortunately, even when d = 1, we cannot directly get an algorithm that has satisfactory regret guarantees by discretizing the space and using Algorithm 3. The approach of Esfandiari et al. (2021) is to discretize the action space and use an 1/T -net to cover it, i.e. a set A′ ⊆ A such that for all a ∈ A there exists some a′ ∈ A′ with ||a − a′||2 ≤ 1/T . It is known that there exists such a net of size at most (3T )d (Vershynin, 2018, Corollary 4.2.13). Then, they apply the algorithm for the finite arms setting, increasing their regret guarantee by a factor of √ d. However, our replicable algorithm for this setting contains an additional factor of K2 in the regret bound. Thus, even when d = 1, our regret guarantee is greater than T, so the bound is vacuous. One way to fix this issue and get a sublinear regret guarantee is to use a smaller net. We use a 1/T 1/(4d+2)−net that has size at most (3T ) d 4d+2 and this yields an expected
regret of order O(T^{(4d+1)/(4d+2)} √(d log(T))/ρ^2). For further details, we refer to Appendix D.
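The choice of net resolution can be checked with a few lines of arithmetic (illustrative only): with ε_net = T^{-1/(4d+2)} and K ≈ (3/ε_net)^d, the covering term K²√(dT) and the discretization term T·ε_net have matching exponents in T.

def regret_exponents(d):
    # with eps_net = T^{-1/(4d+2)} and K = (3/eps_net)^d, the covering term
    # K^2 sqrt(T) scales as T^{2d/(4d+2) + 1/2} and the discretization term
    # T * eps_net scales as T^{(4d+1)/(4d+2)}; the two exponents coincide
    covering = 2 * d / (4 * d + 2) + 0.5
    discretization = (4 * d + 1) / (4 * d + 2)
    return covering, discretization

for d in (1, 2, 5, 10):
    print(d, regret_exponents(d))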
Even though the regret guarantee we managed to get using the smaller net of Appendix D is sublinear in T , it is not a satisfactory bound. The next step is to provide an algorithm for the infinite action setting using a replicable LSE subroutine combined with the batching approach of Esfandiari et al. (2021). We will make use of the next lemma. Lemma 7 (Section 21.2 Note 3 of Lattimore & Szepesvári (2020)). There exists a deterministic algorithm that, given an action space A ⊆ Rd, computes a 2-approximate G-optimal design π with a core set of size O(d log log(d)).
We additionally prove the next useful lemma, which, essentially, states that we can assume without loss of generality that every arm in the support of π has mass at least Ω(1/(d log(d))). We refer to Appendix F.1 for the proof. Lemma 8 (Effective Support). Let π be the distribution that corresponds to the 2-approximate optimal G-design of Lemma 7 with input A. Assume that π(a) ≤ c/(d log(d)), where c > 0 is some absolute numerical constant, for some arm a in the core set. Then, we can construct a distribution π̂ such that, for any arm a in the core set, π̂(a) ≥ C/(d log(d)), where C > 0 is an absolute constant, so that it holds
sup_{a′∈A} ∥a′∥^2_{V(π̂)^{-1}} ≤ 4d .
The upcoming lemma is a replicable algorithm for the least-squares estimator and, essentially, builds upon Lemma 7 and Lemma 8. Its proof can be found at Appendix F.2. Lemma 9 (Replicable LSE). Let ρ, ε ∈ (0, 1] and 0 < δ ≤ min{ρ, 1/d}1. Consider an environment of d-dimensional stochastic linear bandits with infinite action space A. Assume that π is a 4- approximate optimal design with associated core set C as computed by Lemma 7 with input A. There exists a ρ-replicable algorithm that pulls each arm a ∈ C a total of
Ω( d^4 log(d/δ) log^2 log(d) log log log(d) / (ε^2 ρ^2) )

times and outputs θ_SQ that satisfies sup_{a∈A} |⟨a, θ_SQ − θ^⋆⟩| ≤ ε , with probability at least 1 − δ.

¹We can handle the case of 0 < δ ≤ d by paying an extra log d factor in the sample complexity.
Algorithm 4 Replicable LSE Algorithm for Stochastic Infinite Action Set (Theorem 10)
1: Input: time horizon T, action set A ⊆ R^d, replicability ρ
2: A′ ← 1/T-net of A
3: Initialization: r ← T, B ← log(T), q ← (T/c)^{1/B}
4: for i = 1 to B − 1 do
5:   q^i denotes the number of pulls of all arms before the replicability blow-up
6:   ε_i = c · d √(log(T)/q^i)
7:   The blow-up is M_i = q^i · d^3 log(d) log^2 log(d) log log log(d) log^2(T)/ρ^2
8:   a_1, . . . , a_{|C_i|} ← core set C_i of the design given by Lemma 7 with parameter A′
9:   if ⌈M_i⌉ > r then
10:    break
11:  Pull every arm a_j for N_i = ⌈M_i⌉/|C_i| rounds and receive rewards r^{(j)}_1, ..., r^{(j)}_{N_i} for j ∈ [|C_i|]
12:  S_i = {(a_j, r^{(j)}_t) : t ∈ [N_i], j ∈ [|C_i|]}
13:  θ̂_i ← ReplicableLSE(S_i, ρ′ = ρ/(dB), δ = 1/(2|A′|T^2), τ = min{ε_i, 1})
14:  r ← r − ⌈M_i⌉
15:  for a ∈ A′ do
16:    if ⟨a, θ̂_i⟩ < max_{a∈A′} ⟨a, θ̂_i⟩ − 2ε_i then
17:      Remove a from A′
18: In the last batch play argmax_{a∈A′} ⟨a, θ̂_{B−1}⟩
19:
20: ReplicableLSE(S, ρ, δ, τ)
21: for a ∈ C do
22:   v(a) ← ReplicableSQ(ϕ : x ∈ R ↦ x ∈ R, S, ρ, δ, τ)   ▷ Impagliazzo et al. (2022)
23: return ( ∑_{j∈[|S|]} a_j a_j^⊤ )^{-1} · ( ∑_{a∈C} a n_a v(a) )
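The ReplicableLSE subroutine can be summarized in a few lines of Python (a sketch; repr_mean below is a hypothetical stand-in for the replicable statistical-query estimator of Impagliazzo et al. (2022), mirroring the rounding idea sketched in Section 2.2).

import numpy as np

def repr_mean(samples, tau, shared_rng):
    # hypothetical stand-in for a replicable statistical-query estimate
    off = shared_rng.uniform(0.0, tau)
    return off + tau * round((float(np.mean(samples)) - off) / tau)

def replicable_lse(core_set, rewards_per_arm, tau, shared_rng):
    d = len(core_set[0])
    V = np.zeros((d, d))
    rhs = np.zeros(d)
    for a, rewards in zip(core_set, rewards_per_arm):
        a = np.asarray(a, dtype=float)
        n_a = len(rewards)
        V += n_a * np.outer(a, a)                  # V = sum over all pulls of a a^T
        v = repr_mean(rewards, tau, shared_rng)    # replicable estimate of <theta*, a>
        rhs += n_a * v * a
    return np.linalg.solve(V, rhs)                 # theta_SQ = V^{-1} sum_a a n_a v(a)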
The main result for the infinite actions’ case, obtained by Algorithm 4, follows. Its proof can be found at Appendix E. Theorem 10. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (Algorithm 4) for the stochastic d-dimensional linear bandit problem with infinite action set whose expected regret is
E[R_T] ≤ C · (d^4 log(d) log^2 log(d) log log log(d)/ρ^2) √T log^{3/2}(T) ,
where C > 0 is an absolute numerical constant, and its running time is polynomial in T d and 1/ρ.
Our algorithm for the infinite arm linear bandit case enjoys an expected regret of order Õ(poly(d) √ T ). We underline that the dependence of the regret on the time horizon is (almost) optimal, and we incur an extra d3 factor in the regret guarantee compared to the non-replicable algorithm of Esfandiari et al. (2021). We now comment on the time complexity of our algorithm. Remark 11. The current implementation of our algorithm requires time exponential in d. However, for a general convex set A, given access to a separation oracle for it and an oracle that computes an (approximate) G-optimal design, we can execute it in polynomial time and with polynomially many calls to the oracle. Notably, when A is a polytope such oracles exist. We underline that computational complexity issues also arise in the traditional setting of linear bandits with an infinite number of arms and the computational overhead that the replicability requirement adds is minimal. For further details, we refer to Appendix G.
6 CONCLUSION AND FUTURE DIRECTIONS
In this paper, we have provided a formal notion of reproducibility/replicability for stochastic bandits and we have developed algorithms for the multi-armed bandit and the linear bandit settings that satisfy this notion and enjoy a small regret decay compared to their non-replicable counterparts. We hope and believe that our paper will inspire future works in replicable algorithms for more complicated interactive learning settings such as reinforcement learning. We also provide experimental evaluation in Appendix H.
7 ACKNOWLEDGEMENTS
Alkis Kalavasis was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the “First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant”, project BALSAM, HFRIFM17-1424. Amin Karbasi acknowledges funding in direct support of this work from NSF (IIS-1845032), ONR (N00014- 19-1-2406), and the AI Institute for Learning-Enabled Optimization at Scale (TILOS). Andreas Krause was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program grant agreement no. 815943 and the Swiss National Science Foundation under NCCR Automation, grant agreement 51NF40 180545. Grigoris Velegkas was supported by NSF (IIS-1845032), an Onassis Foundation PhD Fellowship and a Bodossaki Foundation PhD Fellowship.
A THE PROOF OF THEOREM 3
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 1) for the stochastic bandit problem with K arms and gaps (∆j)j∈[K] whose expected regret is
E[R_T] ≤ C · (K^2 log^2(T)/ρ^2) ∑_{j:∆_j>0} ( ∆_j + log(2KT log(T))/∆_j ) ,
where C > 0 is an absolute numerical constant, and its running time is polynomial in K,T and 1/ρ.
Proof. First, we claim that the algorithm is ρ-replicable: since the elimination decisions are taken in the same iterates and are based solely on the mean estimations, the replicability of the algorithm of Proposition 2 implies the replicability of the whole algorithm. In particular,
Pr[(a1, ..., aT ) ̸= (a′1, ..., a′T )] = Pr[∃i ∈ [B],∃j ∈ [K] : µ̂ (i) j was not replicable] ≤ ρ .
During each batch i, we draw for any active arm ⌊qi⌋ fresh samples for a total of ci samples and use the replicable mean estimation algorithm to estimate its mean. For an active arm, at the end of some batch i ∈ [B], we say that its estimation is “correct” if the estimation of its mean is within√ log(2KTB)/ci from the true mean. Using Proposition 2, the estimation of any active arm at the end of any batch (except possibly the last batch) is correct with probability at least 1− 1/(2KTB) and so, by the union bound, the probability that the estimation is incorrect for some arm at the end of some batch is bounded by 1/T . We remark that when δ < ρ, the sample complexity of Proposition 2 reduces to O(log(1/δ)/(τ2ρ2)). Let E denote the event that our estimates are correct. The total expected regret can be bounded as
E[RT ] ≤ T · 1/T +E[RT |E ] .
It suffices to bound the second term of the RHS and hence we can assume that each gap is correctly estimated within an additive factor of √ log(2KTB)/ci after batch i. First, due to the elimination condition, we get that the best arm is never eliminated. Next, we have that
E[R_T | E] = ∑_{j:∆_j>0} ∆_j E[T_j | E] ,
where Tj is the total number of pulls of arm j. Fix a sub-optimal arm j and assume that i + 1 was the last batch it was active. Since this arm is not eliminated at the end of batch i, and the estimations are correct, we have that
∆_j ≤ √(log(2KTB)/c_i) ,

and so c_i ≤ log(2KTB)/∆_j^2. Hence, the number of pulls to get the desired bound due to Proposition 2 is (since we need to pull an arm c_i/ρ_1^2 times in order to get an estimate at distance √(log(1/δ)/c_i) with probability 1 − δ in a ρ_1-replicable manner when δ < ρ_1)

T_j ≤ c_{i+1}/ρ_1^2 = (q/ρ_1^2)(1 + c_i) ≤ (q/ρ_1^2) · (1 + log(2KTB)/∆_j^2) .
This implies that the total regret is bounded by
E[R_T] ≤ 1 + (q/ρ_1^2) · ∑_{j:∆_j>0} ( ∆_j + log(2KTB)/∆_j ) .
We finally set q = T 1/B and B = log(T ). Moreover, we have that ρ1 = ρ/(KB). These yield
E[R_T] ≤ (K^2 log^2(T)/ρ^2) ∑_{j:∆_j>0} ( ∆_j + log(2KT log(T))/∆_j ) .
This completes the proof.
B THE PROOF OF THEOREM 4
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 2) for the stochastic bandit problem with K arms and gaps (∆j)j∈[K] whose expected regret is
E[R_T] ≤ C · (K^2/ρ^2) ∑_{j:∆_j>0} ( ∆_j + log(KT log(T))/∆_j ) ,
for some absolute numerical constant C > 0, and its running time is polynomial in K,T and 1/ρ.
To give some intuition, we begin with a non tight analysis which, however, provides the main ideas behind the actual proof.
Non-Tight Analysis. Assume that the environment has K arms with unknown means µ_i and let T be the number of rounds. Let B be the total number of batches and β > 1. We set q = T^{1/B}. In each batch i ∈ [B], we pull each arm β⌊q^i⌋ times. Hence, after the i-th batch, we will have drawn c̃_i = ∑_{1≤j≤i} β⌊q^j⌋ independent and identically distributed samples from each arm. Let us also set c_i = ∑_{1≤j≤i} ⌊q^j⌋.
Let us fix i ∈ [B]. Using Hoeffding’s bound for subgaussian concentration, the length of the confidence bound for arm j ∈ [K] that guarantees 1 − δ probability of success (in the sense that the empirical estimate µ̂j will be close to the true µj) is equal to
Ũ_i = √(2 log(1/δ)/c̃_i) ,

when the estimator uses c̃_i samples. Also, let U_i = √(2 log(1/δ)/c_i).
Assume that the active arms at the batch iteration i lie in the set Ai. Consider the estimates {µ̂(i)j }i∈[B],j∈Ai , where µ̂ (i) j is the empirical mean of arm j using c̃i samples. We will eliminate an arm j at the end of the batch iteration i if
µ̂ (i) j + Ũi ≤ max t∈Ai µ̂ (i) t − U i ,
where U i ∼ Uni[Ui/2, Ui]. For the remaining of the proof, we condition on the event E that for every arm j ∈ [K] and every batch i ∈ [B] the true mean is within Ũi from the empirical one. We first argue about the replicability of our algorithm. Consider a fixed round i (end of i-th batch) and a fixed arm j. Let i⋆ be the optimal empirical arm after the i-th batch.
Let µ̂(i) ′ j , µ̂ (i)′ i⋆ the empirical estimates of arms j, i ⋆ after the i-th batch, under some other execution of the algorithm. We condition on the event E ′ for the other execution as well. Notice that |µ̂(i) ′
j − µ̂ (i) j | ≤ 2Ũi, |µ̂ (i)′ i⋆ − µ̂ (i) i⋆ | ≤ 2Ũi. Notice that, since the randomness of U i is shared, if µ̂ (i) j + Ũi ≥ µ̂ (i) i⋆ − U i + 4Ũi, then the arm j will not be eliminated after the i-th batch in some other execution of the algorithm as well. Similarly, if µ̂(i)j + Ũi < µ̂ (i) i⋆ −U i − 4Ũi the the arm j will get eliminated after the i-th batch in some other execution of the algorithm as well. In particular, this means that if µ̂(i)j − 2Ũi > µ̂ (i) i⋆ + Ũi − Ui/2 then the arm j will not get eliminated in some other execution of the algorithm and if µ̂(i)j + 5Ũi < µ̂ (i) i⋆ − Ui then the arm j will also get eliminated in some other execution of the algorithm with probability 1 under the event E ∩ E ′. We call the above two cases good since they preserve replicability. Thus, it suffices to bound the probability that the decision about arm j will be different between the two executions when we are in neither of these cases. Then, the worst case bound due to the mass of the uniform probability measure is
16 √(2 log(1/δ)/c̃_i) / √(2 log(1/δ)/c_i) .

This implies that the probability mass of the bad event is at most 16 √(c_i/c̃_i) = 16 √(1/β). A union bound over all arms and batches yields that the probability that two distinct executions differ in at least one pull is

Pr[(a_1, . . . , a_T) ≠ (a′_1, . . . , a′_T)] ≤ 16KB √(1/β) + 2δ ,
and since δ ≤ ρ it suffices to pick β = 768K2B2/ρ2. We now focus on the regret of our algorithm. Let us set δ = 1/(KTB). Fix a sub-optimal arm j and assume that batch i+ 1 was the last batch that is was active. We obtain that the total number of pulls of this arm is
Tj ≤ c̃i+1 ≤ βq(1 + ci) ≤ βq(1 + 8 log(1/δ)/∆2j )
From the replicability analysis, it suffices to take β of order K^2 log^2(T)/ρ^2 and so

E[R_T] ≤ T · 1/T + E[R_T | E] = 1 + ∑_{j:∆_j>0} ∆_j E[T_j | E] ≤ C · (K^2 log^2(T)/ρ^2) ∑_{j:∆_j>0} ( ∆_j + log(KT log(T))/∆_j ) ,
for some absolute constant C > 0.
Notice that the above analysis, which uses a naive union bound, does not yield the desired regret bound. We next provide a more tight analysis of the same algorithm that achieves the regret bound of Theorem 4.
Improved Analysis (The Proof of Theorem 4) In the previous analysis, we used a union bound over all arms and all batches in order to control the probability of the bad event. However, we can obtain an improved regret bound as follows. Fix a sub-optimal arm i ∈ [K] and let t be the first round that it appears in the bad event. We claim that after a constant number of rounds, this arm will be eliminated. This will shave the O(log2(T )) factor from the regret bound. Essentially, as indicated in the previous proof, the bad event corresponds to the case where the randomness of the cut-off threshold U can influence the decision of whether the algorithm eliminates an arm or not. The intuition is that during the rounds t and t+1, given that the two intervals intersected at round t, we know that the probability that they intersect again is quite small since the interval of the optimal mean is moving upwards, the interval of the sub-optimal mean is concentrating around the guess and the two estimations have been moved by at most a constant times the interval’s length.
Since the bad event occurs at round t, we know that
µ̂ (t) j ∈ [ µ̂ (t) t⋆ − Ut − 5Ũt, µ̂ (t) t⋆ − Ut/2 + 3Ũt ] .
In the above µ̂tt⋆ is the estimate of the optimal mean at round t whose index is denoted by t ⋆. Now assume that the bad event for arm j also occurs at round t+ k. Then, we have that
µ̂ (t+k) j ∈ [ µ̂ (t+k) (t+k)⋆ − Ut+k − 5Ũt+k, µ̂ (t+k) (t+k)⋆ − Ut+k/2 + 3Ũt+k ] .
First, notice that since the concentration inequality under event E holds for rounds t, t+ k we have that µ̂(t+k)j ≤ µ̂ (t) j + Ũt + Ũt+k. Thus, combining it with the above inequalities gives us
µ̂ (t+k) (t+k)⋆ − Ut+k − 5Ũt+k ≤ µ̂ (t+k) j ≤ µ̂ (t) j + Ũt + Ũt+k ≤ µ̂ (t) t⋆ − Ut/2 + 4Ũt + Ũt+k.
We now compare µ̂(t)t⋆ , µ̂ (t+k) (t+k)⋆ . Let o denote the optimal arm. We have that
µ̂ (t+k) (t+k)⋆ ≥ µ̂ (t+k) o ≥ µo − Ũt+k ≥ µt⋆ − Ũt+k ≥ µ̂ (t) t⋆ − Ũt − Ũt+k.
This gives us that
µ̂ (t) t⋆ − Ut+k − 6Ũt+k − Ũt ≤ µ̂ (t+k) (t+k)⋆ − Ut+k − 5Ũt+k.
Thus, we have established that
µ̂ (t) t⋆ − Ut+k − 6Ũt+k − Ũt ≤ µ̂ (t) t⋆ − Ut/2 + 4Ũt + Ũt+k =⇒
Ut+k ≥ Ut/2− 7Ũt+k − 5Ũt ≥ Ut/2− 12Ũt.
Since β ≥ 2304, we get that 12Ũt ≤ Ut/4. Thus, we get that
Ut+k ≥ Ut/4.
Notice that U_{t+k}/U_t = √(c_t/c_{t+k}), thus it immediately follows that

c_t/c_{t+k} ≥ 1/16 ⟹ (q^{t+1} − 1)/(q^{t+k+1} − 1) ≥ 1/16 ⟹ 16 (1 − 1/q^{t+1}) ≥ q^k − 1/q^{t+1} ⟹ q^k ≤ 16 + 1/q^{t+1} ≤ 17 ⟹ k log q ≤ log 17 ⟹ k ≤ 5,
when we pick B = log(T ) batches. Thus, for every arm the bad event can happen at most 6 times, by taking a union bound over the K arms we see that the probability that our algorithm is not replicable is at most O(K √ 1/β), so picking β = Θ(K2/ρ2) suffices to get the result.
C THE PROOF OF THEOREM 6
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 3) for the stochastic d-dimensional linear bandit problem with K arms whose expected regret is
E[R_T] ≤ C · (K^2/ρ^2) √(dT log(KT)) ,
for some absolute numerical constant C > 0, and its running time is polynomial in d,K, T and 1/ρ.
Proof. Let c, C be the numerical constants hidden in Lemma 5, i.e., the size of the multi-set is in the interval [cd log(1/δ)/ε2, Cd log(1/δ)/ε2]. We know that the size of each batch ni ∈ [cqi, Cqi] (see Lemma 5), so by the end of the B − 1 batch we will have less than nB pulls left. Hence, the number of batches is at most B.
We first define the event E that the estimates of all arms after the end of each batch are accurate, i.e., for every active arm a at the beginning of the i-th batch, at the end of the batch we have that∣∣∣〈a, θ̂i − θ⋆〉∣∣∣ ≤ ε̃i. Since δ = 1/(KT 2) and there are at most T batches and K active arms in each batch, a simple union bound shows that E happens with probability at least 1 − 1/T. We condition on the event E throughout the rest of the proof. We now argue about the regret bound of our algorithm. We first show that any optimal arm a∗ will not get eliminated. Indeed, consider any sub-optimal arm a ∈ [K] and any batch i ∈ [B]. Under the event E we have that
⟨a, θ̂i⟩ − ⟨a∗, θ̂i⟩ ≤ (⟨a, θ∗⟩+ ε̃i)− (⟨a∗, θ∗⟩ − ε̃i) < 2ε̃i < εi + εi.
Next, we need to bound the number of times we pull some fixed suboptimal arm a ∈ [K]. We let ∆ = ⟨a∗ − a, θ∗⟩ denote the gap and we let i be the smallest integer such that εi < ∆/4. We claim that this arm will get eliminated by the end of batch i. Indeed,
⟨a∗, θ̂i⟩ − ⟨a, θ̂i⟩ ≥ (⟨a∗, θ̂i⟩ − ε̃i)− (⟨a, θ̂i⟩+ ε̃i) = ∆− 2ε̃i > 4εi − 2ε̃i > ε̃i + εi.
This shows that during any batch i, all the active arms have gap at most 4εi−1. Thus, the regret of the algorithm conditioned on the event E is at most
∑_{i=1}^{B} 4 n_i ε_{i−1} ≤ 4βC ∑_{i=1}^{B} q^i √(d log(KT^2)/q^{i−1}) ≤ 6βCq √(d log(KT)) ∑_{i=0}^{B−1} q^{i/2} ≤ O( β q^{B/2+1} √(d log(KT)) ) = O( (K^2/ρ^2) q^{B/2+1} √(d log(KT)) ) = O( (K^2/ρ^2) q √(dT log(KT)) ) .

Thus, the overall regret is bounded by δ · T + (1 − δ) · O( (K^2/ρ^2) q √(dT log(KT)) ) = O( (K^2/ρ^2) q √(dT log(KT)) ).
We now argue about the replicability of our algorithm. The analysis follows in a similar fashion as in Theorem 4. Let θ̂i, θ̂′i be the LSE after the i-th batch, under two different executions of the algorithm and assume that the set of active arms. We condition on the event E ′ for the other execution as well. Assume that the set of active arms is the same under both executions at the beginning of batch i. Notice that since the set that is guaranteed by Lemma 5 is computed by a deterministic algorithm, both executions will pull the same arms in batch i. Consider a suboptimal arm a and let ai∗ = argmaxa∈A⟨θ̂i, a⟩, a′i∗ = argmaxa∈A⟨θ̂′i, a⟩. Under the event E ∩ E ′ we have that |⟨a, θ̂i − θ̂′i⟩| ≤ 2ε̃i, |⟨ai∗ , θ̂i − θ̂′i⟩| ≤ 2ε̃i, and |⟨a′i∗ , θ̂′i⟩ − ⟨ai∗ , θ̂i⟩| ≤ 2ε̃i. Notice that, since the randomness of εi is shared, if ⟨a, θ̂i⟩ + ε̃i ≥ ⟨ai∗ , θ̂i⟩ − εi + 4ε̃i, then the arm a will not be eliminated after the i-th batch in some other execution of the algorithm as well. Similarly, if ⟨a, θ̂i⟩+ ε̃i < ⟨ai∗ , θ̂i⟩− εi− 4ε̃i the the arm a will get eliminated after the i-th batch in some other execution of the algorithm as well. In particular, this means that if ⟨a, θ̂i⟩−2ε̃i > ⟨ai∗ , θ̂i⟩+ε̃i−εi/2 then the arm a will not get eliminated in some other execution of the algorithm and if ⟨a, θ̂i⟩+5ε̃i < ⟨ai∗ , θ̂i⟩ − εi then the arm j will also get eliminated in some other execution of the algorithm with probability 1 under the event E ∩E ′. Thus, it suffices to bound the probability that the decision about arm j will be different between the two executions when we are in neither of these cases. Then, the worst case bound due to the mass of the uniform probability measure is
16 √(d log(1/δ)/c̃_i) / √(d log(1/δ)/c_i) .
This implies that the probability mass of the bad event is at most 16 √ ci/c̃i = 16 √ 1/β. A naive union bound would require us to pick β = Θ(K2 log2 T/ρ2). We next show to avoid the log2 T factor. Fix a sub-optimal arm a ∈ [K] and let t be the first round that it appears in the bad event. Since the bad event occurs at round t, we know that
⟨a, θ̂t⟩ ∈ [ ⟨at∗ , θ̂t⟩ − εt − 5ε̃t, ⟨at∗ , θ̂t⟩ − εt/2 + 3ε̃t ] .
In the above, at∗ is the optimal arm at round t w.r.t. the LSE. Now assume that the bad event for arm a also occurs at round t+ k. Then, we have that
⟨a, θ̂t+k⟩ ∈ [ ⟨a(t+k)∗ , θ̂t+k⟩ − εt+k − 5ε̃t+k, ⟨a(t+k)∗ , θ̂t+k⟩ − εt/2 + 3ε̃t+k ] .
First, notice that since the concentration inequality under event E holds for rounds t, t+ k we have that ⟨a, θ̂t+k⟩ ≤ ⟨a, θ̂t⟩+ ε̃t + ε̃t+k. Thus, combining it with the above inequalities gives us ⟨a(t+k)∗ , θ̂t+k⟩− εt+k − 5ε̃t+k ≤ ⟨a, θ̂t+k⟩ ≤ ⟨a, θ̂t⟩+ ε̃t + ε̃t+k ≤ ⟨at∗ , θ̂t⟩− εt/2+ 4ε̃t + ε̃t+k. We now compare ⟨at∗ , θ̂t⟩, ⟨a(t+k)∗ , θ̂t+k⟩. Let a∗ denote the optimal arm. We have that ⟨a(t+k)∗ , θ̂t+k⟩ ≥ ⟨a∗, θ̂t+k⟩ ≥ ⟨a∗, θ∗⟩ − ε̃t+k ≥ ⟨at∗ , θ∗⟩ − ε̃t+k ≥ ⟨at∗ , θ̂t⟩ − ε̃t+k − ε̃t.
This gives us that
⟨at∗ , θ̂t⟩ − εt+k − 6ε̃t+k − ε̃t ≤ ⟨a(t+k)∗ , θ̂t+k⟩ − εt+k − 5ε̃t+k. Thus, we have established that
⟨at∗ , θ̂t⟩ − εt+k − 6ε̃t+k − ε̃t ≤ ⟨at∗ , θ̂t⟩ − εt/2 + 4ε̃t + ε̃t+k =⇒ εt+k ≥ εt/2− 7ε̃t+k − 5ε̃t ≥ εt/2− 12ε̃t.
Since β ≥ 2304, we get that 12ε̃t ≤ εt/4. Thus, we get that εt+k ≥ εt/4.
Notice that ε_{t+k}/ε_t = √(q^t/q^{t+k}), thus it immediately follows that

q^t/q^{t+k} ≥ 1/16 ⟹ q^k ≤ 16 ⟹ k log q ≤ log 16 ⟹ k ≤ 4,
when we pick B = log(T ) batches. Thus, for every arm the bad event can happen at most 5 times, by taking a union bound over the K arms we see that the probability that our algorithm is not replicable is at most O(K √ 1/β), so picking β = Θ(K2/ρ2) suffices to get the result.
D NAIVE APPLICATION OF ALGORITHM 3 WITH INFINITE ACTION SPACE
We use a 1/T 1/(4d+2)−net that has size at most (3T ) d 4d+2 . Let A′ be the new set of arms. We then run Algorithm 3 using A′. This gives us the following result, that is proved right after. Corollary 12. Let T ∈ N, ρ ∈ (0, 1]. There is a ρ-replicable algorithm for the stochastic ddimensional linear bandit problem with infinite arms whose expected regret is at most
E[R_T] ≤ C · (T^{(4d+1)/(4d+2)}/ρ^2) √(d log(T)) ,
where C > 0 is an absolute numerical constant.
Proof. Since K ≤ (3T)^{d/(4d+2)}, we have that

T sup_{a∈A′} ⟨a, θ^∗⟩ − E[ ∑_{t=1}^{T} ⟨a_t, θ^∗⟩ ] ≤ O( ((3T)^{2d/(4d+2)}/ρ^2) √(dT log( T (3T)^{d/(4d+2)} )) ) = O( (T^{(4d+1)/(4d+2)}/ρ^2) √(d log(T)) ) .

Comparing to the best arm in A, we have that:
T sup_{a∈A} ⟨a, θ^∗⟩ − E[ ∑_{t=1}^{T} ⟨a_t, θ^∗⟩ ] = ( T sup_{a∈A} ⟨a, θ^∗⟩ − T sup_{a∈A′} ⟨a, θ^∗⟩ ) + ( T sup_{a∈A′} ⟨a, θ^∗⟩ − E[ ∑_{t=1}^{T} ⟨a_t, θ^∗⟩ ] ) .

Our choice of the 1/T^{1/(4d+2)}-net implies that for every a ∈ A there exists some a′ ∈ A′ such that ||a − a′||_2 ≤ 1/T^{1/(4d+2)}. Thus, sup_{a∈A} ⟨a, θ^∗⟩ − sup_{a′∈A′} ⟨a′, θ^∗⟩ ≤ ||a − a′||_2 ||θ^∗||_2 ≤ 1/T^{1/(4d+2)}. Thus, the total regret is at most
T · 1/T^{1/(4d+2)} + O( (T^{(4d+1)/(4d+2)}/ρ^2) √(d log(T)) ) = O( (T^{(4d+1)/(4d+2)}/ρ^2) √(d log(T)) ) .
E THE PROOF OF THEOREM 10
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 4) for the stochastic d-dimensional linear bandit problem with infinite action set whose expected regret is
E[R_T] ≤ C · (d^4 log(d) log^2 log(d) log log log(d)/ρ^2) √T log^{3/2}(T) ,
for some absolute numerical constant C > 0, and its running time is polynomial in T d and 1/ρ.
Proof. First, the algorithm is ρ-replicable since in each batch we use a replicable LSE sub-routine with parameter ρ′ = ρ/B. This implies that
Pr[(a_1, ..., a_T) ≠ (a′_1, ..., a′_T)] = Pr[∃i ∈ [B] : θ̂_i was not replicable] ≤ ρ .

Let us fix a batch iteration i ∈ [B − 1]. Let C_i be the core set computed by Lemma 7. The algorithm first pulls n_i = C d^4 log(d/δ) log^2 log(d) log log log(d)/(ε_i^2 ρ′^2) times each one of the arms of the i-th core set C_i, as indicated by Lemma 9, and computes the LSE θ̂_i in a replicable way using the algorithm of Lemma 9. Let E be the event that over all batches the estimations are correct. We pick δ = 1/(2|A′|T^2) so that this good event does hold with probability at least 1 − 1/T. Our goal is to control the expected regret which can be written as

E[R_T] = T sup_{a∈A} ⟨a, θ^⋆⟩ − E[ ∑_{t=1}^{T} ⟨a_t, θ^⋆⟩ ] .
We have that

T sup_{a∈A} ⟨a, θ^⋆⟩ − T sup_{a′∈A′} ⟨a′, θ^⋆⟩ ≤ 1 ,

since A′ is a deterministic 1/T-net of A. Also, let us set the expected regret of the bounded action sub-problem as

E[R′_T] = T sup_{a′∈A′} ⟨a′, θ^⋆⟩ − E[ ∑_{t=1}^{T} ⟨a_t, θ^⋆⟩ ] .
We can now employ the analysis of the finite arm case. During batch i, any active arm has gap at most 4εi−1, so the instantaneous regret in any round is not more than 4εi−1. The expected regret conditional on the good event E is upper bounded by
E[R′_T | E] ≤ ∑_{i=1}^{B} 4 M_i ε_{i−1} ,
where Mi is the total number of pulls in batch i (using the replicability blow-up) and εi−1 is the error one would achieve by drawing qi samples (ignoring the blow-up). Then, for some absolute constant C > 0, we have that
E[R′_T | E] ≤ ∑_{i=1}^{B} 4 ( q^i · d^3 log(d) log^2 log(d) log log log(d) log^2(T)/ρ^2 ) · √(d^2 log(T)/q^{i−1}) ,
which yields that
E[R′_T | E] ≤ C · (d^4 log(d) log^2 log(d) log log log(d) log(T) √(log(T))/ρ^2) · S ,

where we set

S := ∑_{i=1}^{B} q^i / q^{(i−1)/2} = q^{1/2} ∑_{i=1}^{B} q^{i/2} = q^{(1+B)/2} .

We pick B = log(T) and get that, if q = T^{1/B}, then S = Θ(√T). We remark that this choice of q is valid since ∑_{i=1}^{B} q^i = (q^{B+1} − q)/(q − 1) = Θ(q^B) ≥ Tρ^2/(d^3 log(d) log^2 log(d) log log log(d)) .
Hence, we have that E[R′_T | E] ≤ O( (d^4 log(d) log^2 log(d) log log log(d)/ρ^2) √T log^{3/2}(T) ) .
Note that when E does not hold, we can bound the expected regret by 1/T · T = 1. This implies that the overall regret E[RT ] ≤ 2 + E[R′T |E ] and so it satisfies the desired bound and the proof is complete.
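As a numerical sanity check of the batch bookkeeping above (our own illustration), the snippet below evaluates S = ∑_{i≤B} q^i / q^{(i−1)/2} with q = T^{1/B} and B = log T and compares it to √T.

import numpy as np

for T in (10 ** 4, 10 ** 6, 10 ** 8):
    B = int(np.log(T))
    q = T ** (1.0 / B)
    S = sum(q ** i / q ** ((i - 1) / 2) for i in range(1, B + 1))
    print(T, S / np.sqrt(T))    # the ratio stays bounded, i.e. S = Theta(sqrt(T))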
F DEFERRED LEMMATA
F.1 THE PROOF OF LEMMA 8
Proof. Consider the distribution π that is a 2-approximation to the optimal G-design and has support |C| = O(d log log d). Let C′ be the set of arms in the support such that π(a) ≤ c/(d log d). We consider π̃ = (1 − x)π + x·δ_a, where a ∈ C′ and x will be specified later. Consider now the matrix V(π̃). Using the Sherman–Morrison formula, we have that

V(π̃)^{-1} = (1/(1 − x)) V(π)^{-1} − x V(π)^{-1} a a^⊤ V(π)^{-1} / [ (1 − x)^2 ( 1 + (1/(1 − x)) ||a||^2_{V(π)^{-1}} ) ]
          = (1/(1 − x)) ( V(π)^{-1} − x V(π)^{-1} a a^⊤ V(π)^{-1} / ( 1 − x + ||a||^2_{V(π)^{-1}} ) ) .

Consider any arm a′. Then,

||a′||^2_{V(π̃)^{-1}} = (1/(1 − x)) ||a′||^2_{V(π)^{-1}} − (x/(1 − x)) · (a^⊤ V(π)^{-1} a′)^2 / ( 1 − x + ||a||^2_{V(π)^{-1}} ) ≤ (1/(1 − x)) ||a′||^2_{V(π)^{-1}} .

Note that we apply this transformation at most O(d log log d) times. Let π̂ be the distribution we end up with. We see that

||a′||^2_{V(π̂)^{-1}} ≤ (1/(1 − x))^{c d log log d} ||a′||^2_{V(π)^{-1}} ≤ 2 (1/(1 − x))^{c d log log d} d .

Notice that there is a constant c′ such that when x = c′/(d log d) we have that (1/(1 − x))^{c d log log d} ≤ 2. Moreover, notice that the mass of every arm is at least x(1 − x)^{|C|} ≥ x − |C| x^2 = c′/(d log(d)) − c′′ d log log d/(d^2 log^2(d)) ≥ c/(d log(d)), for some absolute numerical constant c > 0. This concludes the claim.
F.2 THE PROOF OF LEMMA 9
Proof. The proof works when we can treat Ω(⌈d log(1/δ)π(a)/ε2⌉) as Ω(d log(1/δ)π(a)/ε2), i.e., as long as π(a) = Ω(ε2/d log(1/δ)). In the regime we are in, this point is handled thanks to Lemma 8. Combining the following proof with Lemma 8, we can obtain the desired result.
We underline that we work in the fixed design setting: the arms ai are deterministically chosen independently of the rewards ri. Assume that the core set of Lemma 7 is the set C. Fix the multi-set S = {(ai, ri) : i ∈ [M ]}, where each arm a lies in the core set and is pulled na = Θ(π(a)d log(d) log(|C|/δ)/ε2) times2. Hence, we have that
M = ∑_{a∈C} n_a = Θ( d log(d) log(|C|/δ)/ε^2 ) .

Let also V = ∑_{i∈[M]} a_i a_i^⊤. The least-squares estimator can be written as

θ^{(ε)}_{LSE} = V^{-1} ∑_{i∈[M]} a_i r_i = V^{-1} ∑_{a∈C} a ∑_{i∈[n_a]} r_i(a) ,
where each a lies in the core set (deterministically) and ri(a) is the i-th reward generated independently by the linear regression process ⟨θ⋆, a⟩+ξ, where ξ is a fresh zero mean sub-gaussian random variable. Our goal is to reproducibly estimate the value ∑ i∈[na] ri(a) for any a. This is sufficient since two independent executions of the algorithm share the set C and na for any a. Note that the above sum is a random variable. In the following, we condition on the high-probability event that the average reward of the arm a is ε-close to the expected one, i.e., the value ⟨θ⋆, a⟩. This happens with probability at least 1− δ/(2|C|), given Ω(π(a)d log(d) log(|C|/δ)/ε2) samples from arm a ∈ C. In order to guarantee replicability, we will apply a result from Impagliazzo et al. (2022). Since we will union bound over all arms in the core set and |C| = O(d log log(d)) (via Lemma 7), we will make use of a (ρ/|C|)-replicable algorithm that gives an estimate v(a) ∈ R such that
$|\langle\theta^{\star}, a\rangle - v(a)| \le \tau$,
with probability at least $1 - \delta/(2|C|)$. For $\delta < \rho$, the algorithm uses $S_a = \Omega\left(d^{2}\log(d/\delta)\log^{2}\log(d)\log\log\log(d)/(\rho^{2}\tau^{2})\right)$ many samples from the linear regression with fixed arm $a\in C$. Since we have conditioned on the randomness of $r_i(a)$ for any $i$, we get
$\left|\frac{1}{n_a}\sum_{i\in[n_a]} r_i(a) - v(a)\right| \le \left|\frac{1}{n_a}\sum_{i\in[n_a]} r_i(a) - \langle\theta^{\star}, a\rangle\right| + |\langle\theta^{\star}, a\rangle - v(a)| \le \varepsilon + \tau$,
with probability at least $1 - \delta/(2|C|)$. Hence, by repeating this approach for all arms in the core set, we set $\theta_{\mathrm{SQ}} = V^{-1}\sum_{a\in C} a\, n_a\, v(a)$. Let us condition on the randomness of the estimate $\theta^{(\varepsilon)}_{\mathrm{LSE}}$. We have that
$\sup_{a'\in A} |\langle a', \theta_{\mathrm{SQ}} - \theta^{\star}\rangle| \le \sup_{a'\in A} |\langle a', \theta_{\mathrm{SQ}} - \theta^{(\varepsilon)}_{\mathrm{LSE}}\rangle| + \sup_{a'\in A} |\langle a', \theta^{(\varepsilon)}_{\mathrm{LSE}} - \theta^{\star}\rangle|$.
²Recall that $\pi(a) \ge c/(d\log(d))$, for some constant $c > 0$, so the previous expression is $\Omega(\log(|C|/\delta)/\varepsilon^{2})$.
Note that the second term is ε with probability at least 1− δ via Lemma 5. Our next goal is to tune the accuracy τ ∈ (0, 1) so that the first term yields another ε error. For the first term, we have that
$\sup_{a'\in A} |\langle a', \theta_{\mathrm{SQ}} - \theta^{(\varepsilon)}_{\mathrm{LSE}}\rangle| \le \sup_{a'\in A} \left|\left\langle a', V^{-1}\sum_{a\in C} a\, n_a\,(\varepsilon+\tau)\right\rangle\right|$.
Note that $V = \frac{Cd\log(d)\log(|C|/\delta)}{\varepsilon^{2}}\sum_{a\in C}\pi(a)aa^{\top}$ and so $V^{-1} = \frac{\varepsilon^{2}}{Cd\log(d)\log(|C|/\delta)}V(\pi)^{-1}$, for some absolute constant $C>0$. This implies that
$\sup_{a'\in A} |\langle a', \theta_{\mathrm{SQ}} - \theta^{(\varepsilon)}_{\mathrm{LSE}}\rangle| \le (\varepsilon+\tau)\sup_{a'\in A}\left|\left\langle a', \frac{\varepsilon^{2}}{Cd\log(d)\log(|C|/\delta)}V(\pi)^{-1}\sum_{a\in C}\frac{Cd\log(d)\log(|C|/\delta)\pi(a)}{\varepsilon^{2}}\,a\right\rangle\right|$.
Hence, we get that
$\sup_{a'\in A} |\langle a', \theta_{\mathrm{SQ}} - \theta^{(\varepsilon)}_{\mathrm{LSE}}\rangle| \le (\varepsilon+\tau)\sup_{a'\in A}\left|\left\langle a', V(\pi)^{-1}\sum_{a\in C}\pi(a)a\right\rangle\right|$.
Consider a fixed arm $a'\in A$. Then,
$\left|\left\langle a', V(\pi)^{-1}\sum_{a\in C}\pi(a)a\right\rangle\right| \le \sum_{a\in C}\pi(a)\left|\langle a', V(\pi)^{-1}a\rangle\right| \le \sum_{a\in C}\pi(a)\left(1 + \left|\langle a', V(\pi)^{-1}a\rangle\right|^{2}\right) = 1 + \sum_{a\in C}\pi(a)\left|\langle a', V(\pi)^{-1}a\rangle\right|^{2}$
| 1. What is the focus of the paper regarding reproducible bandit algorithms?
2. What are the strengths of the proposed algorithms, particularly in their regret bounds?
3. What are the weaknesses of the paper, especially regarding experimental illustrations and extra dependence on K?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The purpose of this paper is to investigate reproducible bandit algorithms while keeping a reasonable regret guarantee. The paper starts with a notion of reproducibility for a bandit algorithm following the definition of Impagliazzo et al. (2022). It states that a bandit algorithm is reproducible if two different executions of the algorithm would lead to the same sequence of played arms with high probability.
The authors first studied the problem for standard stochastic bandits in Section 3 and proposed a first version of the algorithm based on reproducible mean estimation (Impagliazzo et al., 2022) and a batched bandits algorithm (Esfandiari et al., 2021) (using traditional optimal algorithms like UCB is unlikely to be feasible under the current definition). This first version incurs an extra factor of $O(K^2\log^2(T)/\rho^2)$ compared to its non-reproducible counterpart. The authors then managed to get rid of a $\log^2(T)$ factor in Section 4, which reduces the extra factor to $O(K^2/\rho^2)$.
The authors also managed to propose an algorithm with the same extra $O(K^2/\rho^2)$ factor for linear bandits (finitely-armed) in Section 5.1. A study of the infinitely-armed case is also provided (with an extra $d^2$ factor against its non-reproducible counterpart).
Strengths And Weaknesses
Strength:
The topic is novel and could have an impact on the community.
A first definition of reproducible bandit algorithms is provided and is supported by some non-trivial algorithms.
A very clear thinking process on how the algorithms (and improvements) are proposed.
Correct theoretical contributions: regret bounds provided are non-trivial.
Weakness:
One of my major concerns is that the paper didn't provide any experimental illustration while the paper is talking about reproducibility. I do understand that the authors want to provide a fundamental study of the problem, but reproducibility itself is a very practical-oriented notion for which lack of experiments seems not quite reasonable to me.
Another related point, not necessarily a weak point but rather something arguable, is the extra dependence on $K$ in the regret bound compared to non-reproducible algorithms. It seems that the extra $K^2/\rho^2$ factor is rather difficult to get rid of in the current context. This could particularly cause problems when we have a large arm space in practice. And somehow, reproducibility makes more sense in a large-scale environment. This is a question rather about the validity of the definition itself, in my opinion: does it really make sense in practice? I think the authors could probably provide more discussion on that (alongside some experiments, ideally) rather than stacking theoretical results for different problem settings one by one.
Clarity, Quality, Novelty And Reproducibility
Clarity and quality: The paper is well written. In particular, I appreciate a lot the warm-up part in Section 2, which provided a more comprehensible view of the problem to readers.
Novelty: The topic is new and important. The algorithms draw heavily on Esfandiari et al. (2021), which reduces the novelty a bit, but that is acceptable.
Reproducibility: The reproducibility aspect is not really applicable here since the paper seems to be purely theoretical, which is somewhat disappointing given that the paper itself discusses reproducibility. |
ICLR | Title
Replicable Bandits
Abstract
In this paper, we introduce the notion of replicable policies in the context of stochastic bandits, one of the canonical problems in interactive learning. A policy in the bandit environment is called replicable if it pulls, with high probability, the exact same sequence of arms in two different and independent executions (i.e., under independent reward realizations). We show that not only do replicable policies exist, but also they achieve almost the same optimal (non-replicable) regret bounds in terms of the time horizon. More specifically, in the stochastic multi-armed bandits setting, we develop a policy with an optimal problem-dependent regret bound whose dependence on the replicability parameter is also optimal. Similarly, for stochastic linear bandits (with finitely and infinitely many arms) we develop replicable policies that achieve the best-known problem-independent regret bounds with an optimal dependency on the replicability parameter. Our results show that even though randomization is crucial for the exploration-exploitation trade-off, an optimal balance can still be achieved while pulling the exact same arms in two different rounds of executions.
1 INTRODUCTION
In order for scientific findings to be valid and reliable, the experimental process must be repeatable, and must provide coherent results and conclusions across these repetitions. In fact, lack of reproducibility has been a major issue in many scientific areas; a 2016 survey that appeared in Nature (Baker, 2016a) revealed that more than 70% of researchers failed in their attempt to reproduce another researcher's experiments. What is even more concerning is that over 50% of them failed to reproduce their own findings. Similar concerns have been raised by the machine learning community, e.g., the ICLR 2019 Reproducibility Challenge (Pineau et al., 2019) and NeurIPS 2019 Reproducibility Program (Pineau et al., 2021), due to the exponential increase in the number of publications and concerns about the reliability of the findings.
The aforementioned empirical evidence has recently led to theoretical studies and rigorous definitions of replicability. In particular, the works of Impagliazzo et al. (2022) and Ahn et al. (2022) considered replicability as an algorithmic property through the lens of (offline) learning and convex optimization, respectively. In a similar vein, in the current work, we introduce the notion of replicability in the context of interactive learning and decision making. In particular, we study replicable policy design for the fundamental setting of stochastic bandits.
A multi-armed bandit (MAB) is a one-player game that is played over T rounds where there is a set of different arms/actions A of size |A| = K (in the more general case of linear bandits, we can consider even an infinite number of arms). In each round t = 1, 2, . . . , T , the player pulls an arm at ∈ A and receives a corresponding reward rt. In the stochastic setting, the rewards of each
arm are sampled in each round independently, from some fixed but unknown, distribution supported on [0, 1]. Crucially, each arm has a potentially different reward distribution, but the distribution of each arm is fixed over time. A bandit algorithm A at every round t takes as input the sequence of arm-reward pairs that it has seen so far, i.e., (a1, r1), . . . , (at−1, rt−1), then uses (potentially) some internal randomness ξ to pull an arm at ∈ A and, finally, observes the associated reward rt ∼ Dat . We propose the following natural notion of a replicable bandit algorithm, which is inspired by the definition of Impagliazzo et al. (2022). Intuitively, a bandit algorithm is replicable if two distinct executions of the algorithm, with internal randomness fixed between both runs, but with independent reward realizations, give the exact same sequence of played arms, with high probability. More formally, we have the following definition. Definition 1 (Replicable Bandit Algorithm). Let ρ ∈ [0, 1]. We call a bandit algorithm A ρreplicable in the stochastic setting if for any distribution Daj over [0, 1] of the rewards of the j-th arm aj ∈ A, and for any two executions of A, where the internal randomness ξ is shared across the executions, it holds that
$\Pr_{\xi,\, r^{(1)},\, r^{(2)}}\left[\left(a^{(1)}_1, \ldots, a^{(1)}_T\right) = \left(a^{(2)}_1, \ldots, a^{(2)}_T\right)\right] \ge 1 - \rho$.
Here, $a^{(i)}_t = \mathcal{A}(a^{(i)}_1, r^{(i)}_1, \ldots, a^{(i)}_{t-1}, r^{(i)}_{t-1}; \xi)$ is the $t$-th action taken by the algorithm $\mathcal{A}$ in execution $i\in\{1,2\}$.
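To make the definition concrete, here is a minimal sketch (our own illustration, not code from the paper) that runs a toy greedy policy twice with the same internal seed ξ but independent reward realizations, and checks whether the two arm sequences coincide. The policy, the environment, and the constants are illustrative assumptions only.

import numpy as np

def greedy_policy(T, means, reward_rng, internal_rng):
    # A toy (non-replicable) policy: pull each arm once, then play the empirical
    # leader, breaking ties with the shared internal randomness.
    K = len(means)
    pulls, sums, arms = np.zeros(K), np.zeros(K), []
    for t in range(T):
        if t < K:
            a = t
        else:
            emp = sums / pulls
            best = np.flatnonzero(emp == emp.max())
            a = int(internal_rng.choice(best))   # tie-breaking uses the shared seed xi
        arms.append(a)
        r = reward_rng.normal(means[a], 1.0)     # independent reward realization
        pulls[a] += 1.0
        sums[a] += r
    return arms

means = [0.55, 0.50]
seed_xi = 0                                      # shared internal randomness xi
run1 = greedy_policy(200, means, np.random.default_rng(1), np.random.default_rng(seed_xi))
run2 = greedy_policy(200, means, np.random.default_rng(2), np.random.default_rng(seed_xi))
print("same arm sequence:", run1 == run2)        # frequently False: greedy is not replicable for small rho

Intuitively, the check at the end is exactly the event in Definition 1: the shared seed fixes the internal randomness, and only the reward draws differ between the two runs.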
The reason why we allow for some fixed internal randomness is that the algorithm designer has control over it, e.g., they can use the same seed for their (pseudo)random generator between two executions. Clearly, naively designing a replicable bandit algorithm is not quite challenging. For instance, an algorithm that always pulls the same arm or an algorithm that plays the arms in a particular random sequence determined by the shared random seed ξ are both replicable. The caveat is that the performance of these algorithms in terms of expected regret will be quite poor. In this work, we aim to design bandit algorithms which are replicable and enjoy small expected regret. In the stochastic setting, the (expected) regret after T rounds is defined as
$E[R_T] = T\max_{a\in A}\mu_a - E\left[\sum_{t=1}^{T}\mu_{a_t}\right]$,
where $\mu_a = E_{r\sim D_a}[r]$ is the mean reward for arm $a\in A$. In a similar manner, we can define the regret in the more general setting of linear bandits (see Section 5). Hence, the overarching question in this work is the following:
Is it possible to design replicable bandit algorithms with small expected regret?
At a first glance, one might think that this is not possible, since it looks like replicability contradicts the exploratory behavior that a bandit algorithm should possess. However, our main results answer this question in the affirmative and can be summarized in Table 1.
1.1 RELATED WORK
Reproducibility/Replicability. In this work, we introduce the notion of replicability in the context of interactive learning and, in particular, in the fundamental setting of stochastic bandits. Close to our work, the notion of a replicable algorithm in the context of learning was proposed by Impagliazzo et al. (2022), where it is shown how any statistical query algorithm can be made replicable with a moderate increase in its sample complexity. Using this result, they provide replicable algorithms for finding approximate heavy-hitters, medians, and the learning of half-spaces. Reproducibility has been also considered in the context of optimization by Ahn et al. (2022). We mention that in Ahn et al. (2022) the notion of a replicable algorithm is different from our work and that of Impagliazzo et al. (2022), in the sense that the outputs of two different executions of the algorithm do not need to be exactly the same. From a more application-oriented perspective, Shamir & Lin (2022) study irreproducibility in recommendation systems and propose the use of smooth activations (instead of ReLUs) to improve recommendation reproducibility. In general, the reproducibility crisis is reported in various scientific disciplines Ioannidis (2005); McNutt (2014); Baker (2016b); Goodman et al. (2016); Lucic et al. (2018); Henderson et al. (2018). For more details we refer to the report of the NeurIPS 2019 Reproducibility Program Pineau et al. (2021) and the ICLR 2019 Reproducibility Challenge Pineau et al. (2019).
Bandit Algorithms. Stochastic multi-armed bandits for the general setting without structure have been studied extensively Slivkins (2019); Lattimore & Szepesvári (2020); Bubeck et al. (2012b); Auer et al. (2002); Cesa-Bianchi & Fischer (1998); Kaufmann et al. (2012a); Audibert et al. (2010); Agrawal & Goyal (2012); Kaufmann et al. (2012b). In this setting, the optimum regret achievable is O ( log(T ) ∑ i:∆i>0 ∆−1 ) ; this is achieved, e.g., by the upper confidence bound (UCB) algorithm of Auer et al. (2002). The setting of d-dimensional linear stochastic bandits is also well-explored Dani et al. (2008); Abbasi-Yadkori et al. (2011) under the well-specified linear reward model, achieving (near) optimal problem-independent regret of O(d √ T log(T )) Lattimore & Szepesvári (2020). Note that the best-known lower bound is Ω(d √ T ) Dani et al. (2008) and that the number of arms can, in principle, be unbounded. For a finite number of arms K, the best known upper bound is O( √ dT log(K)) Bubeck et al. (2012a). Our work focuses on the design of replicable bandit algorithms and we hence consider only stochastic environments. In general, there is also extensive work in adversarial bandits and we refer the interested reader to Lattimore & Szepesvári (2020).
Batched Bandits. While sequential bandit problems have been studied for almost a century, there is much interest in the batched setting too. In many settings, like medical trials, one has to take a lot of actions in parallel and observe their rewards later. The works of Auer & Ortner (2010) and CesaBianchi et al. (2013) provided sequential bandit algorithms which can easily work in the batched setting. The works of Gao et al. (2019) and Esfandiari et al. (2021) are focusing exclusively on the batched setting. Our work on replicable bandits builds upon some of the techniques from these two lines of work.
2 STOCHASTIC BANDITS AND REPLICABILITY
In this section, we first highlight the main challenges in order to guarantee replicability and then discuss how the results of Impagliazzo et al. (2022) can be applied in our setting.
2.1 WARM-UP I: NAIVE REPLICABILITY AND CHALLENGES
Let us consider the stochastic two-arm setting (K = 2) and a bandit algorithm A with two independent executions, A1 and A2. The algorithm Ai plays the sequence 1, 2, 1, 2, . . . until some, potentially random, round Ti ∈ N after which one of the two arms is eliminated and, from that point, the algorithm picks the winning arm ji ∈ {1, 2}. The algorithm A is ρ-replicable if and only if T1 = T2 and j1 = j2 with probability 1 − ρ. Assume that |µ1 − µ2| = ∆, where µi is the mean of the distribution of the i-th arm. If we assume that ∆ is known, then we can run the algorithm for T1 = T2 = (C/∆²) log(1/ρ) for some universal constant C > 0 and obtain that, with probability 1 − ρ, it will hold that µ̂₁⁽ʲ⁾ ≈ µ1 and µ̂₂⁽ʲ⁾ ≈ µ2
for j ∈ {1, 2}, where µ̂ᵢ⁽ʲ⁾ is the estimate of arm i's mean during execution j. Hence, knowing ∆ implies that the stopping criterion of the algorithm A is deterministic and that, with high probability, the winning arm will be detected at time T1 = T2. This will make the algorithm ρ-replicable.
Observe that when K = 2, the only obstacle to replicability is that the algorithm should decide at the same time to select the winning arm and the selection must be the same in the two execution threads. In the presence of multiple arms, there exists the additional constraint that the above conditions must be satisfied during, potentially, multiple arm eliminations. Hence, the two questions arising from the above discussion are (i) how to modify the above approach when ∆ is unknown and (ii) how to deal with K > 2 arms.
A potential solution to the second question (on handling K > 2 arms) is the Explore-Then-Commit (ETC) strategy. Consider the stochastic K-arm bandit setting. For any ρ ∈ (0, 1), the ETC algorithm with known ∆ = min_i ∆_i and horizon T that uses m = (4/∆²) log(1/ρ) deterministic exploration phases before commitment is ρ-replicable. The intuition is exactly the same as in the K = 2 case. The caveats of this approach are that it assumes that ∆ is known and that the obtained regret is quite unsatisfying. In particular, it achieves regret bounded by $m\sum_{i\in[K]}\Delta_i + \rho\,(T - mK)\sum_{i\in[K]}\Delta_i$.
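As a quick illustration of the ETC strategy just described, the following sketch (ours; the helper pull_arm, the environment, and the exact constants are assumptions) explores each arm for a deterministic number of rounds computed from the known gap ∆ and then commits, so the exploration schedule is identical across executions and only the committed arm can differ, with small probability.

import numpy as np

def explore_then_commit(T, K, gap, rho, pull_arm):
    # Deterministic exploration length from the known minimum gap Delta:
    # both executions stop exploring at exactly the same round.
    m = int(np.ceil(4.0 * np.log(1.0 / rho) / gap**2))
    means = np.zeros(K)
    arm_sequence = []
    for a in range(K):
        rewards = [pull_arm(a) for _ in range(m)]
        means[a] = float(np.mean(rewards))
        arm_sequence += [a] * m
    winner = int(np.argmax(means))    # same across executions w.p. roughly 1 - rho (heuristic constant)
    arm_sequence += [winner] * max(T - K * m, 0)
    return arm_sequence

rng = np.random.default_rng(0)
mu = [0.7, 0.5, 0.4]
seq = explore_then_commit(T=5000, K=3, gap=0.1, rho=0.05, pull_arm=lambda a: rng.binomial(1, mu[a]))
print(len(seq), seq[-1])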
Next, we discuss how to improve the regret bound without knowing the gaps ∆i. Before designing new algorithms, we will inspect the guarantees that can be obtained by combining ideas from previous results in the bandits literature and the recent work in replicable learning of Impagliazzo et al. (2022).
2.2 WARM-UP II: BANDIT ALGORITHMS AND REPLICABLE MEAN ESTIMATION
First, we remark that we work in the stochastic setting and the distributions of the rewards of the two arms are subgaussian. Thus, the problem of estimating their mean is an instance of a statistical query for which we can use the algorithm of Impagliazzo et al. (2022) to get a replicable mean estimator for the distributions of the rewards of the arms. Proposition 2 (Replicable Mean Estimation (Impagliazzo et al., 2022)). Let τ, δ, ρ ∈ [0, 1]. There exists a ρ-replicable algorithm ReprMeanEstimation that draws Ω ( log(1/δ) τ2(ρ−δ)2 ) samples from a distribution with mean µ and computes an estimate µ̂ that satisfies |µ̂ − µ| ≤ τ with probability at least 1− δ.
Notice that we are working in the regime where $\delta \ll \rho$, so the sample complexity is $\Omega\left(\frac{\log(1/\delta)}{\tau^{2}\rho^{2}}\right)$.
The straightforward approach is to try to use an optimal multi-armed algorithm for the stochastic setting, such as UCB or arm-elimination (Even-Dar et al., 2006), combined with the replicable mean estimator. However, it is not hard to see that this approach does not give meaningful results: if we want to achieve replicability ρ we need to call the replicable mean estimator routine with parameter ρ/(KT ), due to the union bound that we need to take. This means that we need to pull every arm at least K2T 2 times, so the regret guarantee becomes vacuous. This gives us the first key insight to tackle the problem: we need to reduce the number of calls to the mean estimator. Hence, we will draw inspiration from the line of work in stochastic batched bandits (Gao et al., 2019; Esfandiari et al., 2021) to derive replicable bandit algorithms.
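For intuition on the primitive behind Proposition 2, the sketch below shows one standard way to make a mean estimate replicable: round the empirical mean to a grid whose random offset comes from the shared internal randomness. This is only in the spirit of Impagliazzo et al. (2022); the exact procedure, constants, and guarantees there differ, and the names below are ours.

import numpy as np

def repr_mean_estimation(samples, tau, internal_rng):
    # Round the empirical mean to a randomly shifted grid; the offset is the
    # shared internal randomness, so two executions agree whenever their
    # empirical means land in the same grid cell.
    mu_hat = float(np.mean(samples))
    width = 2.0 * tau                                # grid spacing on the order of the target accuracy
    offset = internal_rng.uniform(0.0, width)        # shared across the two executions
    return np.floor((mu_hat - offset) / width) * width + offset + width / 2.0

seed_xi = 7
s1 = np.random.default_rng(1).normal(0.3, 1.0, size=20000)
s2 = np.random.default_rng(2).normal(0.3, 1.0, size=20000)
est1 = repr_mean_estimation(s1, 0.05, np.random.default_rng(seed_xi))
est2 = repr_mean_estimation(s2, 0.05, np.random.default_rng(seed_xi))
print(est1 == est2)   # True with high probability over the reward realizations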
3 REPLICABLE MEAN ESTIMATION FOR BATCHED BANDITS
As a first step, we would like to show how one could combine the existing replicable algorithms of Impagliazzo et al. (2022) with the batched bandits approach of Esfandiari et al. (2021) to get some preliminary non-trivial results. We build an algorithm for the K-arm setting, where the gaps ∆j are unknown to the learner. Let δ be the confidence parameter of the arm elimination algorithm and ρ be the replicability guarantee we want to achieve. Our approach is the following: let us, deterministically, split the time interval into sub-intervals of increasing length. We treat each subinterval as a batch of samples where we pull each active arm the same number of times and use the replicable mean estimation algorithm to, empirically, compute the true mean. At the end of each batch, we decide to eliminate some arm j using the standard UCB estimate. Crucially, if we condition on the event that all the calls to the replicable mean estimator return the same number, then the algorithm we propose is replicable.
Algorithm 1 Mean-Estimation Based Replicable Algorithm for Stochastic MAB (Theorem 3)
1: Input: time horizon T, number of arms K, replicability ρ
2: Initialization: B ← log(T), q ← T^{1/B}, c_0 ← 0, A ← [K], r ← T, µ̂_a ← 0, ∀a ∈ A
3: for i = 1 to B − 1 do
4:   if ⌊q^i⌋ · |A| > r then
5:     break
6:   c_i ← c_{i−1} + ⌊q^i⌋
7:   Pull every arm a ∈ A for ⌊q^i⌋ times
8:   for a ∈ A do
9:     µ̂_a ← ReprMeanEstimation(δ = 1/(2KTB), τ = √(log(2KTB)/c_i), ρ′ = ρ/(KB)) ▷ Proposition 2
10:  r ← r − |A| · ⌊q^i⌋
11:  for a ∈ A do
12:    if µ̂_a < max_{a∈A} µ̂_a − 2τ then
13:      Remove a from A
14: In the last batch play the arm from A with the smallest index
Theorem 3. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 1) for the stochastic bandit problem with K arms and gaps (∆j)j∈[K] whose expected regret is
$E[R_T] \le C \cdot \frac{K^{2}\log^{2}(T)}{\rho^{2}}\sum_{j:\Delta_j>0}\left(\Delta_j + \frac{\log(KT\log(T))}{\Delta_j}\right)$,
where C > 0 is an absolute numerical constant, and its running time is polynomial in K,T and 1/ρ.
The above result, whose proof can be found in Appendix A, states that, by combining the tools from Impagliazzo et al. (2022) and Esfandiari et al. (2021), we can design a replicable bandit algorithm with (instance-dependent) expected regret O(K2 log3(T )/ρ2). Notice that the regret guarantee has an extra K2 log2(T )/ρ2 factor compared to its non-replicable counterpart in Esfandiari et al. (2021) (Theorem 5.1). This is because, due to a union bound over the rounds and the arms, we need to call the replicable mean estimator with parameter ρ/(K log(T )). In the next section, we show how to get rid of the log2(T ) by designing a new algorithm.
4 IMPROVED ALGORITHMS FOR REPLICABLE STOCHASTIC BANDITS
While the previous result provides a non-trivial regret bound, it is not optimal with respect to the time horizon T . In this section, we show how to improve it by designing a new algorithm, presented in Algorithm 2, which satisfies the guarantees of Theorem 4 and, essentially, decreases the dependence on the time horizon T from log3(T ) to log(T ). Our main result for replicable stochastic multi-armed bandits with K arms follows. Theorem 4. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 2) for the stochastic bandit problem with K arms and gaps (∆j)j∈[K] whose expected regret is
$E[R_T] \le C \cdot \frac{K^{2}}{\rho^{2}}\sum_{j:\Delta_j>0}\left(\Delta_j + \frac{\log(KT\log(T))}{\Delta_j}\right)$,
where C > 0 is an absolute numerical constant, and its running time is polynomial in K,T and 1/ρ.
Note that, compared to the non-replicable setting, we incur an extra factor of K2/ρ2 in the regret. The proof can be found in Appendix B. Let us now describe how Algorithm 2 works. We decompose the time horizon into B = log(T ) batches. Without the replicability constraint, one could draw qi samples in batch i from each arm and estimate the mean reward. With the replicability constraint, we have to boost this: in each batch i, we pull each active arm O(βqi) times, for some q to be determined, where β = O(K2/ρ2) is the replicability blow-up. Using these samples, we compute
Algorithm 2 Replicable Algorithm for Stochastic Multi-Armed Bandits (Theorem 4)
1: Input: time horizon T, number of arms K, replicability ρ
2: Initialization: B ← log(T), q ← T^{1/B}, c_0 ← 0, A_0 ← [K], r ← T, µ̂_a ← 0, ∀a ∈ A_0
3: β ← ⌊max{K²/ρ², 2304}⌋
4: for i = 1 to B − 1 do
5:   if β⌊q^i⌋ · |A_i| > r then
6:     break
7:   A_i ← A_{i−1}
8:   for a ∈ A_i do
9:     Pull arm a for β⌊q^i⌋ times
10:    Compute the empirical mean µ̂_a^{(i)}
11:  c_i ← c_{i−1} + ⌊q^i⌋
12:  c̃_i ← β c_i
13:  Ũ_i ← √(2 ln(2KTB)/c̃_i)
14:  U_i ← √(2 ln(2KTB)/c_i)
15:  Ū_i ← Uni[U_i/2, U_i]
16:  r ← r − β · |A_i| · ⌊q^i⌋
17:  for a ∈ A_i do
18:    if µ̂_a^{(i)} + Ũ_i < max_{a∈A_i} µ̂_a^{(i)} − Ū_i then
19:      Remove a from A_i
20: In the last batch play the arm from A_{B−1} with the smallest index
the empirical mean µ̂(i)α for any active arm α. Note that Ũi in Algorithm 2 corresponds to the size of the actual confidence interval of the estimation and Ui corresponds to the confidence interval of an algorithm that does not use the β-blow-up in the number of samples. The novelty of our approach comes from the choice of the interval around the mean of the maximum arm: we pick a threshold uniformly at random from an interval of size Ui/2 around the maximum mean. Then, the algorithm checks whether µ̂(i)a + Ũi < max µ̂ (i) a′ − U i, where max runs over the active arms a′ in batch i, and eliminates arms accordingly. To prove the result we show that there are three regions that some arm j can be in relative to the confidence interval of the best arm in batch i (cf. Appendix B). If it lies in two of these regions, then the decision of whether to keep it or discard it is the same in both executions of the algorithm. However, if it is in the third region, the decision could be different between parallel executions, and since it relies on some external and unknown randomness, it is not clear how to reason about it. To overcome this issue, we use the random threshold to argue about the probability that the decision between two executions differs. The crucial observation that allows us to get rid of the extra log2(T ) factor is that there are correlations between consecutive batches: we prove that if some arm j lies in this “bad” region in some batch i, then it will be outside this region after a constant number of batches.
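A minimal sketch of this per-batch elimination rule follows (our own code; variable names and the example numbers are assumptions): the two confidence widths are computed from the same counts, the threshold Ū_i is drawn from the shared internal randomness, and only arms whose upper bound falls below the best mean minus Ū_i are dropped.

import numpy as np

def eliminate_batch(mu_hat, c_i, beta, K, T, B, internal_rng):
    # mu_hat: dict mapping active arm -> empirical mean after batch i
    c_tilde = beta * c_i
    U_tilde = np.sqrt(2.0 * np.log(2 * K * T * B) / c_tilde)   # width with the beta blow-up
    U = np.sqrt(2.0 * np.log(2 * K * T * B) / c_i)             # width without the blow-up
    U_bar = internal_rng.uniform(U / 2.0, U)                    # shared across executions
    best = max(mu_hat.values())
    return {a for a, m in mu_hat.items() if m + U_tilde >= best - U_bar}

# Example: 3 active arms after a batch with c_i = 100 pulls per arm (pre blow-up).
rng = np.random.default_rng(0)
active = eliminate_batch({0: 0.71, 1: 0.69, 2: 0.40}, c_i=100, beta=2304,
                         K=3, T=10**4, B=14, internal_rng=rng)
print(active)

Because Ū_i is drawn once from the shared randomness, two executions only disagree about an arm when its empirical mean lands inside the random cut-off window, which is the "bad" region controlled in the proof.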
5 REPLICABLE STOCHASTIC LINEAR BANDITS
We now investigate replicability in the more general setting of stochastic linear bandits. In this setting, each arm is a vector a ∈ Rd belonging to some action set A ⊆ Rd, and there is a parameter θ⋆ ∈ Rd unknown to the player. In round t, the player chooses some action at ∈ A and receives a reward rt = ⟨θ⋆, at⟩ + ηt, where ηt is a zero-mean 1-subgaussian random variable independent of any other source of randomness. This means that E[ηt] = 0 and satisfies E[exp(ληt)] ≤ exp(λ2/2) for any λ ∈ R. For normalization purposes, it is standard to assume that ∥θ⋆∥2 ≤ 1 and supa∈A ∥a∥2 ≤ 1. In the linear setting, the expected regret after T pulls a1, . . . , aT can be written as
$E[R_T] = T\sup_{a\in A}\langle\theta^{\star}, a\rangle - E\left[\sum_{t=1}^{T}\langle\theta^{\star}, a_t\rangle\right]$.
In Section 5.1 we provide results for the finite action space case, i.e., when |A| = K. Next, in Section 5.2, we study replicable linear bandit algorithms when dealing with infinite action spaces. In the following, we work in the regime where T ≫ d. We underline that our approach leverages connections of stochastic linear bandits with G-optimal experiment design, core sets constructions, and least-squares estimators. Roughly speaking, the goal of G-optimal design is to find a (small) subset of arms A′, which is called the core set, and define a distribution π over them with the following property: for any ε > 0, δ > 0 pulling only these arms for an appropriate number of times and computing the least-squares estimate θ̂ guarantees that supa∈A⟨a, θ∗− θ̂⟩ ≤ ε, with probability 1−δ. For an extensive discussion, we refer to Chapters 21 and 22 of Lattimore & Szepesvári (2020).
5.1 FINITE ACTION SET
We first introduce a lemma that allows us to reduce the size of the action set that our algorithm has to search over.
Lemma 5 (See Chapters 21 and 22 in Lattimore & Szepesvári (2020)). For any finite action set A that spans Rd and any δ, ε > 0, there exists an algorithm that, in time polynomial in d, computes a multi-set of Θ(d log(1/δ)/ε2+d log log d) actions (possibly with repetitions) such that (i) they span Rd and (ii) if we perform these actions in a batched stochastic d-dimensional linear bandits setting with true parameter θ⋆ ∈ Rd and let θ̂ be the least-squares estimate for θ⋆, then, for any a ∈ A, with probability at least 1− δ, we have
$\left|\langle a, \theta^{\star} - \hat\theta\rangle\right| \le \varepsilon$.
Essentially, the multi-set in Lemma 5 is obtained using an approximate G-optimal design algorithm. Thus, it is crucial to check whether this can be done in a replicable manner. Recall that the above set of distinct actions is called the core set and is the solution of an (approximate) G-optimal design problem. To be more specific, consider a distribution $\pi : A \to [0,1]$ and define $V(\pi) = \sum_{a\in A}\pi(a)aa^{\top} \in \mathbb{R}^{d\times d}$ and $g(\pi) = \sup_{a\in A}\|a\|^{2}_{V(\pi)^{-1}}$. The distribution $\pi$ is called a design and the goal of G-optimal design is to find a design that minimizes $g$. Since the number of actions is finite, this problem reduces to an optimization problem which can be solved efficiently using standard optimization methods (e.g., the Frank-Wolfe method). Since the initialization is the same, the algorithm that finds the optimal (or an approximately optimal) design is replicable under the assumption that the gradients and the projections do not have numerical errors. This perspective is orthogonal to the work of Ahn et al. (2022), that defines reproducibility from a different viewpoint.
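As an illustration of the Frank-Wolfe computation mentioned above, the following sketch (ours; the step-size rule and the fixed iteration count are simplified assumptions) iterates the standard Fedorov-Wynn update on a finite action set. Because the initialization and updates are deterministic, two executions return the same design, which is what the replicability argument relies on.

import numpy as np

def approx_g_optimal_design(A, iters=200):
    # A: (K, d) array of arm feature vectors spanning R^d
    K, d = A.shape
    pi = np.full(K, 1.0 / K)                       # deterministic initialization
    for _ in range(iters):
        V = A.T @ (A * pi[:, None])                # V(pi) = sum_a pi(a) a a^T
        Vinv = np.linalg.inv(V)
        g = np.einsum("kd,de,ke->k", A, Vinv, A)   # ||a||^2_{V(pi)^{-1}} for every arm
        a_star = int(np.argmax(g))
        gamma = (g[a_star] / d - 1.0) / (g[a_star] - 1.0)   # classical Frank-Wolfe step size
        pi = (1.0 - gamma) * pi
        pi[a_star] += gamma
    return pi

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 4))
pi = approx_g_optimal_design(A)
print(pi.round(3), pi.sum())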
Algorithm 3 Replicable Algorithm for Stochastic Linear Bandits (Theorem 6)
1: Input: number of arms K, time horizon T, replicability ρ
2: Initialization: B ← log(T), q ← (T/c)^{1/B}, A ← [K], r ← T
3: β ← ⌊max{K²/ρ², 2304}⌋
4: for i = 1 to B − 1 do
5:   ε̃_i = √(d log(KT²)/(β q^i))
6:   ε_i = √(d log(KT²)/q^i)
7:   n_i = 10 d log(KT²)/ε_i²
8:   a_1, . . . , a_{n_i} ← multi-set given by Lemma 5 with parameters δ = 1/(KT²) and ε = ε̃_i
9:   if n_i > r then
10:    break
11:  Pull every arm a_1, . . . , a_{n_i} and receive rewards r_1, . . . , r_{n_i}
12:  Compute the LSE θ̂_i ← (∑_{j=1}^{n_i} a_j a_j^⊤)^{−1} (∑_{j=1}^{n_i} a_j r_j)
13:  ε̄_i ← Uni[ε_i/2, ε_i]
14:  r ← r − n_i
15:  for a ∈ A do
16:    if ⟨a, θ̂_i⟩ + ε̃_i < max_{a∈A} ⟨a, θ̂_i⟩ − ε̄_i then
17:      Remove a from A
18: In the last batch play argmax_{a∈A} ⟨a, θ̂_{B−1}⟩
In our batched bandit algorithm (Algorithm 3), the multi-set of arms a1, . . . , ani computed in each batch is obtained via a deterministic algorithm with runtime poly(K, d), where |A| = K. Hence, the
multi-set will be the same in two different executions of the algorithm. On the other hand, the LSE will not be since it depends on the stochastic rewards. We apply the techniques that we developed in the replicable stochastic MAB setting in order to design our algorithm. Our main result for replicable d-dimensional stochastic linear bandits with K arms follows. For the proof, we refer to Appendix C. Theorem 6. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm for the stochastic ddimensional linear bandit problem with K arms whose expected regret is
$E[R_T] \le C \cdot \frac{K^{2}}{\rho^{2}}\sqrt{dT\log(KT)}$,
where C > 0 is an absolute numerical constant, and its running time is polynomial in d,K, T and 1/ρ.
Note that the best known non-replicable algorithm achieves an upper bound of Õ( √ dT log(K)) and, hence, our algorithm incurs a replicability overhead of order K2/ρ2. The intuition behind the proof is similar to the multi-armed bandit setting in Section 4.
5.2 INFINITE ACTION SET
Let us proceed to the setting where the action set A is unbounded. Unfortunately, even when d = 1, we cannot directly get an algorithm that has satisfactory regret guarantees by discretizing the space and using Algorithm 3. The approach of Esfandiari et al. (2021) is to discretize the action space and use an 1/T -net to cover it, i.e. a set A′ ⊆ A such that for all a ∈ A there exists some a′ ∈ A′ with ||a − a′||2 ≤ 1/T . It is known that there exists such a net of size at most (3T )d (Vershynin, 2018, Corollary 4.2.13). Then, they apply the algorithm for the finite arms setting, increasing their regret guarantee by a factor of √ d. However, our replicable algorithm for this setting contains an additional factor of K2 in the regret bound. Thus, even when d = 1, our regret guarantee is greater than T, so the bound is vacuous. One way to fix this issue and get a sublinear regret guarantee is to use a smaller net. We use a 1/T 1/(4d+2)−net that has size at most (3T ) d 4d+2 and this yields an expected
regret of order $O\big(T^{(4d+1)/(4d+2)}\sqrt{d\log(T)}/\rho^{2}\big)$. For further details, we refer to Appendix D.
Even though the regret guarantee we managed to get using the smaller net of Appendix D is sublinear in T , it is not a satisfactory bound. The next step is to provide an algorithm for the infinite action setting using a replicable LSE subroutine combined with the batching approach of Esfandiari et al. (2021). We will make use of the next lemma. Lemma 7 (Section 21.2 Note 3 of Lattimore & Szepesvári (2020)). There exists a deterministic algorithm that, given an action space A ⊆ Rd, computes a 2-approximate G-optimal design π with a core set of size O(d log log(d)).
We additionally prove the next useful lemma, which, essentially, states that we can assume without loss of generality that every arm in the support of π has mass at least Ω(1/(d log(d))). We refer to Appendix F.1 for the proof. Lemma 8 (Effective Support). Let π be the distribution that corresponds to the 2-approximate optimal G-design of Lemma 7 with input A. Assume that π(a) ≤ c/(d log(d)), where c > 0 is some absolute numerical constant, for some arm a in the core set. Then, we can construct a distribution π̂ such that, for any arm a in the core set, π̂(a) ≥ C/(d log(d)), where C > 0 is an absolute constant, so that it holds
$\sup_{a'\in A}\|a'\|^{2}_{V(\hat\pi)^{-1}} \le 4d$.
The upcoming lemma is a replicable algorithm for the least-squares estimator and, essentially, builds upon Lemma 7 and Lemma 8. Its proof can be found at Appendix F.2. Lemma 9 (Replicable LSE). Let ρ, ε ∈ (0, 1] and 0 < δ ≤ min{ρ, 1/d}1. Consider an environment of d-dimensional stochastic linear bandits with infinite action space A. Assume that π is a 4- approximate optimal design with associated core set C as computed by Lemma 7 with input A. There exists a ρ-replicable algorithm that pulls each arm a ∈ C a total of
$\Omega\left(\frac{d^{4}\log(d/\delta)\log^{2}\log(d)\log\log\log(d)}{\varepsilon^{2}\rho^{2}}\right)$
times and outputs $\theta_{\mathrm{SQ}}$ that satisfies $\sup_{a\in A}|\langle a, \theta_{\mathrm{SQ}} - \theta^{\star}\rangle| \le \varepsilon$, with probability at least $1-\delta$.
¹We can handle the case of 0 < δ ≤ d by paying an extra log d factor in the sample complexity.
Algorithm 4 Replicable LSE Algorithm for Stochastic Infinite Action Set (Theorem 10)
1: Input: time horizon T, action set A ⊆ R^d, replicability ρ
2: A′ ← 1/T-net of A
3: Initialization: r ← T, B ← log(T), q ← (T/c)^{1/B}
4: for i = 1 to B − 1 do
5:   q^i denotes the number of pulls of all arms before the replicability blow-up
6:   ε_i = c · d · √(log(T)/q^i)
7:   The blow-up is M_i = q^i · d³ log(d) log² log(d) log log log(d) log²(T)/ρ²
8:   a_1, . . . , a_{|C_i|} ← core set C_i of the design given by Lemma 7 with parameter A′
9:   if ⌈M_i⌉ > r then
10:    break
11:  Pull every arm a_j for N_i = ⌈M_i⌉/|C_i| rounds and receive rewards r_1^{(j)}, ..., r_{N_i}^{(j)} for j ∈ [|C_i|]
12:  S_i = {(a_j, r_t^{(j)}) : t ∈ [N_i], j ∈ [|C_i|]}
13:  θ̂_i ← ReplicableLSE(S_i, ρ′ = ρ/(dB), δ = 1/(2|A′|T²), τ = min{ε_i, 1})
14:  r ← r − ⌈M_i⌉
15:  for a ∈ A′ do
16:    if ⟨a, θ̂_i⟩ < max_{a∈A′} ⟨a, θ̂_i⟩ − 2ε_i then
17:      Remove a from A′
18: In the last batch play argmax_{a∈A′} ⟨a, θ̂_{B−1}⟩
19:
20: ReplicableLSE(S, ρ, δ, τ)
21: for a ∈ C do
22:   v(a) ← ReplicableSQ(ϕ : x ∈ R ↦ x ∈ R, S, ρ, δ, τ) ▷ Impagliazzo et al. (2022)
23: return (∑_{j∈[|S|]} a_j a_j^⊤)^{−1} · (∑_{a∈C} a · n_a · v(a))
The main result for the infinite actions’ case, obtained by Algorithm 4, follows. Its proof can be found at Appendix E. Theorem 10. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (Algorithm 4) for the stochastic d-dimensional linear bandit problem with infinite action set whose expected regret is
$E[R_T] \le C \cdot \frac{d^{4}\log(d)\log^{2}\log(d)\log\log\log(d)}{\rho^{2}}\sqrt{T}\log^{3/2}(T)$,
where C > 0 is an absolute numerical constant, and its running time is polynomial in T d and 1/ρ.
Our algorithm for the infinite arm linear bandit case enjoys an expected regret of order Õ(poly(d) √ T ). We underline that the dependence of the regret on the time horizon is (almost) optimal, and we incur an extra d3 factor in the regret guarantee compared to the non-replicable algorithm of Esfandiari et al. (2021). We now comment on the time complexity of our algorithm. Remark 11. The current implementation of our algorithm requires time exponential in d. However, for a general convex set A, given access to a separation oracle for it and an oracle that computes an (approximate) G-optimal design, we can execute it in polynomial time and with polynomially many calls to the oracle. Notably, when A is a polytope such oracles exist. We underline that computational complexity issues also arise in the traditional setting of linear bandits with an infinite number of arms and the computational overhead that the replicability requirement adds is minimal. For further details, we refer to Appendix G.
6 CONCLUSION AND FUTURE DIRECTIONS
In this paper, we have provided a formal notion of reproducibility/replicability for stochastic bandits and we have developed algorithms for the multi-armed bandit and the linear bandit settings that satisfy this notion and enjoy a small regret decay compared to their non-replicable counterparts. We hope and believe that our paper will inspire future works in replicable algorithms for more complicated interactive learning settings such as reinforcement learning. We also provide experimental evaluation in Appendix H.
7 ACKNOWLEDGEMENTS
Alkis Kalavasis was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the “First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant”, project BALSAM, HFRIFM17-1424. Amin Karbasi acknowledges funding in direct support of this work from NSF (IIS-1845032), ONR (N00014- 19-1-2406), and the AI Institute for Learning-Enabled Optimization at Scale (TILOS). Andreas Krause was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program grant agreement no. 815943 and the Swiss National Science Foundation under NCCR Automation, grant agreement 51NF40 180545. Grigoris Velegkas was supported by NSF (IIS-1845032), an Onassis Foundation PhD Fellowship and a Bodossaki Foundation PhD Fellowship.
A THE PROOF OF THEOREM 3
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 1) for the stochastic bandit problem with K arms and gaps (∆j)j∈[K] whose expected regret is
$E[R_T] \le C \cdot \frac{K^{2}\log^{2}(T)}{\rho^{2}}\sum_{j:\Delta_j>0}\left(\Delta_j + \frac{\log(2KT\log(T))}{\Delta_j}\right)$,
where C > 0 is an absolute numerical constant, and its running time is polynomial in K,T and 1/ρ.
Proof. First, we claim that the algorithm is ρ-replicable: since the elimination decisions are taken in the same iterates and are based solely on the mean estimations, the replicability of the algorithm of Proposition 2 implies the replicability of the whole algorithm. In particular,
$\Pr[(a_1,\ldots,a_T) \neq (a'_1,\ldots,a'_T)] = \Pr[\exists i\in[B],\,\exists j\in[K] : \hat\mu^{(i)}_j \text{ was not replicable}] \le \rho$.
During each batch i, we draw for any active arm ⌊qi⌋ fresh samples for a total of ci samples and use the replicable mean estimation algorithm to estimate its mean. For an active arm, at the end of some batch i ∈ [B], we say that its estimation is “correct” if the estimation of its mean is within√ log(2KTB)/ci from the true mean. Using Proposition 2, the estimation of any active arm at the end of any batch (except possibly the last batch) is correct with probability at least 1− 1/(2KTB) and so, by the union bound, the probability that the estimation is incorrect for some arm at the end of some batch is bounded by 1/T . We remark that when δ < ρ, the sample complexity of Proposition 2 reduces to O(log(1/δ)/(τ2ρ2)). Let E denote the event that our estimates are correct. The total expected regret can be bounded as
E[RT ] ≤ T · 1/T +E[RT |E ] .
It suffices to bound the second term of the RHS and hence we can assume that each gap is correctly estimated within an additive factor of √ log(2KTB)/ci after batch i. First, due to the elimination condition, we get that the best arm is never eliminated. Next, we have that
$E[R_T \mid E] = \sum_{j:\Delta_j>0}\Delta_j\, E[T_j \mid E]$,
where Tj is the total number of pulls of arm j. Fix a sub-optimal arm j and assume that i + 1 was the last batch it was active. Since this arm is not eliminated at the end of batch i, and the estimations are correct, we have that
$\Delta_j \le \sqrt{\log(2KTB)/c_i}$,
and so $c_i \le \log(2KTB)/\Delta_j^{2}$. Hence, the number of pulls to get the desired bound due to Proposition 2 is (since we need to pull an arm $c_i/\rho_1^{2}$ times in order to get an estimate at distance $\sqrt{\log(1/\delta)/c_i}$ with probability $1-\delta$ in a $\rho_1$-replicable manner when $\delta < \rho_1$)
$T_j \le c_{i+1}/\rho_1^{2} = (q/\rho_1^{2})(1 + c_i) \le (q/\rho_1^{2})\cdot\left(1 + \log(2KTB)/\Delta_j^{2}\right)$.
This implies that the total regret is bounded by
$E[R_T] \le 1 + (q/\rho_1^{2})\cdot\sum_{j:\Delta_j>0}\left(\Delta_j + \frac{\log(2KTB)}{\Delta_j}\right)$.
We finally set q = T 1/B and B = log(T ). Moreover, we have that ρ1 = ρ/(KB). These yield
$E[R_T] \le \frac{K^{2}\log^{2}(T)}{\rho^{2}}\sum_{j:\Delta_j>0}\left(\Delta_j + \frac{\log(2KT\log(T))}{\Delta_j}\right)$.
This completes the proof.
B THE PROOF OF THEOREM 4
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 2) for the stochastic bandit problem with K arms and gaps (∆j)j∈[K] whose expected regret is
$E[R_T] \le C \cdot \frac{K^{2}}{\rho^{2}}\sum_{j:\Delta_j>0}\left(\Delta_j + \log(KT\log(T))/\Delta_j\right)$,
for some absolute numerical constant C > 0, and its running time is polynomial in K,T and 1/ρ.
To give some intuition, we begin with a non tight analysis which, however, provides the main ideas behind the actual proof.
Non Tight Analysis. Assume that the environment has K arms with unknown means µ_i and let T be the number of rounds. Let B be the total number of batches and β > 1. We set q = T^{1/B}. In each batch i ∈ [B], we pull each arm β⌊q^i⌋ times. Hence, after the i-th batch, we will have drawn $\tilde{c}_i = \sum_{1\le j\le i}\beta\lfloor q^j\rfloor$ independent and identically distributed samples from each arm. Let us also set $c_i = \sum_{1\le j\le i}\lfloor q^j\rfloor$.
Let us fix i ∈ [B]. Using Hoeffding’s bound for subgaussian concentration, the length of the confidence bound for arm j ∈ [K] that guarantees 1 − δ probability of success (in the sense that the empirical estimate µ̂j will be close to the true µj) is equal to
$\tilde{U}_i = \sqrt{2\log(1/\delta)/\tilde{c}_i}$,
when the estimator uses $\tilde{c}_i$ samples. Also, let $U_i = \sqrt{2\log(1/\delta)/c_i}$.
Assume that the active arms at the batch iteration i lie in the set Ai. Consider the estimates {µ̂(i)j }i∈[B],j∈Ai , where µ̂ (i) j is the empirical mean of arm j using c̃i samples. We will eliminate an arm j at the end of the batch iteration i if
$\hat\mu^{(i)}_j + \tilde{U}_i \le \max_{t\in A_i}\hat\mu^{(i)}_t - \bar{U}_i$,
where U i ∼ Uni[Ui/2, Ui]. For the remaining of the proof, we condition on the event E that for every arm j ∈ [K] and every batch i ∈ [B] the true mean is within Ũi from the empirical one. We first argue about the replicability of our algorithm. Consider a fixed round i (end of i-th batch) and a fixed arm j. Let i⋆ be the optimal empirical arm after the i-th batch.
Let µ̂(i) ′ j , µ̂ (i)′ i⋆ the empirical estimates of arms j, i ⋆ after the i-th batch, under some other execution of the algorithm. We condition on the event E ′ for the other execution as well. Notice that |µ̂(i) ′
j − µ̂ (i) j | ≤ 2Ũi, |µ̂ (i)′ i⋆ − µ̂ (i) i⋆ | ≤ 2Ũi. Notice that, since the randomness of U i is shared, if µ̂ (i) j + Ũi ≥ µ̂ (i) i⋆ − U i + 4Ũi, then the arm j will not be eliminated after the i-th batch in some other execution of the algorithm as well. Similarly, if µ̂(i)j + Ũi < µ̂ (i) i⋆ −U i − 4Ũi the the arm j will get eliminated after the i-th batch in some other execution of the algorithm as well. In particular, this means that if µ̂(i)j − 2Ũi > µ̂ (i) i⋆ + Ũi − Ui/2 then the arm j will not get eliminated in some other execution of the algorithm and if µ̂(i)j + 5Ũi < µ̂ (i) i⋆ − Ui then the arm j will also get eliminated in some other execution of the algorithm with probability 1 under the event E ∩ E ′. We call the above two cases good since they preserve replicability. Thus, it suffices to bound the probability that the decision about arm j will be different between the two executions when we are in neither of these cases. Then, the worst case bound due to the mass of the uniform probability measure is
$\frac{16\sqrt{2\log(1/\delta)/\tilde{c}_i}}{\sqrt{2\log(1/\delta)/c_i}}$.
This implies that the probability mass of the bad event is at most $16\sqrt{c_i/\tilde{c}_i} = 16\sqrt{1/\beta}$. A union bound over all arms and batches yields that the probability that two distinct executions differ in at least one pull is
$\Pr[(a_1,\ldots,a_T) \neq (a'_1,\ldots,a'_T)] \le 16KB\sqrt{1/\beta} + 2\delta$,
and since $\delta \le \rho$ it suffices to pick $\beta = 768K^{2}B^{2}/\rho^{2}$. We now focus on the regret of our algorithm. Let us set $\delta = 1/(KTB)$. Fix a sub-optimal arm $j$ and assume that batch $i+1$ was the last batch in which it was active. We obtain that the total number of pulls of this arm is
$T_j \le \tilde{c}_{i+1} \le \beta q(1 + c_i) \le \beta q\left(1 + 8\log(1/\delta)/\Delta_j^{2}\right)$.
From the replicability analysis, it suffices to take $\beta$ of order $K^{2}\log^{2}(T)/\rho^{2}$ and so
$E[R_T] \le T\cdot 1/T + E[R_T \mid E] = 1 + \sum_{j:\Delta_j>0}\Delta_j\, E[T_j \mid E] \le C\cdot\frac{K^{2}\log^{2}(T)}{\rho^{2}}\sum_{j:\Delta_j>0}\left(\Delta_j + \frac{\log(KT\log(T))}{\Delta_j}\right)$,
for some absolute constant C > 0.
Notice that the above analysis, which uses a naive union bound, does not yield the desired regret bound. We next provide a more tight analysis of the same algorithm that achieves the regret bound of Theorem 4.
Improved Analysis (The Proof of Theorem 4) In the previous analysis, we used a union bound over all arms and all batches in order to control the probability of the bad event. However, we can obtain an improved regret bound as follows. Fix a sub-optimal arm i ∈ [K] and let t be the first round that it appears in the bad event. We claim that after a constant number of rounds, this arm will be eliminated. This will shave the O(log2(T )) factor from the regret bound. Essentially, as indicated in the previous proof, the bad event corresponds to the case where the randomness of the cut-off threshold U can influence the decision of whether the algorithm eliminates an arm or not. The intuition is that during the rounds t and t+1, given that the two intervals intersected at round t, we know that the probability that they intersect again is quite small since the interval of the optimal mean is moving upwards, the interval of the sub-optimal mean is concentrating around the guess and the two estimations have been moved by at most a constant times the interval’s length.
Since the bad event occurs at round t, we know that
$\hat\mu^{(t)}_j \in \left[\hat\mu^{(t)}_{t^\star} - U_t - 5\tilde{U}_t,\; \hat\mu^{(t)}_{t^\star} - U_t/2 + 3\tilde{U}_t\right]$.
In the above, $\hat\mu^{(t)}_{t^\star}$ is the estimate of the optimal mean at round $t$, whose index is denoted by $t^\star$. Now assume that the bad event for arm $j$ also occurs at round $t+k$. Then, we have that
$\hat\mu^{(t+k)}_j \in \left[\hat\mu^{(t+k)}_{(t+k)^\star} - U_{t+k} - 5\tilde{U}_{t+k},\; \hat\mu^{(t+k)}_{(t+k)^\star} - U_{t+k}/2 + 3\tilde{U}_{t+k}\right]$.
First, notice that since the concentration inequality under event $E$ holds for rounds $t$ and $t+k$, we have that $\hat\mu^{(t+k)}_j \le \hat\mu^{(t)}_j + \tilde{U}_t + \tilde{U}_{t+k}$. Thus, combining it with the above inequalities gives us
$\hat\mu^{(t+k)}_{(t+k)^\star} - U_{t+k} - 5\tilde{U}_{t+k} \le \hat\mu^{(t+k)}_j \le \hat\mu^{(t)}_j + \tilde{U}_t + \tilde{U}_{t+k} \le \hat\mu^{(t)}_{t^\star} - U_t/2 + 4\tilde{U}_t + \tilde{U}_{t+k}$.
We now compare $\hat\mu^{(t)}_{t^\star}$ and $\hat\mu^{(t+k)}_{(t+k)^\star}$. Let $o$ denote the optimal arm. We have that
$\hat\mu^{(t+k)}_{(t+k)^\star} \ge \hat\mu^{(t+k)}_{o} \ge \mu_o - \tilde{U}_{t+k} \ge \mu_{t^\star} - \tilde{U}_{t+k} \ge \hat\mu^{(t)}_{t^\star} - \tilde{U}_t - \tilde{U}_{t+k}$.
This gives us that
$\hat\mu^{(t)}_{t^\star} - U_{t+k} - 6\tilde{U}_{t+k} - \tilde{U}_t \le \hat\mu^{(t+k)}_{(t+k)^\star} - U_{t+k} - 5\tilde{U}_{t+k}$.
Thus, we have established that
$\hat\mu^{(t)}_{t^\star} - U_{t+k} - 6\tilde{U}_{t+k} - \tilde{U}_t \le \hat\mu^{(t)}_{t^\star} - U_t/2 + 4\tilde{U}_t + \tilde{U}_{t+k} \implies U_{t+k} \ge U_t/2 - 7\tilde{U}_{t+k} - 5\tilde{U}_t \ge U_t/2 - 12\tilde{U}_t$.
Since $\beta \ge 2304$, we get that $12\tilde{U}_t \le U_t/4$. Thus, we get that
$U_{t+k} \ge U_t/4$.
Notice that $\frac{U_{t+k}}{U_t} = \sqrt{\frac{c_t}{c_{t+k}}}$, thus it immediately follows that
$\frac{c_t}{c_{t+k}} \ge \frac{1}{16} \implies \frac{q^{t+1}-1}{q^{t+k+1}-1} \ge \frac{1}{16} \implies 16\left(1 - \frac{1}{q^{t+1}}\right) \ge q^{k} - \frac{1}{q^{t+1}} \implies q^{k} \le 16 + \frac{1}{q^{t+1}} \le 17 \implies k\log q \le \log 17 \implies k \le 5$,
when we pick $B = \log(T)$ batches. Thus, for every arm the bad event can happen at most 6 times; by taking a union bound over the $K$ arms, we see that the probability that our algorithm is not replicable is at most $O(K\sqrt{1/\beta})$, so picking $\beta = \Theta(K^{2}/\rho^{2})$ suffices to get the result.
C THE PROOF OF THEOREM 6
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 3) for the stochastic d-dimensional linear bandit problem with K arms whose expected regret is
$E[R_T] \le C \cdot \frac{K^{2}}{\rho^{2}}\sqrt{dT\log(KT)}$,
for some absolute numerical constant C > 0, and its running time is polynomial in d,K, T and 1/ρ.
Proof. Let c, C be the numerical constants hidden in Lemma 5, i.e., the size of the multi-set is in the interval [cd log(1/δ)/ε2, Cd log(1/δ)/ε2]. We know that the size of each batch ni ∈ [cqi, Cqi] (see Lemma 5), so by the end of the B − 1 batch we will have less than nB pulls left. Hence, the number of batches is at most B.
We first define the event E that the estimates of all arms after the end of each batch are accurate, i.e., for every active arm a at the beginning of the i-th batch, at the end of the batch we have that∣∣∣〈a, θ̂i − θ⋆〉∣∣∣ ≤ ε̃i. Since δ = 1/(KT 2) and there are at most T batches and K active arms in each batch, a simple union bound shows that E happens with probability at least 1 − 1/T. We condition on the event E throughout the rest of the proof. We now argue about the regret bound of our algorithm. We first show that any optimal arm a∗ will not get eliminated. Indeed, consider any sub-optimal arm a ∈ [K] and any batch i ∈ [B]. Under the event E we have that
⟨a, θ̂i⟩ − ⟨a∗, θ̂i⟩ ≤ (⟨a, θ∗⟩+ ε̃i)− (⟨a∗, θ∗⟩ − ε̃i) < 2ε̃i < εi + εi.
Next, we need to bound the number of times we pull some fixed suboptimal arm a ∈ [K]. We let ∆ = ⟨a∗ − a, θ∗⟩ denote the gap and we let i be the smallest integer such that εi < ∆/4. We claim that this arm will get eliminated by the end of batch i. Indeed,
⟨a∗, θ̂i⟩ − ⟨a, θ̂i⟩ ≥ (⟨a∗, θ̂i⟩ − ε̃i)− (⟨a, θ̂i⟩+ ε̃i) = ∆− 2ε̃i > 4εi − 2ε̃i > ε̃i + εi.
This shows that during any batch i, all the active arms have gap at most 4εi−1. Thus, the regret of the algorithm conditioned on the event E is at most
$\sum_{i=1}^{B} 4 n_i \varepsilon_{i-1} \le 4\beta C\sum_{i=1}^{B} q^{i}\sqrt{d\log(KT^{2})/q^{i-1}} \le 6\beta C q\sqrt{d\log(KT)}\sum_{i=0}^{B-1} q^{i/2} \le O\left(\beta q^{B/2+1}\sqrt{d\log(KT)}\right) = O\left(\frac{K^{2}}{\rho^{2}} q^{B/2+1}\sqrt{d\log(KT)}\right) = O\left(\frac{K^{2}}{\rho^{2}} q\sqrt{dT\log(KT)}\right)$.
Thus, the overall regret is bounded by $\delta\cdot T + (1-\delta)\cdot O\left(\frac{K^{2}}{\rho^{2}} q\sqrt{dT\log(KT)}\right) = O\left(\frac{K^{2}}{\rho^{2}} q\sqrt{dT\log(KT)}\right)$.
We now argue about the replicability of our algorithm. The analysis follows in a similar fashion as in Theorem 4. Let θ̂i, θ̂′i be the LSE after the i-th batch, under two different executions of the algorithm and assume that the set of active arms. We condition on the event E ′ for the other execution as well. Assume that the set of active arms is the same under both executions at the beginning of batch i. Notice that since the set that is guaranteed by Lemma 5 is computed by a deterministic algorithm, both executions will pull the same arms in batch i. Consider a suboptimal arm a and let ai∗ = argmaxa∈A⟨θ̂i, a⟩, a′i∗ = argmaxa∈A⟨θ̂′i, a⟩. Under the event E ∩ E ′ we have that |⟨a, θ̂i − θ̂′i⟩| ≤ 2ε̃i, |⟨ai∗ , θ̂i − θ̂′i⟩| ≤ 2ε̃i, and |⟨a′i∗ , θ̂′i⟩ − ⟨ai∗ , θ̂i⟩| ≤ 2ε̃i. Notice that, since the randomness of εi is shared, if ⟨a, θ̂i⟩ + ε̃i ≥ ⟨ai∗ , θ̂i⟩ − εi + 4ε̃i, then the arm a will not be eliminated after the i-th batch in some other execution of the algorithm as well. Similarly, if ⟨a, θ̂i⟩+ ε̃i < ⟨ai∗ , θ̂i⟩− εi− 4ε̃i the the arm a will get eliminated after the i-th batch in some other execution of the algorithm as well. In particular, this means that if ⟨a, θ̂i⟩−2ε̃i > ⟨ai∗ , θ̂i⟩+ε̃i−εi/2 then the arm a will not get eliminated in some other execution of the algorithm and if ⟨a, θ̂i⟩+5ε̃i < ⟨ai∗ , θ̂i⟩ − εi then the arm j will also get eliminated in some other execution of the algorithm with probability 1 under the event E ∩E ′. Thus, it suffices to bound the probability that the decision about arm j will be different between the two executions when we are in neither of these cases. Then, the worst case bound due to the mass of the uniform probability measure is
$\frac{16\sqrt{d\log(1/\delta)/\tilde{c}_i}}{\sqrt{d\log(1/\delta)/c_i}}$.
This implies that the probability mass of the bad event is at most $16\sqrt{c_i/\tilde{c}_i} = 16\sqrt{1/\beta}$. A naive union bound would require us to pick $\beta = \Theta(K^{2}\log^{2}T/\rho^{2})$. We next show how to avoid the $\log^{2}T$ factor. Fix a sub-optimal arm $a\in[K]$ and let $t$ be the first round at which it appears in the bad event. Since the bad event occurs at round $t$, we know that
$\langle a, \hat\theta_t\rangle \in \left[\langle a_{t^*}, \hat\theta_t\rangle - \varepsilon_t - 5\tilde\varepsilon_t,\; \langle a_{t^*}, \hat\theta_t\rangle - \varepsilon_t/2 + 3\tilde\varepsilon_t\right]$.
In the above, $a_{t^*}$ is the optimal arm at round $t$ w.r.t. the LSE. Now assume that the bad event for arm $a$ also occurs at round $t+k$. Then, we have that
$\langle a, \hat\theta_{t+k}\rangle \in \left[\langle a_{(t+k)^*}, \hat\theta_{t+k}\rangle - \varepsilon_{t+k} - 5\tilde\varepsilon_{t+k},\; \langle a_{(t+k)^*}, \hat\theta_{t+k}\rangle - \varepsilon_{t+k}/2 + 3\tilde\varepsilon_{t+k}\right]$.
First, notice that since the concentration inequality under event E holds for rounds t, t+ k we have that ⟨a, θ̂t+k⟩ ≤ ⟨a, θ̂t⟩+ ε̃t + ε̃t+k. Thus, combining it with the above inequalities gives us ⟨a(t+k)∗ , θ̂t+k⟩− εt+k − 5ε̃t+k ≤ ⟨a, θ̂t+k⟩ ≤ ⟨a, θ̂t⟩+ ε̃t + ε̃t+k ≤ ⟨at∗ , θ̂t⟩− εt/2+ 4ε̃t + ε̃t+k. We now compare ⟨at∗ , θ̂t⟩, ⟨a(t+k)∗ , θ̂t+k⟩. Let a∗ denote the optimal arm. We have that ⟨a(t+k)∗ , θ̂t+k⟩ ≥ ⟨a∗, θ̂t+k⟩ ≥ ⟨a∗, θ∗⟩ − ε̃t+k ≥ ⟨at∗ , θ∗⟩ − ε̃t+k ≥ ⟨at∗ , θ̂t⟩ − ε̃t+k − ε̃t.
This gives us that
⟨at∗ , θ̂t⟩ − εt+k − 6ε̃t+k − ε̃t ≤ ⟨a(t+k)∗ , θ̂t+k⟩ − εt+k − 5ε̃t+k. Thus, we have established that
⟨at∗ , θ̂t⟩ − εt+k − 6ε̃t+k − ε̃t ≤ ⟨at∗ , θ̂t⟩ − εt/2 + 4ε̃t + ε̃t+k =⇒ εt+k ≥ εt/2− 7ε̃t+k − 5ε̃t ≥ εt/2− 12ε̃t.
Since β ≥ 2304, we get that 12ε̃t ≤ εt/4. Thus, we get that εt+k ≥ εt/4.
Notice that $\frac{\varepsilon_{t+k}}{\varepsilon_t} = \sqrt{\frac{q^{t}}{q^{t+k}}}$, thus it immediately follows that
$\frac{q^{t}}{q^{t+k}} \ge \frac{1}{16} \implies q^{k} \le 16 \implies k\log q \le \log 16 \implies k \le 4$,
when we pick $B = \log(T)$ batches. Thus, for every arm the bad event can happen at most 5 times; by taking a union bound over the $K$ arms, we see that the probability that our algorithm is not replicable is at most $O(K\sqrt{1/\beta})$, so picking $\beta = \Theta(K^{2}/\rho^{2})$ suffices to get the result.
D NAIVE APPLICATION OF ALGORITHM 3 WITH INFINITE ACTION SPACE
We use a $1/T^{1/(4d+2)}$-net that has size at most $(3T)^{\frac{d}{4d+2}}$. Let $A'$ be the new set of arms. We then run Algorithm 3 using $A'$. This gives us the following result, which is proved right after.
Corollary 12. Let $T\in\mathbb{N}$, $\rho\in(0,1]$. There is a $\rho$-replicable algorithm for the stochastic $d$-dimensional linear bandit problem with infinite arms whose expected regret is at most
$E[R_T] \le C \cdot \frac{T^{\frac{4d+1}{4d+2}}}{\rho^{2}}\sqrt{d\log(T)}$,
where C > 0 is an absolute numerical constant.
Proof. Since $K \le (3T)^{\frac{d}{4d+2}}$, we have that
$T\sup_{a\in A'}\langle a, \theta^*\rangle - E\left[\sum_{t=1}^{T}\langle a_t, \theta^*\rangle\right] \le O\left(\frac{(3T)^{\frac{2d}{4d+2}}}{\rho^{2}}\sqrt{dT\log\left(T(3T)^{\frac{d}{4d+2}}\right)}\right) = O\left(\frac{T^{\frac{4d+1}{4d+2}}}{\rho^{2}}\sqrt{d\log(T)}\right).$
Comparing to the best arm in $A$, we have that:
$T\sup_{a\in A}\langle a, \theta^*\rangle - E\left[\sum_{t=1}^{T}\langle a_t, \theta^*\rangle\right] = \left(T\sup_{a\in A}\langle a, \theta^*\rangle - T\sup_{a\in A'}\langle a, \theta^*\rangle\right) + \left(T\sup_{a\in A'}\langle a, \theta^*\rangle - E\left[\sum_{t=1}^{T}\langle a_t, \theta^*\rangle\right]\right).$
Our choice of the $1/T^{1/(4d+2)}$-net implies that for every $a\in A$ there exists some $a'\in A'$ such that $\|a-a'\|_2 \le 1/T^{1/(4d+2)}$. Thus, $\sup_{a\in A}\langle a, \theta^*\rangle - \sup_{a'\in A'}\langle a', \theta^*\rangle \le \|a-a'\|_2\|\theta^*\|_2 \le 1/T^{1/(4d+2)}$. Thus, the total regret is at most
$T\cdot 1/T^{1/(4d+2)} + O\left(\frac{T^{\frac{4d+1}{4d+2}}}{\rho^{2}}\sqrt{d\log(T)}\right) = O\left(\frac{T^{\frac{4d+1}{4d+2}}}{\rho^{2}}\sqrt{d\log(T)}\right)$.
E THE PROOF OF THEOREM 10
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 4) for the stochastic d-dimensional linear bandit problem with infinite action set whose expected regret is
$E[R_T] \le C \cdot \frac{d^{4}\log(d)\log^{2}\log(d)\log\log\log(d)}{\rho^{2}}\sqrt{T}\log^{3/2}(T)$,
for some absolute numerical constant C > 0, and its running time is polynomial in T d and 1/ρ.
Proof. First, the algorithm is ρ-replicable since in each batch we use a replicable LSE sub-routine with parameter ρ′ = ρ/B. This implies that
$\Pr[(a_1,\ldots,a_T) \neq (a'_1,\ldots,a'_T)] = \Pr[\exists i\in[B] : \hat\theta_i \text{ was not replicable}] \le \rho$.
Let us fix a batch iteration $i\in[B-1]$. Let $C_i$ be the core set computed by Lemma 7. The algorithm first pulls $n_i = \frac{Cd^{4}\log(d/\delta)\log^{2}\log(d)\log\log\log(d)}{\varepsilon_i^{2}\rho'^{2}}$ times each one of the arms of the $i$-th core set $C_i$, as indicated by Lemma 9, and computes the LSE $\hat\theta_i$ in a replicable way using the algorithm of Lemma 9. Let $E$ be the event that over all batches the estimations are correct. We pick $\delta = 1/(2|A'|T^{2})$ so that this good event holds with probability at least $1 - 1/T$. Our goal is to control the expected regret, which can be written as
$E[R_T] = T\sup_{a\in A}\langle a, \theta^{\star}\rangle - E\sum_{t=1}^{T}\langle a_t, \theta^{\star}\rangle$.
We have that
$T\sup_{a\in A}\langle a, \theta^{\star}\rangle - T\sup_{a'\in A'}\langle a', \theta^{\star}\rangle \le 1$,
since $A'$ is a deterministic $1/T$-net of $A$. Also, let us set the expected regret of the bounded action sub-problem as
$E[R'_T] = T\sup_{a'\in A'}\langle a', \theta^{\star}\rangle - E\sum_{t=1}^{T}\langle a_t, \theta^{\star}\rangle$.
We can now employ the analysis of the finite arm case. During batch i, any active arm has gap at most 4εi−1, so the instantaneous regret in any round is not more than 4εi−1. The expected regret conditional on the good event E is upper bounded by
$E[R'_T \mid E] \le \sum_{i=1}^{B} 4M_i\varepsilon_{i-1}$,
where Mi is the total number of pulls in batch i (using the replicability blow-up) and εi−1 is the error one would achieve by drawing qi samples (ignoring the blow-up). Then, for some absolute constant C > 0, we have that
$E[R'_T \mid E] \le \sum_{i=1}^{B} 4\left(q^{i}\,\frac{d^{3}\log(d)\log^{2}\log(d)\log\log\log(d)\log^{2}T}{\rho^{2}}\right)\cdot\sqrt{d^{2}\log(T)/q^{i-1}}$,
which yields that
$E[R'_T \mid E] \le C\,\frac{d^{4}\log(d)\log^{2}\log(d)\log\log\log(d)\log(T)\sqrt{\log(T)}}{\rho^{2}}\cdot S$,
where we set
$S := \sum_{i=1}^{B}\frac{q^{i}}{q^{(i-1)/2}} = q^{1/2}\sum_{i=1}^{B} q^{i/2} = q^{(1+B)/2}$.
We pick B = log(T) and get that, if q = T^{1/B}, then S = Θ(√T). We remark that this choice of q is valid since
Σ_{i=1}^B q^i = (q^{B+1} − q)/(q − 1) = Θ(q^B) ≥ T ρ^2 / (d^3 log(d) log^2 log(d) log log log(d)).
Hence, we have that
E[R′_T | E] ≤ O( (d^4 log(d) log^2 log(d) log log log(d) / ρ^2) · √T · log^{3/2}(T) ).
Note that E fails with probability at most 1/T, so its contribution to the expected regret is at most (1/T) · T = 1. This implies that the overall regret satisfies E[R_T] ≤ 2 + E[R′_T | E], which gives the desired bound and completes the proof.
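As a quick numerical check of the step S = Θ(√T) used above, the following sketch evaluates S directly for a few horizons; the rounding of B to an integer is an implementation choice on our part.

```python
# Sketch: check that S = sum_{i=1}^B q^i / q^{(i-1)/2} scales like sqrt(T)
# when B = log(T) and q = T^(1/B) (so q is roughly e), as used in the proof.
import math

def S(T: int) -> float:
    B = max(1, round(math.log(T)))
    q = T ** (1.0 / B)
    return sum(q ** i / q ** ((i - 1) / 2) for i in range(1, B + 1))

for T in (10**3, 10**5, 10**7):
    print(T, S(T), S(T) / math.sqrt(T))   # the ratio S(T)/sqrt(T) stays bounded
```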
F DEFERRED LEMMATA
F.1 THE PROOF OF LEMMA 8
Proof. Consider the distribution π that is a 2-approximation to the optimal G-design and has support |C| = O(d log log d). Let C′ be the set of arms in the support such that π(a) ≤ c/(d log d). We consider π̃ = (1 − x)π + x·1_a, where 1_a is the point mass on some a ∈ C′ and x will be specified later. Consider now the matrix V(π̃). Using the Sherman-Morrison formula, we have that
V(π̃)^{−1} = (1/(1 − x)) V(π)^{−1} − x V(π)^{−1} a a^⊤ V(π)^{−1} / ( (1 − x)^2 ( 1 + (x/(1 − x)) ||a||^2_{V(π)^{−1}} ) ) = (1/(1 − x)) ( V(π)^{−1} − x V(π)^{−1} a a^⊤ V(π)^{−1} / ( 1 − x + x ||a||^2_{V(π)^{−1}} ) ).
Consider any arm a′. Then,
||a′||^2_{V(π̃)^{−1}} = (1/(1 − x)) ( ||a′||^2_{V(π)^{−1}} − x (a^⊤ V(π)^{−1} a′)^2 / ( 1 − x + x ||a||^2_{V(π)^{−1}} ) ) ≤ (1/(1 − x)) ||a′||^2_{V(π)^{−1}}.
Note that we apply this transformation at most O(d log log d) times. Let π̂ be the distribution we end up with. We see that
||a′||^2_{V(π̂)^{−1}} ≤ (1/(1 − x))^{c·d log log d} ||a′||^2_{V(π)^{−1}} ≤ 2 (1/(1 − x))^{c·d log log d} · d.
Notice that there is a constant c′ such that when x = c′/(d log d) we have that (1/(1 − x))^{c·d log log d} ≤ 2. Moreover, notice that the mass of every arm is at least x(1 − x)^{|C|} ≥ x − |C|x^2 = c′/(d log(d)) − c′′ d log log d/(d^2 log^2(d)) ≥ c/(d log(d)), for some absolute numerical constant c > 0. This concludes the claim.
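The Sherman-Morrison manipulation above can be checked numerically; the sketch below verifies the stated form of V(π̃)^{−1} on random data (the dimension and the choice x = 1/(d log d) are illustrative).

```python
# Sketch: verify that V(pi_tilde) = (1 - x) V(pi) + x a a^T has the inverse stated above.
import numpy as np

rng = np.random.default_rng(0)
d = 5
arms = rng.normal(size=(3 * d, d))                      # random arms spanning R^d
pi = rng.dirichlet(np.ones(len(arms)))                  # a random design
V = sum(p * np.outer(a, a) for p, a in zip(pi, arms))
Vinv = np.linalg.inv(V)

a = arms[0]
x = 1.0 / (d * np.log(d))
V_tilde = (1 - x) * V + x * np.outer(a, a)

norm_a = a @ Vinv @ a                                   # ||a||^2 in the V(pi)^{-1} norm
Vinv_update = (Vinv - x * (Vinv @ np.outer(a, a) @ Vinv) / (1 - x + x * norm_a)) / (1 - x)

print(np.allclose(np.linalg.inv(V_tilde), Vinv_update))  # expected: True
```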
F.2 THE PROOF OF LEMMA 9
Proof. The proof works when we can treat Ω(⌈d log(1/δ)π(a)/ε2⌉) as Ω(d log(1/δ)π(a)/ε2), i.e., as long as π(a) = Ω(ε2/d log(1/δ)). In the regime we are in, this point is handled thanks to Lemma 8. Combining the following proof with Lemma 8, we can obtain the desired result.
We underline that we work in the fixed design setting: the arms ai are deterministically chosen independently of the rewards ri. Assume that the core set of Lemma 7 is the set C. Fix the multi-set S = {(ai, ri) : i ∈ [M ]}, where each arm a lies in the core set and is pulled na = Θ(π(a)d log(d) log(|C|/δ)/ε2) times2. Hence, we have that
M = ∑ a∈C na = Θ ( d log(d) log(|C|/δ)/ε2 ) .
Let also V = Σ_{i∈[M]} a_i a_i^⊤. The least-squares estimator can be written as
θ^{(ε)}_{LSE} = V^{−1} Σ_{i∈[M]} a_i r_i = V^{−1} Σ_{a∈C} a Σ_{i∈[n_a]} r_i(a),
where each a lies in the core set (deterministically) and ri(a) is the i-th reward generated independently by the linear regression process ⟨θ⋆, a⟩+ξ, where ξ is a fresh zero mean sub-gaussian random variable. Our goal is to reproducibly estimate the value ∑ i∈[na] ri(a) for any a. This is sufficient since two independent executions of the algorithm share the set C and na for any a. Note that the above sum is a random variable. In the following, we condition on the high-probability event that the average reward of the arm a is ε-close to the expected one, i.e., the value ⟨θ⋆, a⟩. This happens with probability at least 1− δ/(2|C|), given Ω(π(a)d log(d) log(|C|/δ)/ε2) samples from arm a ∈ C. In order to guarantee replicability, we will apply a result from Impagliazzo et al. (2022). Since we will union bound over all arms in the core set and |C| = O(d log log(d)) (via Lemma 7), we will make use of a (ρ/|C|)-replicable algorithm that gives an estimate v(a) ∈ R such that
|⟨θ⋆, a⟩ − v(a)| ≤ τ ,
with probability at least 1 − δ/(2|C|). For δ < ρ, the algorithm uses S_a = Ω( d^2 log(d/δ) log^2 log(d) log log log(d)/(ρ^2 τ^2) ) many samples from the linear regression with fixed arm a ∈ C. Since we have conditioned on the randomness of r_i(a) for any i, we get
| (1/n_a) Σ_{i∈[n_a]} r_i(a) − v(a) | ≤ | (1/n_a) Σ_{i∈[n_a]} r_i(a) − ⟨θ∗, a⟩ | + |⟨θ∗, a⟩ − v(a)| ≤ ε + τ,
with probability at least 1 − δ/(2|C|). Hence, by repeating this approach for all arms in the core set, we set θ_SQ = V^{−1} Σ_{a∈C} a · n_a · v(a). Let us condition on the randomness of the estimate θ^{(ε)}_{LSE}. We have that
sup_{a′∈A} |⟨a′, θ_SQ − θ⋆⟩| ≤ sup_{a′∈A} |⟨a′, θ_SQ − θ^{(ε)}_{LSE}⟩| + sup_{a′∈A} |⟨a′, θ^{(ε)}_{LSE} − θ⋆⟩|.
²Recall that π(a) ≥ c/(d log(d)), for some constant c > 0, so the previous expression is Ω(log(|C|/δ)/ε^2).
Note that the second term is at most ε with probability at least 1 − δ via Lemma 5. Our next goal is to tune the accuracy τ ∈ (0, 1) so that the first term yields another ε error. For the first term, we have that
sup_{a′∈A} |⟨a′, θ_SQ − θ^{(ε)}_{LSE}⟩| ≤ sup_{a′∈A} |⟨a′, V^{−1} Σ_{a∈C} a n_a (ε + τ)⟩|.
Note that V = (C d log(d) log(|C|/δ)/ε^2) Σ_{a∈C} π(a) a a^⊤ and so V^{−1} = (ε^2/(C d log(d) log(|C|/δ))) V(π)^{−1}, for some absolute constant C > 0. This implies that
sup_{a′∈A} |⟨a′, θ_SQ − θ^{(ε)}_{LSE}⟩| ≤ (ε + τ) sup_{a′∈A} |⟨a′, (ε^2/(C d log(d) log(|C|/δ))) V(π)^{−1} Σ_{a∈C} (C d log(d) log(|C|/δ) π(a)/ε^2) a⟩|.
Hence, we get that
sup_{a′∈A} |⟨a′, θ_SQ − θ^{(ε)}_{LSE}⟩| ≤ (ε + τ) sup_{a′∈A} |⟨a′, V(π)^{−1} Σ_{a∈C} π(a) a⟩|.
Consider a fixed arm a′ ∈ A. Then,
|⟨a′, V(π)^{−1} Σ_{a∈C} π(a) a⟩| ≤ Σ_{a∈C} π(a) |⟨a′, V(π)^{−1} a⟩| ≤ Σ_{a∈C} π(a) ( 1 + |⟨a′, V(π)^{−1} a⟩|^2 ) = 1 + Σ_{a∈C} π(a) |⟨a′, V(π)^{−1} a⟩|^2
| 1. What is the focus and contribution of the paper regarding reproducibility and bandits?
2. What are the strengths of the proposed approach, particularly in terms of introducing reproducibility and presenting algorithms step-by-step?
3. Do you have any concerns or questions about the paper's content, such as a potential error in step 9 of Algorithm 1?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The submission studies the intersection of reproducibility and bandits. The submission proposes and analyses four algorithms for stochastic MABs and stochastic linear bandits, with optimal dependency on the reproducibility parameter. The submission reveals a valuable insight into the relationship between replicability and the exploration-exploitation tradeoff.
Strengths And Weaknesses
(a) Introducing reproducibility to bandits provides a new perspective to contemplate the bandit problem.
(b) The submission presents its algorithms step-by-step, providing a clear picture of which parts are the novelties and how a novel design is combined with existing techniques to achieve the desired regret bound. The critical components of leveraging batch bandits, uniformly sampled thresholds (step 13 of Algorithm2 and step 13 of Algorithm 3), G-optimal design, LSE, and ReproducibleLSE become intuitive and easy to follow.
(c) The submission clearly explains the challenges and the intuitions of algorithm design. Regarding similar approaches, the submission carefully compares differences between the proposed method and published ones.
(d) The regret bounds offer insights into the impact of reproducible learning. The submission explains how additional rounds are spent to guarantee reproducibility and quantifies the extra factors compared to the conventional bounds.
Given the above, the results and contribution of this submission are substantial. The proofs are rigorous.
(e) The minor question is, in step 9 of Algorithm 1, is a "min" missing in the assignment of \tau?
Clarity, Quality, Novelty And Reproducibility
The paper is well-written. The novelties provide a new perspective and algorithms for studying bandit problems. |
ICLR | Title
Replicable Bandits
Abstract
In this paper, we introduce the notion of replicable policies in the context of stochastic bandits, one of the canonical problems in interactive learning. A policy in the bandit environment is called replicable if it pulls, with high probability, the exact same sequence of arms in two different and independent executions (i.e., under independent reward realizations). We show that not only do replicable policies exist, but also they achieve almost the same optimal (non-replicable) regret bounds in terms of the time horizon. More specifically, in the stochastic multi-armed bandits setting, we develop a policy with an optimal problem-dependent regret bound whose dependence on the replicability parameter is also optimal. Similarly, for stochastic linear bandits (with finitely and infinitely many arms) we develop replicable policies that achieve the best-known problem-independent regret bounds with an optimal dependency on the replicability parameter. Our results show that even though randomization is crucial for the exploration-exploitation trade-off, an optimal balance can still be achieved while pulling the exact same arms in two different rounds of executions.
1 INTRODUCTION
In order for scientific findings to be valid and reliable, the experimental process must be repeatable, and must provide coherent results and conclusions across these repetitions. In fact, lack of reproducibility has been a major issue in many scientific areas; a 2016 survey that appeared in Nature (Baker, 2016a) revealed that more than 70% of researchers failed in their attempt to reproduce another researcher’s experiments. What is even more concerning is that over 50% of them failed to reproduce their own findings. Similar concerns have been raised by the machine learning community, e.g., the ICLR 2019 Reproducibility Challenge (Pineau et al., 2019) and NeurIPS 2019 Reproducibility Program (Pineau et al., 2021), due to the exponential increase in the number of publications and concerns about the reliability of the reported findings.
The aforementioned empirical evidence has recently led to theoretical studies and rigorous definitions of replicability. In particular, the works of Impagliazzo et al. (2022) and Ahn et al. (2022) considered replicability as an algorithmic property through the lens of (offline) learning and convex optimization, respectively. In a similar vein, in the current work, we introduce the notion of replicability in the context of interactive learning and decision making. In particular, we study replicable policy design for the fundamental setting of stochastic bandits.
A multi-armed bandit (MAB) is a one-player game that is played over T rounds where there is a set of different arms/actions A of size |A| = K (in the more general case of linear bandits, we can consider even an infinite number of arms). In each round t = 1, 2, . . . , T , the player pulls an arm at ∈ A and receives a corresponding reward rt. In the stochastic setting, the rewards of each
arm are sampled in each round independently, from some fixed but unknown distribution supported on [0, 1]. Crucially, each arm has a potentially different reward distribution, but the distribution of each arm is fixed over time. A bandit algorithm A at every round t takes as input the sequence of arm-reward pairs that it has seen so far, i.e., (a_1, r_1), . . . , (a_{t−1}, r_{t−1}), then uses (potentially) some internal randomness ξ to pull an arm a_t ∈ A and, finally, observes the associated reward r_t ∼ D_{a_t}. We propose the following natural notion of a replicable bandit algorithm, which is inspired by the definition of Impagliazzo et al. (2022). Intuitively, a bandit algorithm is replicable if two distinct executions of the algorithm, with internal randomness fixed between both runs, but with independent reward realizations, give the exact same sequence of played arms, with high probability. More formally, we have the following definition. Definition 1 (Replicable Bandit Algorithm). Let ρ ∈ [0, 1]. We call a bandit algorithm A ρ-replicable in the stochastic setting if for any distribution D_{a_j} over [0, 1] of the rewards of the j-th arm a_j ∈ A, and for any two executions of A, where the internal randomness ξ is shared across the executions, it holds that
Pr_{ξ, r^{(1)}, r^{(2)}} [ (a^{(1)}_1, . . . , a^{(1)}_T) = (a^{(2)}_1, . . . , a^{(2)}_T) ] ≥ 1 − ρ.
Here, a^{(i)}_t = A(a^{(i)}_1, r^{(i)}_1, ..., a^{(i)}_{t−1}, r^{(i)}_{t−1}; ξ) is the t-th action taken by the algorithm A in execution i ∈ {1, 2}.
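Definition 1 suggests a simple empirical test: run the policy twice with the same internal seed ξ but fresh reward randomness and compare the two action sequences. The sketch below does exactly that for a toy ε-greedy policy, which is only a stand-in used for illustration (it is not one of the algorithms of this paper and is typically not replicable).

```python
# Sketch: an empirical replicability check in the spirit of Definition 1.
import numpy as np

def run_policy(means, T, xi_seed, reward_seed, eps=0.1):
    xi = np.random.default_rng(xi_seed)        # shared internal randomness
    env = np.random.default_rng(reward_seed)   # independent reward realizations
    K = len(means)
    counts, sums, actions = np.zeros(K), np.zeros(K), []
    for t in range(T):
        if t < K:
            a = t                              # pull each arm once
        elif xi.random() < eps:
            a = int(xi.integers(K))            # explore using shared randomness
        else:
            a = int(np.argmax(sums / np.maximum(counts, 1)))
        r = env.binomial(1, means[a])
        counts[a] += 1; sums[a] += r; actions.append(a)
    return actions

means = [0.5, 0.6]
seq1 = run_policy(means, 500, xi_seed=7, reward_seed=1)
seq2 = run_policy(means, 500, xi_seed=7, reward_seed=2)
print("identical sequences:", seq1 == seq2)    # typically False for epsilon-greedy
```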
The reason why we allow for some fixed internal randomness is that the algorithm designer has control over it, e.g., they can use the same seed for their (pseudo)random generator between two executions. Clearly, naively designing a replicable bandit algorithm is not quite challenging. For instance, an algorithm that always pulls the same arm or an algorithm that plays the arms in a particular random sequence determined by the shared random seed ξ are both replicable. The caveat is that the performance of these algorithms in terms of expected regret will be quite poor. In this work, we aim to design bandit algorithms which are replicable and enjoy small expected regret. In the stochastic setting, the (expected) regret after T rounds is defined as
E[R_T] = T max_{a∈A} µ_a − E[ Σ_{t=1}^T µ_{a_t} ],
where µ_a = E_{r∼D_a}[r] is the mean reward for arm a ∈ A. In a similar manner, we can define the regret in the more general setting of linear bandits (see Section 5). Hence, the overarching question in this work is the following:
Is it possible to design replicable bandit algorithms with small expected regret?
At a first glance, one might think that this is not possible, since it looks like replicability contradicts the exploratory behavior that a bandit algorithm should possess. However, our main results answer this question in the affirmative and can be summarized in Table 1.
1.1 RELATED WORK
Reproducibility/Replicability. In this work, we introduce the notion of replicability in the context of interactive learning and, in particular, in the fundamental setting of stochastic bandits. Close to our work, the notion of a replicable algorithm in the context of learning was proposed by Impagliazzo et al. (2022), where it is shown how any statistical query algorithm can be made replicable with a moderate increase in its sample complexity. Using this result, they provide replicable algorithms for finding approximate heavy-hitters, medians, and the learning of half-spaces. Reproducibility has been also considered in the context of optimization by Ahn et al. (2022). We mention that in Ahn et al. (2022) the notion of a replicable algorithm is different from our work and that of Impagliazzo et al. (2022), in the sense that the outputs of two different executions of the algorithm do not need to be exactly the same. From a more application-oriented perspective, Shamir & Lin (2022) study irreproducibility in recommendation systems and propose the use of smooth activations (instead of ReLUs) to improve recommendation reproducibility. In general, the reproducibility crisis is reported in various scientific disciplines Ioannidis (2005); McNutt (2014); Baker (2016b); Goodman et al. (2016); Lucic et al. (2018); Henderson et al. (2018). For more details we refer to the report of the NeurIPS 2019 Reproducibility Program Pineau et al. (2021) and the ICLR 2019 Reproducibility Challenge Pineau et al. (2019).
Bandit Algorithms. Stochastic multi-armed bandits for the general setting without structure have been studied extensively Slivkins (2019); Lattimore & Szepesvári (2020); Bubeck et al. (2012b); Auer et al. (2002); Cesa-Bianchi & Fischer (1998); Kaufmann et al. (2012a); Audibert et al. (2010); Agrawal & Goyal (2012); Kaufmann et al. (2012b). In this setting, the optimum regret achievable is O ( log(T ) ∑ i:∆i>0 ∆−1 ) ; this is achieved, e.g., by the upper confidence bound (UCB) algorithm of Auer et al. (2002). The setting of d-dimensional linear stochastic bandits is also well-explored Dani et al. (2008); Abbasi-Yadkori et al. (2011) under the well-specified linear reward model, achieving (near) optimal problem-independent regret of O(d √ T log(T )) Lattimore & Szepesvári (2020). Note that the best-known lower bound is Ω(d √ T ) Dani et al. (2008) and that the number of arms can, in principle, be unbounded. For a finite number of arms K, the best known upper bound is O( √ dT log(K)) Bubeck et al. (2012a). Our work focuses on the design of replicable bandit algorithms and we hence consider only stochastic environments. In general, there is also extensive work in adversarial bandits and we refer the interested reader to Lattimore & Szepesvári (2020).
Batched Bandits. While sequential bandit problems have been studied for almost a century, there is much interest in the batched setting too. In many settings, like medical trials, one has to take a lot of actions in parallel and observe their rewards later. The works of Auer & Ortner (2010) and CesaBianchi et al. (2013) provided sequential bandit algorithms which can easily work in the batched setting. The works of Gao et al. (2019) and Esfandiari et al. (2021) are focusing exclusively on the batched setting. Our work on replicable bandits builds upon some of the techniques from these two lines of work.
2 STOCHASTIC BANDITS AND REPLICABILITY
In this section, we first highlight the main challenges in order to guarantee replicability and then discuss how the results of Impagliazzo et al. (2022) can be applied in our setting.
2.1 WARM-UP I: NAIVE REPLICABILITY AND CHALLENGES
Let us consider the stochastic two-arm setting (K = 2) and a bandit algorithm A with two independent executions, A_1 and A_2. The algorithm A_i plays the sequence 1, 2, 1, 2, . . . until some, potentially random, round T_i ∈ N, after which one of the two arms is eliminated and, from that point on, the algorithm picks the winning arm j_i ∈ {1, 2}. The algorithm A is ρ-replicable if and only if T_1 = T_2 and j_1 = j_2 with probability at least 1 − ρ. Assume that |µ_1 − µ_2| = ∆, where µ_i is the mean of the distribution of the i-th arm. If we assume that ∆ is known, then we can run the algorithm for T_1 = T_2 = (C/∆^2) log(1/ρ) rounds, for some universal constant C > 0, and obtain that, with probability 1 − ρ, it will hold that µ̂^{(j)}_1 ≈ µ_1 and µ̂^{(j)}_2 ≈ µ_2 for j ∈ {1, 2}, where µ̂^{(j)}_i is the estimate of arm i's mean during execution j. Hence, knowing ∆ implies that the stopping criterion of the algorithm A is deterministic and that, with high probability, the winning arm will be detected at time T_1 = T_2. This makes the algorithm ρ-replicable.
Observe that when K = 2, the only obstacle to replicability is that the algorithm should decide at the same time to select the winning arm and the selection must be the same in the two execution threads. In the presence of multiple arms, there exists the additional constraint that the above conditions must be satisfied during, potentially, multiple arm eliminations. Hence, the two questions arising from the above discussion are (i) how to modify the above approach when ∆ is unknown and (ii) how to deal with K > 2 arms.
A potential solution to the second question (on handling K > 2 arms) is the Explore-Then-Commit (ETC) strategy. Consider the stochastic K-arm bandit setting. For any ρ ∈ (0, 1), the ETC algorithm with known ∆ = min_i ∆_i and horizon T that uses m = (4/∆^2) log(1/ρ) deterministic exploration phases before commitment is ρ-replicable. The intuition is exactly the same as in the K = 2 case. The caveats of this approach are that it assumes that ∆ is known and that the obtained regret is quite unsatisfying. In particular, it achieves regret bounded by m Σ_{i∈[K]} ∆_i + ρ · (T − mK) Σ_{i∈[K]} ∆_i.
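For concreteness, a minimal sketch of this ETC strategy follows; the Gaussian rewards and the specific means, gap, and horizon are illustrative assumptions.

```python
# Sketch of Explore-Then-Commit with a known gap Delta. The exploration length
# m = ceil((4 / Delta^2) * log(1 / rho)) is deterministic, so two runs commit at the
# same round; the committed arm can still differ with probability at most rho.
import math
import numpy as np

def etc(means, T, Delta, rho, seed):
    rng = np.random.default_rng(seed)
    K = len(means)
    m = math.ceil(4.0 / Delta ** 2 * math.log(1.0 / rho))
    sums = np.zeros(K)
    best, regret = max(means), 0.0
    for _ in range(m):                        # deterministic exploration phase
        for a in range(K):
            sums[a] += rng.normal(means[a], 1.0)
            regret += best - means[a]
    winner = int(np.argmax(sums))             # commit for the remaining rounds
    regret += (T - m * K) * (best - means[winner])
    return winner, regret

print(etc([0.3, 0.5], T=20_000, Delta=0.2, rho=0.05, seed=0))
```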
Next, we discuss how to improve the regret bound without knowing the gaps ∆i. Before designing new algorithms, we will inspect the guarantees that can be obtained by combining ideas from previous results in the bandits literature and the recent work in replicable learning of Impagliazzo et al. (2022).
2.2 WARM-UP II: BANDIT ALGORITHMS AND REPLICABLE MEAN ESTIMATION
First, we remark that we work in the stochastic setting and the distributions of the rewards of the arms are subgaussian. Thus, the problem of estimating their mean is an instance of a statistical query for which we can use the algorithm of Impagliazzo et al. (2022) to get a replicable mean estimator for the distributions of the rewards of the arms. Proposition 2 (Replicable Mean Estimation (Impagliazzo et al., 2022)). Let τ, δ, ρ ∈ [0, 1]. There exists a ρ-replicable algorithm ReprMeanEstimation that draws Ω( log(1/δ)/(τ^2(ρ−δ)^2) ) samples from a distribution with mean µ and computes an estimate µ̂ that satisfies |µ̂ − µ| ≤ τ with probability at least 1 − δ.
Notice that we are working in the regime where δ ≪ ρ, so the sample complexity is Ω( log(1/δ)/(τ^2ρ^2) ).
The straightforward approach is to try to use an optimal multi-armed algorithm for the stochastic setting, such as UCB or arm-elimination (Even-Dar et al., 2006), combined with the replicable mean estimator. However, it is not hard to see that this approach does not give meaningful results: if we want to achieve replicability ρ we need to call the replicable mean estimator routine with parameter ρ/(KT ), due to the union bound that we need to take. This means that we need to pull every arm at least K2T 2 times, so the regret guarantee becomes vacuous. This gives us the first key insight to tackle the problem: we need to reduce the number of calls to the mean estimator. Hence, we will draw inspiration from the line of work in stochastic batched bandits (Gao et al., 2019; Esfandiari et al., 2021) to derive replicable bandit algorithms.
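Before moving on, it may help to see one folklore way to make a single mean estimate replicable: round an accurate empirical mean to a grid whose offset is drawn from the shared internal randomness. The sketch below illustrates this idea; it is a simplified stand-in for Proposition 2, not the exact algorithm of Impagliazzo et al. (2022), and the grid width 4τ is an arbitrary illustrative choice.

```python
# Sketch: replicable mean estimation by rounding the empirical mean to a grid whose
# offset comes from the shared internal randomness. If two executions produce empirical
# means within tau of each other, they round to the same grid point unless the shared
# offset lands in a small region (of mass O(tau / width)).
import numpy as np

def repr_mean(samples, tau, shared_rng):
    width = 4 * tau                              # grid spacing (illustrative choice)
    offset = shared_rng.uniform(0, width)        # drawn from the shared randomness xi
    mu_hat = float(np.mean(samples))
    return offset + width * round((mu_hat - offset) / width)

def one_execution(reward_seed):
    env = np.random.default_rng(reward_seed)     # fresh reward randomness
    samples = env.normal(0.7, 1.0, size=40_000)
    xi = np.random.default_rng(123)              # shared internal seed across executions
    return repr_mean(samples, tau=0.02, shared_rng=xi)

print(one_execution(1), one_execution(2))        # identical outputs with high probability
```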
3 REPLICABLE MEAN ESTIMATION FOR BATCHED BANDITS
As a first step, we would like to show how one could combine the existing replicable algorithms of Impagliazzo et al. (2022) with the batched bandits approach of Esfandiari et al. (2021) to get some preliminary non-trivial results. We build an algorithm for the K-arm setting, where the gaps ∆j are unknown to the learner. Let δ be the confidence parameter of the arm elimination algorithm and ρ be the replicability guarantee we want to achieve. Our approach is the following: let us, deterministically, split the time interval into sub-intervals of increasing length. We treat each subinterval as a batch of samples where we pull each active arm the same number of times and use the replicable mean estimation algorithm to, empirically, compute the true mean. At the end of each batch, we decide to eliminate some arm j using the standard UCB estimate. Crucially, if we condition on the event that all the calls to the replicable mean estimator return the same number, then the algorithm we propose is replicable.
Algorithm 1 Mean-Estimation Based Replicable Algorithm for Stochastic MAB (Theorem 3)
1: Input: time horizon T, number of arms K, replicability ρ
2: Initialization: B ← log(T), q ← T^{1/B}, c_0 ← 0, A ← [K], r ← T, µ̂_a ← 0, ∀a ∈ A
3: for i = 1 to B − 1 do
4:   if ⌊q^i⌋ · |A| > r then
5:     break
6:   c_i = c_{i−1} + ⌊q^i⌋
7:   Pull every arm a ∈ A for ⌊q^i⌋ times
8:   for a ∈ A do
9:     µ̂_a ← ReprMeanEstimation(δ = 1/(2KTB), τ = 1, √(log(2KTB)/c_i), ρ′ = ρ/(KB)) ▷ Proposition 2
10:  r ← r − |A| · ⌊q^i⌋
11:  for a ∈ A do
12:    if µ̂_a < max_{a∈A} µ̂_a − 2τ then
13:      Remove a from A
14: In the last batch play the arm from A with the smallest index
Theorem 3. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 1) for the stochastic bandit problem with K arms and gaps (∆j)j∈[K] whose expected regret is
E[R_T] ≤ C · (K^2 log^2(T)/ρ^2) · Σ_{j:∆_j>0} ( ∆_j + log(KT log(T))/∆_j ),
where C > 0 is an absolute numerical constant, and its running time is polynomial in K,T and 1/ρ.
The above result, whose proof can be found in Appendix A, states that, by combining the tools from Impagliazzo et al. (2022) and Esfandiari et al. (2021), we can design a replicable bandit algorithm with (instance-dependent) expected regret O(K2 log3(T )/ρ2). Notice that the regret guarantee has an extra K2 log2(T )/ρ2 factor compared to its non-replicable counterpart in Esfandiari et al. (2021) (Theorem 5.1). This is because, due to a union bound over the rounds and the arms, we need to call the replicable mean estimator with parameter ρ/(K log(T )). In the next section, we show how to get rid of the log2(T ) by designing a new algorithm.
4 IMPROVED ALGORITHMS FOR REPLICABLE STOCHASTIC BANDITS
While the previous result provides a non-trivial regret bound, it is not optimal with respect to the time horizon T . In this section, we show how to improve it by designing a new algorithm, presented in Algorithm 2, which satisfies the guarantees of Theorem 4 and, essentially, decreases the dependence on the time horizon T from log3(T ) to log(T ). Our main result for replicable stochastic multi-armed bandits with K arms follows. Theorem 4. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 2) for the stochastic bandit problem with K arms and gaps (∆j)j∈[K] whose expected regret is
E[R_T] ≤ C · (K^2/ρ^2) · Σ_{j:∆_j>0} ( ∆_j + log(KT log(T))/∆_j ),
where C > 0 is an absolute numerical constant, and its running time is polynomial in K,T and 1/ρ.
Note that, compared to the non-replicable setting, we incur an extra factor of K2/ρ2 in the regret. The proof can be found in Appendix B. Let us now describe how Algorithm 2 works. We decompose the time horizon into B = log(T ) batches. Without the replicability constraint, one could draw qi samples in batch i from each arm and estimate the mean reward. With the replicability constraint, we have to boost this: in each batch i, we pull each active arm O(βqi) times, for some q to be determined, where β = O(K2/ρ2) is the replicability blow-up. Using these samples, we compute
Algorithm 2 Replicable Algorithm for Stochastic Multi-Armed Bandits (Theorem 4)
1: Input: time horizon T, number of arms K, replicability ρ
2: Initialization: B ← log(T), q ← T^{1/B}, c_0 ← 0, A_0 ← [K], r ← T, µ̂_a ← 0, ∀a ∈ A_0
3: β ← ⌊max{K^2/ρ^2, 2304}⌋
4: for i = 1 to B − 1 do
5:   if β⌊q^i⌋ · |A_i| > r then
6:     break
7:   A_i ← A_{i−1}
8:   for a ∈ A_i do
9:     Pull arm a for β⌊q^i⌋ times
10:    Compute the empirical mean µ̂^{(i)}_a
11:  c_i ← c_{i−1} + ⌊q^i⌋
12:  c̃_i ← βc_i
13:  Ũ_i ← √(2 ln(2KTB)/c̃_i)
14:  U_i ← √(2 ln(2KTB)/c_i)
15:  U̅_i ← Uni[U_i/2, U_i]
16:  r ← r − β · |A_i| · ⌊q^i⌋
17:  for a ∈ A_i do
18:    if µ̂^{(i)}_a + Ũ_i < max_{a∈A_i} µ̂^{(i)}_a − U̅_i then
19:      Remove a from A_i
20: In the last batch play the arm from A_{B−1} with the smallest index
the empirical mean µ̂(i)α for any active arm α. Note that Ũi in Algorithm 2 corresponds to the size of the actual confidence interval of the estimation and Ui corresponds to the confidence interval of an algorithm that does not use the β-blow-up in the number of samples. The novelty of our approach comes from the choice of the interval around the mean of the maximum arm: we pick a threshold uniformly at random from an interval of size Ui/2 around the maximum mean. Then, the algorithm checks whether µ̂(i)a + Ũi < max µ̂ (i) a′ − U i, where max runs over the active arms a′ in batch i, and eliminates arms accordingly. To prove the result we show that there are three regions that some arm j can be in relative to the confidence interval of the best arm in batch i (cf. Appendix B). If it lies in two of these regions, then the decision of whether to keep it or discard it is the same in both executions of the algorithm. However, if it is in the third region, the decision could be different between parallel executions, and since it relies on some external and unknown randomness, it is not clear how to reason about it. To overcome this issue, we use the random threshold to argue about the probability that the decision between two executions differs. The crucial observation that allows us to get rid of the extra log2(T ) factor is that there are correlations between consecutive batches: we prove that if some arm j lies in this “bad” region in some batch i, then it will be outside this region after a constant number of batches.
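The following sketch condenses one batch of Algorithm 2's elimination rule, including the shared random threshold U̅_i ∼ Uni[U_i/2, U_i]; the arm means, the value of β, and the simulation of the empirical means by single Gaussian draws are illustrative assumptions, not part of the algorithm itself.

```python
# Sketch of one batch of the elimination rule of Algorithm 2. An arm is dropped when its
# empirical mean plus Utilde_i falls below the best empirical mean minus a threshold
# Ubar_i drawn (from the shared randomness) uniformly in [U_i/2, U_i].
import numpy as np

def batch_eliminate(means, active, i, q, beta, delta, shared_rng, env_rng):
    c = sum(int(q ** j) for j in range(1, i + 1))     # pulls per arm ignoring the blow-up
    c_tilde = beta * c                                # pulls per arm with the blow-up
    U = np.sqrt(2 * np.log(1 / delta) / c)
    U_tilde = np.sqrt(2 * np.log(1 / delta) / c_tilde)
    U_bar = shared_rng.uniform(U / 2, U)              # shared random threshold
    # simulate the empirical means directly (standard deviation 1/sqrt(c_tilde))
    mu_hat = {a: env_rng.normal(means[a], 1 / np.sqrt(c_tilde)) for a in active}
    best = max(mu_hat.values())
    return [a for a in active if mu_hat[a] + U_tilde >= best - U_bar]

means = [0.9, 0.85, 0.5, 0.2]
active = list(range(len(means)))
shared, env = np.random.default_rng(0), np.random.default_rng(42)
for i in range(1, 6):
    active = batch_eliminate(means, active, i, q=3.0, beta=400,
                             delta=1e-3, shared_rng=shared, env_rng=env)
    print(f"batch {i}: active arms {active}")
```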
5 REPLICABLE STOCHASTIC LINEAR BANDITS
We now investigate replicability in the more general setting of stochastic linear bandits. In this setting, each arm is a vector a ∈ Rd belonging to some action set A ⊆ Rd, and there is a parameter θ⋆ ∈ Rd unknown to the player. In round t, the player chooses some action at ∈ A and receives a reward rt = ⟨θ⋆, at⟩ + ηt, where ηt is a zero-mean 1-subgaussian random variable independent of any other source of randomness. This means that E[ηt] = 0 and satisfies E[exp(ληt)] ≤ exp(λ2/2) for any λ ∈ R. For normalization purposes, it is standard to assume that ∥θ⋆∥2 ≤ 1 and supa∈A ∥a∥2 ≤ 1. In the linear setting, the expected regret after T pulls a1, . . . , aT can be written as
E[R_T] = T sup_{a∈A} ⟨θ⋆, a⟩ − E[ Σ_{t=1}^T ⟨θ⋆, a_t⟩ ].
In Section 5.1 we provide results for the finite action space case, i.e., when |A| = K. Next, in Section 5.2, we study replicable linear bandit algorithms when dealing with infinite action spaces. In the following, we work in the regime where T ≫ d. We underline that our approach leverages connections of stochastic linear bandits with G-optimal experiment design, core sets constructions, and least-squares estimators. Roughly speaking, the goal of G-optimal design is to find a (small) subset of arms A′, which is called the core set, and define a distribution π over them with the following property: for any ε > 0, δ > 0 pulling only these arms for an appropriate number of times and computing the least-squares estimate θ̂ guarantees that supa∈A⟨a, θ∗− θ̂⟩ ≤ ε, with probability 1−δ. For an extensive discussion, we refer to Chapters 21 and 22 of Lattimore & Szepesvári (2020).
5.1 FINITE ACTION SET
We first introduce a lemma that allows us to reduce the size of the action set that our algorithm has to search over.
Lemma 5 (See Chapters 21 and 22 in Lattimore & Szepesvári (2020)). For any finite action set A that spans Rd and any δ, ε > 0, there exists an algorithm that, in time polynomial in d, computes a multi-set of Θ(d log(1/δ)/ε2+d log log d) actions (possibly with repetitions) such that (i) they span Rd and (ii) if we perform these actions in a batched stochastic d-dimensional linear bandits setting with true parameter θ⋆ ∈ Rd and let θ̂ be the least-squares estimate for θ⋆, then, for any a ∈ A, with probability at least 1− δ, we have
|⟨a, θ⋆ − θ̂⟩| ≤ ε.
Essentially, the multi-set in Lemma 5 is obtained using an approximate G-optimal design algorithm. Thus, it is crucial to check whether this can be done in a replicable manner. Recall that the above set of distinct actions is called the core set and is the solution of an (approximate) G-optimal design problem. To be more specific, consider a distribution π : A → [0, 1] and define V(π) = Σ_{a∈A} π(a) a a^⊤ ∈ R^{d×d} and g(π) = sup_{a∈A} ||a||^2_{V(π)^{−1}}. The distribution π is called a design and the goal of G-optimal design is to find a design that minimizes g. Since the number of actions is finite, this problem reduces to an optimization problem which can be solved efficiently using standard optimization methods (e.g., the Frank-Wolfe method). Since the initialization is the same, the algorithm that finds the optimal (or an approximately optimal) design is replicable under the assumption that the gradients and the projections do not have numerical errors. This perspective is orthogonal to the work of Ahn et al. (2022), which defines reproducibility from a different viewpoint.
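For completeness, a Frank-Wolfe-style iteration for an approximate G-optimal design is sketched below; the step-size rule is the standard closed form for this objective, while the number of iterations and the random arm set are illustrative choices on our part.

```python
# Sketch: Frank-Wolfe iteration for an approximate G-optimal design. In each step, the
# arm with the largest norm ||a||^2_{V(pi)^{-1}} receives extra mass.
import numpy as np

def g_optimal_design(arms: np.ndarray, iters: int = 200) -> np.ndarray:
    n, d = arms.shape
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        V = arms.T @ (arms * pi[:, None])
        Vinv = np.linalg.inv(V)
        norms = np.einsum("ij,jk,ik->i", arms, Vinv, arms)   # ||a||^2_{V(pi)^{-1}}
        j = int(np.argmax(norms))
        gamma = (norms[j] / d - 1.0) / (norms[j] - 1.0)      # closed-form step size
        pi = (1 - gamma) * pi
        pi[j] += gamma
    return pi

rng = np.random.default_rng(0)
arms = rng.normal(size=(50, 4))
pi = g_optimal_design(arms)
V = arms.T @ (arms * pi[:, None])
g = np.max(np.einsum("ij,jk,ik->i", arms, np.linalg.inv(V), arms))
print(g, arms.shape[1])   # by Kiefer-Wolfowitz, the optimal g equals d, so g should be near 4
```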
Algorithm 3 Replicable Algorithm for Stochastic Linear Bandits (Theorem 6)
1: Input: number of arms K, time horizon T, replicability ρ
2: Initialization: B ← log(T), q ← (T/c)^{1/B}, A ← [K], r ← T
3: β ← ⌊max{K^2/ρ^2, 2304}⌋
4: for i = 1 to B − 1 do
5:   ε̃_i = √(d log(KT^2)/(βq^i))
6:   ε_i = √(d log(KT^2)/q^i)
7:   n_i = 10 d log(KT^2)/ε_i^2
8:   a_1, . . . , a_{n_i} ← multi-set given by Lemma 5 with parameters δ = 1/(KT^2) and ε = ε̃_i
9:   if n_i > r then
10:    break
11:  Pull every arm a_1, . . . , a_{n_i} and receive rewards r_1, . . . , r_{n_i}
12:  Compute the LSE θ̂_i ← (Σ_{j=1}^{n_i} a_j a_j^⊤)^{−1} (Σ_{j=1}^{n_i} a_j r_j)
13:  ε̅_i ← Uni[ε_i/2, ε_i]
14:  r ← r − n_i
15:  for a ∈ A do
16:    if ⟨a, θ̂_i⟩ + ε̃_i < max_{a∈A} ⟨a, θ̂_i⟩ − ε̅_i then
17:      Remove a from A
18: In the last batch play argmax_{a∈A} ⟨a, θ̂_{B−1}⟩
In our batched bandit algorithm (Algorithm 3), the multi-set of arms a1, . . . , ani computed in each batch is obtained via a deterministic algorithm with runtime poly(K, d), where |A| = K. Hence, the
multi-set will be the same in two different executions of the algorithm. On the other hand, the LSE will not be since it depends on the stochastic rewards. We apply the techniques that we developed in the replicable stochastic MAB setting in order to design our algorithm. Our main result for replicable d-dimensional stochastic linear bandits with K arms follows. For the proof, we refer to Appendix C. Theorem 6. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm for the stochastic ddimensional linear bandit problem with K arms whose expected regret is
E[R_T] ≤ C · (K^2/ρ^2) · √(dT log(KT)),
where C > 0 is an absolute numerical constant, and its running time is polynomial in d,K, T and 1/ρ.
Note that the best known non-replicable algorithm achieves an upper bound of Õ( √ dT log(K)) and, hence, our algorithm incurs a replicability overhead of order K2/ρ2. The intuition behind the proof is similar to the multi-armed bandit setting in Section 4.
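The least-squares estimate computed in step 12 of Algorithm 3 is a one-liner; the sketch below shows it on synthetic data, with a random multi-set of arms standing in for the G-optimal multi-set of Lemma 5.

```python
# Sketch: the batched least-squares estimate theta_hat = (sum a a^T)^{-1} (sum a r).
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 2_000
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)          # normalized unknown parameter

A = rng.normal(size=(n, d)) / np.sqrt(d)          # pulled arms (illustrative stand-in)
rewards = A @ theta_star + rng.normal(size=n)     # linear rewards with 1-subgaussian noise

V = A.T @ A
theta_hat = np.linalg.solve(V, A.T @ rewards)     # the LSE of step 12
print(np.max(np.abs(A @ (theta_hat - theta_star))))   # worst estimation error over pulled arms
```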
5.2 INFINITE ACTION SET
Let us proceed to the setting where the action set A is unbounded. Unfortunately, even when d = 1, we cannot directly get an algorithm that has satisfactory regret guarantees by discretizing the space and using Algorithm 3. The approach of Esfandiari et al. (2021) is to discretize the action space and use a 1/T-net to cover it, i.e., a set A′ ⊆ A such that for all a ∈ A there exists some a′ ∈ A′ with ||a − a′||_2 ≤ 1/T. It is known that there exists such a net of size at most (3T)^d (Vershynin, 2018, Corollary 4.2.13). Then, they apply the algorithm for the finite arms setting, increasing their regret guarantee by a factor of √d. However, our replicable algorithm for this setting contains an additional factor of K^2 in the regret bound. Thus, even when d = 1, our regret guarantee is greater than T, so the bound is vacuous. One way to fix this issue and get a sublinear regret guarantee is to use a smaller net. We use a 1/T^{1/(4d+2)}-net that has size at most (3T)^{d/(4d+2)} and this yields an expected regret of order O(T^{(4d+1)/(4d+2)} √(d log(T))/ρ^2). For further details, we refer to Appendix D.
Even though the regret guarantee we managed to get using the smaller net of Appendix D is sublinear in T , it is not a satisfactory bound. The next step is to provide an algorithm for the infinite action setting using a replicable LSE subroutine combined with the batching approach of Esfandiari et al. (2021). We will make use of the next lemma. Lemma 7 (Section 21.2 Note 3 of Lattimore & Szepesvári (2020)). There exists a deterministic algorithm that, given an action space A ⊆ Rd, computes a 2-approximate G-optimal design π with a core set of size O(d log log(d)).
We additionally prove the next useful lemma, which, essentially, states that we can assume without loss of generality that every arm in the support of π has mass at least Ω(1/(d log(d))). We refer to Appendix F.1 for the proof. Lemma 8 (Effective Support). Let π be the distribution that corresponds to the 2-approximate optimal G-design of Lemma 7 with input A. Assume that π(a) ≤ c/(d log(d)), where c > 0 is some absolute numerical constant, for some arm a in the core set. Then, we can construct a distribution π̂ such that, for any arm a in the core set, π̂(a) ≥ C/(d log(d)), where C > 0 is an absolute constant, so that it holds
sup_{a′∈A} ∥a′∥^2_{V(π̂)^{−1}} ≤ 4d.
The upcoming lemma is a replicable algorithm for the least-squares estimator and, essentially, builds upon Lemma 7 and Lemma 8. Its proof can be found in Appendix F.2. Lemma 9 (Replicable LSE). Let ρ, ε ∈ (0, 1] and 0 < δ ≤ min{ρ, 1/d}¹. Consider an environment of d-dimensional stochastic linear bandits with infinite action space A. Assume that π is a 4-approximate optimal design with associated core set C as computed by Lemma 7 with input A. There exists a ρ-replicable algorithm that pulls each arm a ∈ C a total of
Ω( d^4 log(d/δ) log^2 log(d) log log log(d) / (ε^2 ρ^2) )
times and outputs θ_SQ that satisfies sup_{a∈A} |⟨a, θ_SQ − θ⋆⟩| ≤ ε, with probability at least 1 − δ.
¹We can handle the case of 0 < δ ≤ d by paying an extra log d factor in the sample complexity.
Algorithm 4 Replicable LSE Algorithm for Stochastic Infinite Action Set (Theorem 10)
1: Input: time horizon T, action set A ⊆ R^d, replicability ρ
2: A′ ← 1/T-net of A
3: Initialization: r ← T, B ← log(T), q ← (T/c)^{1/B}
4: for i = 1 to B − 1 do
5:   q^i denotes the number of pulls of all arms before the replicability blow-up
6:   ε_i = c · d √(log(T)/q^i)
7:   The blow-up is M_i = q^i · d^3 log(d) log^2 log(d) log log log(d) log^2(T)/ρ^2
8:   a_1, . . . , a_{|C_i|} ← core set C_i of the design given by Lemma 7 with parameter A′
9:   if ⌈M_i⌉ > r then
10:    break
11:  Pull every arm a_j for N_i = ⌈M_i⌉/|C_i| rounds and receive rewards r^{(j)}_1, ..., r^{(j)}_{N_i} for j ∈ [|C_i|]
12:  S_i = {(a_j, r^{(j)}_t) : t ∈ [N_i], j ∈ [|C_i|]}
13:  θ̂_i ← ReplicableLSE(S_i, ρ′ = ρ/(dB), δ = 1/(2|A′|T^2), τ = min{ε_i, 1})
14:  r ← r − ⌈M_i⌉
15:  for a ∈ A′ do
16:    if ⟨a, θ̂_i⟩ < max_{a∈A′} ⟨a, θ̂_i⟩ − 2ε_i then
17:      Remove a from A′
18: In the last batch play argmax_{a∈A′} ⟨a, θ̂_{B−1}⟩
19:
20: ReplicableLSE(S, ρ, δ, τ)
21: for a ∈ C do
22:   v(a) ← ReplicableSQ(ϕ : x ∈ R ↦ x ∈ R, S, ρ, δ, τ) ▷ Impagliazzo et al. (2022)
23: return (Σ_{j∈[|S|]} a_j a_j^⊤)^{−1} · (Σ_{a∈C} a · n_a · v(a))
The main result for the infinite-action case, obtained by Algorithm 4, follows. Its proof can be found in Appendix E. Theorem 10. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (Algorithm 4) for the stochastic d-dimensional linear bandit problem with infinite action set whose expected regret is
E[R_T] ≤ C · (d^4 log(d) log^2 log(d) log log log(d)/ρ^2) · √T · log^{3/2}(T),
where C > 0 is an absolute numerical constant, and its running time is polynomial in T^d and 1/ρ.
Our algorithm for the infinite arm linear bandit case enjoys an expected regret of order Õ(poly(d) √ T ). We underline that the dependence of the regret on the time horizon is (almost) optimal, and we incur an extra d3 factor in the regret guarantee compared to the non-replicable algorithm of Esfandiari et al. (2021). We now comment on the time complexity of our algorithm. Remark 11. The current implementation of our algorithm requires time exponential in d. However, for a general convex set A, given access to a separation oracle for it and an oracle that computes an (approximate) G-optimal design, we can execute it in polynomial time and with polynomially many calls to the oracle. Notably, when A is a polytope such oracles exist. We underline that computational complexity issues also arise in the traditional setting of linear bandits with an infinite number of arms and the computational overhead that the replicability requirement adds is minimal. For further details, we refer to Appendix G.
6 CONCLUSION AND FUTURE DIRECTIONS
In this paper, we have provided a formal notion of reproducibility/replicability for stochastic bandits and we have developed algorithms for the multi-armed bandit and the linear bandit settings that satisfy this notion and incur only a small regret overhead compared to their non-replicable counterparts. We hope and believe that our paper will inspire future works in replicable algorithms for more complicated interactive learning settings such as reinforcement learning. We also provide an experimental evaluation in Appendix H.
7 ACKNOWLEDGEMENTS
Alkis Kalavasis was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the “First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant”, project BALSAM, HFRIFM17-1424. Amin Karbasi acknowledges funding in direct support of this work from NSF (IIS-1845032), ONR (N00014- 19-1-2406), and the AI Institute for Learning-Enabled Optimization at Scale (TILOS). Andreas Krause was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program grant agreement no. 815943 and the Swiss National Science Foundation under NCCR Automation, grant agreement 51NF40 180545. Grigoris Velegkas was supported by NSF (IIS-1845032), an Onassis Foundation PhD Fellowship and a Bodossaki Foundation PhD Fellowship.
A THE PROOF OF THEOREM 3
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 1) for the stochastic bandit problem with K arms and gaps (∆j)j∈[K] whose expected regret is
E[R_T] ≤ C · (K^2 log^2(T)/ρ^2) · Σ_{j:∆_j>0} ( ∆_j + log(2KT log(T))/∆_j ),
where C > 0 is an absolute numerical constant, and its running time is polynomial in K,T and 1/ρ.
Proof. First, we claim that the algorithm is ρ-replicable: since the elimination decisions are taken in the same iterates and are based solely on the mean estimations, the replicability of the algorithm of Proposition 2 implies the replicability of the whole algorithm. In particular,
Pr[(a1, ..., aT ) ̸= (a′1, ..., a′T )] = Pr[∃i ∈ [B],∃j ∈ [K] : µ̂ (i) j was not replicable] ≤ ρ .
During each batch i, we draw for any active arm ⌊qi⌋ fresh samples for a total of ci samples and use the replicable mean estimation algorithm to estimate its mean. For an active arm, at the end of some batch i ∈ [B], we say that its estimation is “correct” if the estimation of its mean is within√ log(2KTB)/ci from the true mean. Using Proposition 2, the estimation of any active arm at the end of any batch (except possibly the last batch) is correct with probability at least 1− 1/(2KTB) and so, by the union bound, the probability that the estimation is incorrect for some arm at the end of some batch is bounded by 1/T . We remark that when δ < ρ, the sample complexity of Proposition 2 reduces to O(log(1/δ)/(τ2ρ2)). Let E denote the event that our estimates are correct. The total expected regret can be bounded as
E[RT ] ≤ T · 1/T +E[RT |E ] .
It suffices to bound the second term of the RHS and hence we can assume that each gap is correctly estimated within an additive factor of √ log(2KTB)/ci after batch i. First, due to the elimination condition, we get that the best arm is never eliminated. Next, we have that
E[R_T | E] = Σ_{j:∆_j>0} ∆_j E[T_j | E],
where Tj is the total number of pulls of arm j. Fix a sub-optimal arm j and assume that i + 1 was the last batch it was active. Since this arm is not eliminated at the end of batch i, and the estimations are correct, we have that
∆j ≤ √ log(2KTB)/ci ,
and so c_i ≤ log(2KTB)/∆_j^2. Hence, the number of pulls needed to get the desired bound due to Proposition 2 is (since we need to pull an arm c_i/ρ_1^2 times in order to get an estimate at distance √(log(1/δ)/c_i) with probability 1 − δ in a ρ_1-replicable manner when δ < ρ_1)
T_j ≤ c_{i+1}/ρ_1^2 = (q/ρ_1^2)(1 + c_i) ≤ (q/ρ_1^2) · (1 + log(2KTB)/∆_j^2).
This implies that the total regret is bounded by
E[R_T] ≤ 1 + (q/ρ_1^2) · Σ_{j:∆_j>0} ( ∆_j + log(2KTB)/∆_j ).
We finally set q = T 1/B and B = log(T ). Moreover, we have that ρ1 = ρ/(KB). These yield
E[R_T] ≤ (K^2 log^2(T)/ρ^2) · Σ_{j:∆_j>0} ( ∆_j + log(2KT log(T))/∆_j ).
This completes the proof.
B THE PROOF OF THEOREM 4
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 2) for the stochastic bandit problem with K arms and gaps (∆j)j∈[K] whose expected regret is
E[R_T] ≤ C · (K^2/ρ^2) · Σ_{j:∆_j>0} ( ∆_j + log(KT log(T))/∆_j ),
for some absolute numerical constant C > 0, and its running time is polynomial in K,T and 1/ρ.
To give some intuition, we begin with a non tight analysis which, however, provides the main ideas behind the actual proof.
Non-Tight Analysis. Assume that the environment has K arms with unknown means µ_i and let T be the number of rounds. Let B be the total number of batches and β > 1. We set q = T^{1/B}. In each batch i ∈ [B], we pull each arm β⌊q^i⌋ times. Hence, after the i-th batch, we will have drawn c̃_i = Σ_{1≤j≤i} β⌊q^j⌋ independent and identically distributed samples from each arm. Let us also set c_i = Σ_{1≤j≤i} ⌊q^j⌋.
Let us fix i ∈ [B]. Using Hoeffding’s bound for subgaussian concentration, the length of the confidence bound for arm j ∈ [K] that guarantees 1 − δ probability of success (in the sense that the empirical estimate µ̂j will be close to the true µj) is equal to
Ũi = √ 2 log(1/δ)/c̃i ,
when the estimator uses c̃i samples. Also, let Ui = √ 2 log(1/δ)/ci .
Assume that the active arms at the batch iteration i lie in the set Ai. Consider the estimates {µ̂(i)j }i∈[B],j∈Ai , where µ̂ (i) j is the empirical mean of arm j using c̃i samples. We will eliminate an arm j at the end of the batch iteration i if
µ̂ (i) j + Ũi ≤ max t∈Ai µ̂ (i) t − U i ,
where U i ∼ Uni[Ui/2, Ui]. For the remaining of the proof, we condition on the event E that for every arm j ∈ [K] and every batch i ∈ [B] the true mean is within Ũi from the empirical one. We first argue about the replicability of our algorithm. Consider a fixed round i (end of i-th batch) and a fixed arm j. Let i⋆ be the optimal empirical arm after the i-th batch.
Let µ̂^{(i)′}_j, µ̂^{(i)′}_{i⋆} be the empirical estimates of arms j, i⋆ after the i-th batch, under some other execution of the algorithm. We condition on the event E′ for the other execution as well. Notice that |µ̂^{(i)′}_j − µ̂^{(i)}_j| ≤ 2Ũ_i and |µ̂^{(i)′}_{i⋆} − µ̂^{(i)}_{i⋆}| ≤ 2Ũ_i. Notice that, since the randomness of U̅_i is shared, if µ̂^{(i)}_j + Ũ_i ≥ µ̂^{(i)}_{i⋆} − U̅_i + 4Ũ_i, then the arm j will not be eliminated after the i-th batch in some other execution of the algorithm as well. Similarly, if µ̂^{(i)}_j + Ũ_i < µ̂^{(i)}_{i⋆} − U̅_i − 4Ũ_i, then the arm j will get eliminated after the i-th batch in some other execution of the algorithm as well. In particular, this means that if µ̂^{(i)}_j − 2Ũ_i > µ̂^{(i)}_{i⋆} + Ũ_i − U_i/2 then the arm j will not get eliminated in some other execution of the algorithm, and if µ̂^{(i)}_j + 5Ũ_i < µ̂^{(i)}_{i⋆} − U_i then the arm j will also get eliminated in some other execution of the algorithm with probability 1 under the event E ∩ E′. We call the above two cases good since they preserve replicability. Thus, it suffices to bound the probability that the decision about arm j will be different between the two executions when we are in neither of these cases. Then, the worst-case bound due to the mass of the uniform probability measure is
16 · √(2 log(1/δ)/c̃_i) / √(2 log(1/δ)/c_i).
This implies that the probability mass of the bad event is at most 16√(c_i/c̃_i) = 16√(1/β). A union bound over all arms and batches yields that the probability that two distinct executions differ in at least one pull is
Pr[(a1, . . . , aT ) ̸= (a′1, . . . , a′T )] ≤ 16KB √ 1/β + 2δ ,
and since δ ≤ ρ it suffices to pick β = 768K^2B^2/ρ^2. We now focus on the regret of our algorithm. Let us set δ = 1/(KTB). Fix a sub-optimal arm j and assume that batch i + 1 was the last batch in which it was active. We obtain that the total number of pulls of this arm is
Tj ≤ c̃i+1 ≤ βq(1 + ci) ≤ βq(1 + 8 log(1/δ)/∆2j )
From the replicability analysis, it suffices to take β of order K^2 log^2(T)/ρ^2 and so
E[R_T] ≤ T · 1/T + E[R_T | E] = 1 + Σ_{j:∆_j>0} ∆_j E[T_j | E] ≤ C · (K^2 log^2(T)/ρ^2) · Σ_{j:∆_j>0} ( ∆_j + log(KT log(T))/∆_j ),
for some absolute constant C > 0.
Notice that the above analysis, which uses a naive union bound, does not yield the desired regret bound. We next provide a tighter analysis of the same algorithm that achieves the regret bound of Theorem 4.
Improved Analysis (The Proof of Theorem 4) In the previous analysis, we used a union bound over all arms and all batches in order to control the probability of the bad event. However, we can obtain an improved regret bound as follows. Fix a sub-optimal arm i ∈ [K] and let t be the first round that it appears in the bad event. We claim that after a constant number of rounds, this arm will be eliminated. This will shave the O(log2(T )) factor from the regret bound. Essentially, as indicated in the previous proof, the bad event corresponds to the case where the randomness of the cut-off threshold U can influence the decision of whether the algorithm eliminates an arm or not. The intuition is that during the rounds t and t+1, given that the two intervals intersected at round t, we know that the probability that they intersect again is quite small since the interval of the optimal mean is moving upwards, the interval of the sub-optimal mean is concentrating around the guess and the two estimations have been moved by at most a constant times the interval’s length.
Since the bad event occurs at round t, we know that
µ̂^{(t)}_j ∈ [ µ̂^{(t)}_{t⋆} − U_t − 5Ũ_t, µ̂^{(t)}_{t⋆} − U_t/2 + 3Ũ_t ].
In the above, µ̂^{(t)}_{t⋆} is the estimate of the optimal mean at round t, whose index is denoted by t⋆. Now assume that the bad event for arm j also occurs at round t + k. Then, we have that
µ̂^{(t+k)}_j ∈ [ µ̂^{(t+k)}_{(t+k)⋆} − U_{t+k} − 5Ũ_{t+k}, µ̂^{(t+k)}_{(t+k)⋆} − U_{t+k}/2 + 3Ũ_{t+k} ].
First, notice that since the concentration inequality under event E holds for rounds t, t + k, we have that µ̂^{(t+k)}_j ≤ µ̂^{(t)}_j + Ũ_t + Ũ_{t+k}. Thus, combining it with the above inequalities gives us
µ̂^{(t+k)}_{(t+k)⋆} − U_{t+k} − 5Ũ_{t+k} ≤ µ̂^{(t+k)}_j ≤ µ̂^{(t)}_j + Ũ_t + Ũ_{t+k} ≤ µ̂^{(t)}_{t⋆} − U_t/2 + 4Ũ_t + Ũ_{t+k}.
We now compare µ̂^{(t)}_{t⋆} and µ̂^{(t+k)}_{(t+k)⋆}. Let o denote the optimal arm. We have that
µ̂^{(t+k)}_{(t+k)⋆} ≥ µ̂^{(t+k)}_o ≥ µ_o − Ũ_{t+k} ≥ µ_{t⋆} − Ũ_{t+k} ≥ µ̂^{(t)}_{t⋆} − Ũ_t − Ũ_{t+k}.
This gives us that
µ̂^{(t)}_{t⋆} − U_{t+k} − 6Ũ_{t+k} − Ũ_t ≤ µ̂^{(t+k)}_{(t+k)⋆} − U_{t+k} − 5Ũ_{t+k}.
Thus, we have established that
µ̂^{(t)}_{t⋆} − U_{t+k} − 6Ũ_{t+k} − Ũ_t ≤ µ̂^{(t)}_{t⋆} − U_t/2 + 4Ũ_t + Ũ_{t+k} ⟹ U_{t+k} ≥ U_t/2 − 7Ũ_{t+k} − 5Ũ_t ≥ U_t/2 − 12Ũ_t.
Since β ≥ 2304, we get that 12Ũt ≤ Ut/4. Thus, we get that
Ut+k ≥ Ut/4.
Notice that U_{t+k}/U_t = √(c_t/c_{t+k}),
thus it immediately follows that
c_t/c_{t+k} ≥ 1/16 ⟹ (q^{t+1} − 1)/(q^{t+k+1} − 1) ≥ 1/16 ⟹ 16(1 − 1/q^{t+1}) ≥ q^k − 1/q^{t+1} ⟹ q^k ≤ 16 + 1/q^{t+1} ≤ 17 ⟹ k log q ≤ log 17 ⟹ k ≤ 5,
when we pick B = log(T) batches. Thus, for every arm the bad event can happen at most 6 times; by taking a union bound over the K arms, we see that the probability that our algorithm is not replicable is at most O(K·√(1/β)), so picking β = Θ(K^2/ρ^2) suffices to get the result.
C THE PROOF OF THEOREM 6
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 3) for the stochastic d-dimensional linear bandit problem with K arms whose expected regret is
E[R_T] ≤ C · (K^2/ρ^2) · √(dT log(KT)),
for some absolute numerical constant C > 0, and its running time is polynomial in d,K, T and 1/ρ.
Proof. Let c, C be the numerical constants hidden in Lemma 5, i.e., the size of the multi-set is in the interval [cd log(1/δ)/ε2, Cd log(1/δ)/ε2]. We know that the size of each batch ni ∈ [cqi, Cqi] (see Lemma 5), so by the end of the B − 1 batch we will have less than nB pulls left. Hence, the number of batches is at most B.
We first define the event E that the estimates of all arms after the end of each batch are accurate, i.e., for every active arm a at the beginning of the i-th batch, at the end of the batch we have that |⟨a, θ̂_i − θ⋆⟩| ≤ ε̃_i. Since δ = 1/(KT^2) and there are at most T batches and K active arms in each batch, a simple union bound shows that E happens with probability at least 1 − 1/T. We condition on the event E throughout the rest of the proof. We now argue about the regret bound of our algorithm. We first show that any optimal arm a∗ will not get eliminated. Indeed, consider any sub-optimal arm a ∈ [K] and any batch i ∈ [B]. Under the event E we have that
⟨a, θ̂_i⟩ − ⟨a∗, θ̂_i⟩ ≤ (⟨a, θ∗⟩ + ε̃_i) − (⟨a∗, θ∗⟩ − ε̃_i) < 2ε̃_i < ε̃_i + ε̅_i.
Next, we need to bound the number of times we pull some fixed suboptimal arm a ∈ [K]. We let ∆ = ⟨a∗ − a, θ∗⟩ denote the gap and we let i be the smallest integer such that ε_i < ∆/4. We claim that this arm will get eliminated by the end of batch i. Indeed,
⟨a∗, θ̂_i⟩ − ⟨a, θ̂_i⟩ ≥ (⟨a∗, θ∗⟩ − ε̃_i) − (⟨a, θ∗⟩ + ε̃_i) = ∆ − 2ε̃_i > 4ε_i − 2ε̃_i > ε̃_i + ε_i.
This shows that during any batch i, all the active arms have gap at most 4εi−1. Thus, the regret of the algorithm conditioned on the event E is at most
Σ_{i=1}^B 4 n_i ε_{i−1} ≤ 4βC Σ_{i=1}^B q^i √(d log(KT^2)/q^{i−1}) ≤ 6βCq √(d log(KT)) Σ_{i=0}^{B−1} q^{i/2} ≤ O( β q^{B/2+1} √(d log(KT)) ) = O( (K^2/ρ^2) q^{B/2+1} √(d log(KT)) ) = O( (K^2/ρ^2) q √(dT log(KT)) ).
Thus, the overall regret is bounded by δ · T + (1 − δ) · O( (K^2/ρ^2) q √(dT log(KT)) ) = O( (K^2/ρ^2) q √(dT log(KT)) ).
We now argue about the replicability of our algorithm. The analysis follows in a similar fashion as in Theorem 4. Let θ̂_i, θ̂′_i be the LSEs after the i-th batch under two different executions of the algorithm. We condition on the event E′ for the other execution as well. Assume that the set of active arms is the same under both executions at the beginning of batch i. Notice that since the set that is guaranteed by Lemma 5 is computed by a deterministic algorithm, both executions will pull the same arms in batch i. Consider a suboptimal arm a and let a_{i∗} = argmax_{a∈A}⟨θ̂_i, a⟩ and a′_{i∗} = argmax_{a∈A}⟨θ̂′_i, a⟩. Under the event E ∩ E′ we have that |⟨a, θ̂_i − θ̂′_i⟩| ≤ 2ε̃_i, |⟨a_{i∗}, θ̂_i − θ̂′_i⟩| ≤ 2ε̃_i, and |⟨a′_{i∗}, θ̂′_i⟩ − ⟨a_{i∗}, θ̂_i⟩| ≤ 2ε̃_i. Notice that, since the randomness of ε̅_i is shared, if ⟨a, θ̂_i⟩ + ε̃_i ≥ ⟨a_{i∗}, θ̂_i⟩ − ε̅_i + 4ε̃_i, then the arm a will not be eliminated after the i-th batch in some other execution of the algorithm as well. Similarly, if ⟨a, θ̂_i⟩ + ε̃_i < ⟨a_{i∗}, θ̂_i⟩ − ε̅_i − 4ε̃_i, then the arm a will get eliminated after the i-th batch in some other execution of the algorithm as well. In particular, this means that if ⟨a, θ̂_i⟩ − 2ε̃_i > ⟨a_{i∗}, θ̂_i⟩ + ε̃_i − ε_i/2 then the arm a will not get eliminated in some other execution of the algorithm, and if ⟨a, θ̂_i⟩ + 5ε̃_i < ⟨a_{i∗}, θ̂_i⟩ − ε_i then the arm a will also get eliminated in some other execution of the algorithm with probability 1 under the event E ∩ E′. Thus, it suffices to bound the probability that the decision about arm a will be different between the two executions when we are in neither of these cases. Then, the worst-case bound due to the mass of the uniform probability measure is
16 · √(d log(1/δ)/c̃_i) / √(d log(1/δ)/c_i).
This implies that the probability mass of the bad event is at most 16√(c_i/c̃_i) = 16√(1/β). A naive union bound would require us to pick β = Θ(K^2 log^2 T/ρ^2). We next show how to avoid the log^2 T factor. Fix a sub-optimal arm a ∈ [K] and let t be the first round that it appears in the bad event. Since the bad event occurs at round t, we know that
⟨a, θ̂_t⟩ ∈ [ ⟨a_{t∗}, θ̂_t⟩ − ε_t − 5ε̃_t, ⟨a_{t∗}, θ̂_t⟩ − ε_t/2 + 3ε̃_t ].
In the above, a_{t∗} is the optimal arm at round t w.r.t. the LSE. Now assume that the bad event for arm a also occurs at round t + k. Then, we have that
⟨a, θ̂_{t+k}⟩ ∈ [ ⟨a_{(t+k)∗}, θ̂_{t+k}⟩ − ε_{t+k} − 5ε̃_{t+k}, ⟨a_{(t+k)∗}, θ̂_{t+k}⟩ − ε_{t+k}/2 + 3ε̃_{t+k} ].
First, notice that since the concentration inequality under event E holds for rounds t, t+ k we have that ⟨a, θ̂t+k⟩ ≤ ⟨a, θ̂t⟩+ ε̃t + ε̃t+k. Thus, combining it with the above inequalities gives us ⟨a(t+k)∗ , θ̂t+k⟩− εt+k − 5ε̃t+k ≤ ⟨a, θ̂t+k⟩ ≤ ⟨a, θ̂t⟩+ ε̃t + ε̃t+k ≤ ⟨at∗ , θ̂t⟩− εt/2+ 4ε̃t + ε̃t+k. We now compare ⟨at∗ , θ̂t⟩, ⟨a(t+k)∗ , θ̂t+k⟩. Let a∗ denote the optimal arm. We have that ⟨a(t+k)∗ , θ̂t+k⟩ ≥ ⟨a∗, θ̂t+k⟩ ≥ ⟨a∗, θ∗⟩ − ε̃t+k ≥ ⟨at∗ , θ∗⟩ − ε̃t+k ≥ ⟨at∗ , θ̂t⟩ − ε̃t+k − ε̃t.
This gives us that
⟨at∗ , θ̂t⟩ − εt+k − 6ε̃t+k − ε̃t ≤ ⟨a(t+k)∗ , θ̂t+k⟩ − εt+k − 5ε̃t+k. Thus, we have established that
⟨at∗ , θ̂t⟩ − εt+k − 6ε̃t+k − ε̃t ≤ ⟨at∗ , θ̂t⟩ − εt/2 + 4ε̃t + ε̃t+k =⇒ εt+k ≥ εt/2− 7ε̃t+k − 5ε̃t ≥ εt/2− 12ε̃t.
Since β ≥ 2304, we get that 12ε̃t ≤ εt/4. Thus, we get that εt+k ≥ εt/4.
Notice that ε_{t+k}/ε_t = √(q^t/q^{t+k}), thus it immediately follows that
q^t/q^{t+k} ≥ 1/16 ⟹ q^k ≤ 16 ⟹ k log q ≤ log 16 ⟹ k ≤ 4,
when we pick B = log(T) batches. Thus, for every arm the bad event can happen at most 5 times; by taking a union bound over the K arms, we see that the probability that our algorithm is not replicable is at most O(K·√(1/β)), so picking β = Θ(K^2/ρ^2) suffices to get the result.
D NAIVE APPLICATION OF ALGORITHM 3 WITH INFINITE ACTION SPACE
We use a 1/T^{1/(4d+2)}-net that has size at most (3T)^{d/(4d+2)}. Let A′ be the new set of arms. We then run Algorithm 3 using A′. This gives us the following result, which is proved right after.
Corollary 12. Let T ∈ N, ρ ∈ (0, 1]. There is a ρ-replicable algorithm for the stochastic d-dimensional linear bandit problem with infinite arms whose expected regret is at most
E[R_T] ≤ C · (T^{(4d+1)/(4d+2)}/ρ^2) · √(d log(T)),
where C > 0 is an absolute numerical constant.
Proof. Since K ≤ (3T)^{d/(4d+2)}, we have that
T sup_{a∈A′} ⟨a, θ∗⟩ − E[ Σ_{t=1}^T ⟨a_t, θ∗⟩ ] ≤ O( ((3T)^{2d/(4d+2)}/ρ^2) · √(dT log(T · (3T)^{d/(4d+2)})) ) = O( (T^{(4d+1)/(4d+2)}/ρ^2) · √(d log(T)) ).
Comparing to the best arm in A, we have that:
T sup_{a∈A} ⟨a, θ∗⟩ − E[ Σ_{t=1}^T ⟨a_t, θ∗⟩ ] = ( T sup_{a∈A} ⟨a, θ∗⟩ − T sup_{a∈A′} ⟨a, θ∗⟩ ) + ( T sup_{a∈A′} ⟨a, θ∗⟩ − E[ Σ_{t=1}^T ⟨a_t, θ∗⟩ ] ).
Our choice of the 1/T^{1/(4d+2)}-net implies that for every a ∈ A there exists some a′ ∈ A′ such that ||a − a′||_2 ≤ 1/T^{1/(4d+2)}. Thus, sup_{a∈A} ⟨a, θ∗⟩ − sup_{a′∈A′} ⟨a′, θ∗⟩ ≤ ||a − a′||_2 ||θ∗||_2 ≤ 1/T^{1/(4d+2)}. Thus, the total regret is at most
T · 1/T^{1/(4d+2)} + O( (T^{(4d+1)/(4d+2)}/ρ^2) · √(d log(T)) ) = O( (T^{(4d+1)/(4d+2)}/ρ^2) · √(d log(T)) ).
E THE PROOF OF THEOREM 10
Theorem. Let T ∈ N, ρ ∈ (0, 1]. There exists a ρ-replicable algorithm (presented in Algorithm 4) for the stochastic d-dimensional linear bandit problem with infinite action set whose expected regret is
E[RT] ≤ C · (d⁴ log(d) log² log(d) log log log(d)/ρ²) · √T log^{3/2}(T),
for some absolute numerical constant C > 0, and its running time is polynomial in T^d and 1/ρ.
Proof. First, the algorithm is ρ-replicable since in each batch we use a replicable LSE sub-routine with parameter ρ′ = ρ/B. This implies that
Pr[(a_1, ..., a_T) ≠ (a′_1, ..., a′_T)] = Pr[∃i ∈ [B] : θ̂i was not replicable] ≤ ρ. Let us fix a batch iteration i ∈ [B − 1]. Let Ci be the core set computed by Lemma 7. The algorithm first pulls each one of the arms of the i-th core set Ci
n_i = C d⁴ log(d/δ) log² log(d) log log log(d) / (ε_i² ρ′²)
times, as indicated by Lemma 9, and computes the LSE θ̂i in a replicable way using the algorithm of Lemma 9. Let E be the event that over all batches the estimations are correct. We pick δ = 1/(2|A′|T²) so that this good event holds with probability at least 1 − 1/T. Our goal is to control the expected regret, which can be written as
E[RT] = T sup_{a∈A} ⟨a, θ⋆⟩ − E[ Σ_{t=1}^T ⟨a_t, θ⋆⟩ ].
We have that
T sup_{a∈A} ⟨a, θ⋆⟩ − T sup_{a′∈A′} ⟨a′, θ⋆⟩ ≤ 1,
since A′ is a deterministic 1/T -net of A. Also, let us set the expected regret of the bounded action sub-problem as
E[R′T] = T sup_{a′∈A′} ⟨a′, θ⋆⟩ − E[ Σ_{t=1}^T ⟨a_t, θ⋆⟩ ].
We can now employ the analysis of the finite arm case. During batch i, any active arm has gap at most 4εi−1, so the instantaneous regret in any round is not more than 4εi−1. The expected regret conditional on the good event E is upper bounded by
E[R′T | E] ≤ Σ_{i=1}^B 4 M_i ε_{i−1},
where M_i is the total number of pulls in batch i (using the replicability blow-up) and ε_{i−1} is the error one would achieve by drawing q_{i−1} samples (ignoring the blow-up). Then, for some absolute constant C > 0, we have that
E[R′T | E] ≤ Σ_{i=1}^B 4 ( q_i · d³ log(d) log² log(d) log log log(d) · log²(T)/ρ² ) · √(d² log(T)/q_{i−1}),
which yields that
E[R′T | E] ≤ C · ( d⁴ log(d) log² log(d) log log log(d) · log(T) √(log(T)) / ρ² ) · S,
where we set
S := Σ_{i=1}^B q^i / q^{(i−1)/2} = q^{1/2} Σ_{i=1}^B q^{i/2} = Θ(q^{(1+B)/2}).
We pick B = log(T) and get that, if q = T^{1/B}, then S = Θ(√T). We remark that this choice of q is valid since
Σ_{i=1}^B q^i = (q^{B+1} − q)/(q − 1) = Θ(q^B) ≥ Tρ² / (d³ log(d) log² log(d) log log log(d)).
Hence, we have that
E[R′T | E] ≤ O( (d⁴ log(d) log² log(d) log log log(d)/ρ²) · √T log^{3/2}(T) ).
Note that when E does not hold, we can bound the expected regret by 1/T · T = 1. This implies that the overall regret E[RT ] ≤ 2 + E[R′T |E ] and so it satisfies the desired bound and the proof is complete.
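As an aside, the choice B = log(T) and q = T^{1/B} in the proof can be sanity-checked numerically; the snippet below is only our illustration with an arbitrary horizon T, not part of the original proof.

```python
import math

# Illustrative check (not from the paper): with B = log(T) batches and
# q = T^(1/B), we have q^B = T and q^((1+B)/2) = sqrt(T) * sqrt(q),
# i.e. S = Theta(q^((1+B)/2)) is Theta(sqrt(T)) as claimed in the proof.
T = 10**8                       # arbitrary example horizon
B = max(1, round(math.log(T)))  # number of batches
q = T ** (1.0 / B)              # roughly e, so that q^B = T
print(q ** B)                   # recovers T up to floating-point error
print((q ** ((1 + B) / 2)) / math.sqrt(T))  # ~sqrt(q), a constant independent of T
```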
F DEFERRED LEMMATA
F.1 THE PROOF OF LEMMA 8
Proof. Consider the distribution π that is a 2-approximation to the optimal G-design and has support |C| = O(d log log d). Let C′ be the set of arms in the support such that π(a) ≤ c/(d log d). We consider π̃ = (1 − x)π + x a, where a ∈ C′ and x will be specified later. Consider now the matrix V(π̃). Using the Sherman–Morrison formula, we have that
V(π̃)^{-1} = (1/(1−x)) V(π)^{-1} − x V(π)^{-1} a a^⊤ V(π)^{-1} / ( (1−x)² (1 + (1/(1−x)) ||a||²_{V(π)^{-1}}) ) = (1/(1−x)) ( V(π)^{-1} − x V(π)^{-1} a a^⊤ V(π)^{-1} / (1 − x + ||a||²_{V(π)^{-1}}) ).
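For readers who want to verify the rank-one update numerically, the generic Sherman–Morrison identity behind this step can be checked with a few lines of NumPy; the matrix and arm below are random and purely illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative check (not from the paper): V(pi_tilde) = (1 - x) V(pi) + x a a^T
# is a rank-one update of M = (1 - x) V(pi), so its inverse follows from the
# Sherman-Morrison formula applied to M and the update (x a) a^T.
rng = np.random.default_rng(0)
d, x = 4, 0.01
A = rng.normal(size=(d, 10))
V = A @ A.T                                # a generic positive-definite V(pi)
a = rng.normal(size=d)                     # an arbitrary arm
V_tilde = (1 - x) * V + x * np.outer(a, a)

base = np.linalg.inv(V) / (1 - x)          # M^{-1}
u = x * a
sm_inv = base - (base @ np.outer(u, a) @ base) / (1 + a @ base @ u)
print(np.allclose(sm_inv, np.linalg.inv(V_tilde)))  # True
```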
Consider any arm a′. Then,
||a′||²_{V(π̃)^{-1}} = (1/(1−x)) ||a′||²_{V(π)^{-1}} − (x/(1−x)) · (a^⊤ V(π)^{-1} a′)² / (1 − x + ||a||²_{V(π)^{-1}}) ≤ (1/(1−x)) ||a′||²_{V(π)^{-1}}.
Note that we apply this transformation at most O(d log log d) times. Let π̂ be the distribution we end up with. We see that
||a′||²_{V(π̂)^{-1}} ≤ (1/(1−x))^{c d log log d} ||a′||²_{V(π)^{-1}} ≤ 2 (1/(1−x))^{c d log log d} d.
Notice that there is a constant c′ such that when x = c′/(d log d) we have that (1/(1−x))^{c d log log d} ≤ 2.
Moreover, notice that the mass of every arm is at least x(1 − x)^{|C|} ≥ x − |C|x² = c′/(d log(d)) − c′′ d log log d/(d² log²(d)) ≥ c/(d log(d)), for some absolute numerical constant c > 0. This concludes the claim.
F.2 THE PROOF OF LEMMA 9
Proof. The proof works when we can treat Ω(⌈d log(1/δ)π(a)/ε²⌉) as Ω(d log(1/δ)π(a)/ε²), i.e., as long as π(a) = Ω(ε²/(d log(1/δ))). In the regime we are in, this point is handled thanks to Lemma 8. Combining the following proof with Lemma 8, we can obtain the desired result.
We underline that we work in the fixed design setting: the arms a_i are deterministically chosen independently of the rewards r_i. Assume that the core set of Lemma 7 is the set C. Fix the multi-set S = {(a_i, r_i) : i ∈ [M]}, where each arm a lies in the core set and is pulled n_a = Θ(π(a) d log(d) log(|C|/δ)/ε²) times². Hence, we have that
M = Σ_{a∈C} n_a = Θ( d log(d) log(|C|/δ)/ε² ).
Let also V = Σ_{i∈[M]} a_i a_i^⊤. The least-squares estimator can be written as
θ^{(ε)}_{LSE} = V^{-1} Σ_{i∈[M]} a_i r_i = V^{-1} Σ_{a∈C} a Σ_{i∈[n_a]} r_i(a),
where each a lies in the core set (deterministically) and ri(a) is the i-th reward generated independently by the linear regression process ⟨θ⋆, a⟩+ξ, where ξ is a fresh zero mean sub-gaussian random variable. Our goal is to reproducibly estimate the value ∑ i∈[na] ri(a) for any a. This is sufficient since two independent executions of the algorithm share the set C and na for any a. Note that the above sum is a random variable. In the following, we condition on the high-probability event that the average reward of the arm a is ε-close to the expected one, i.e., the value ⟨θ⋆, a⟩. This happens with probability at least 1− δ/(2|C|), given Ω(π(a)d log(d) log(|C|/δ)/ε2) samples from arm a ∈ C. In order to guarantee replicability, we will apply a result from Impagliazzo et al. (2022). Since we will union bound over all arms in the core set and |C| = O(d log log(d)) (via Lemma 7), we will make use of a (ρ/|C|)-replicable algorithm that gives an estimate v(a) ∈ R such that
|⟨θ⋆, a⟩ − v(a)| ≤ τ ,
with probability at least 1 − δ/(2|C|). For δ < ρ, the algorithm uses S_a = Ω( d² log(d/δ) log² log(d) log log log(d)/(ρ²τ²) ) many samples from the linear regression with fixed arm a ∈ C. Since we have conditioned on the randomness of r_i(a) for any i, we get
| (1/n_a) Σ_{i∈[n_a]} r_i(a) − v(a) | ≤ | (1/n_a) Σ_{i∈[n_a]} r_i(a) − ⟨θ∗, a⟩ | + |⟨θ∗, a⟩ − v(a)| ≤ ε + τ,
with probability at least 1 − δ/(2|C|). Hence, by repeating this approach for all arms in the core set, we set θ_{SQ} = V^{-1} Σ_{a∈C} a n_a v(a). Let us condition on the randomness of the estimate θ^{(ε)}_{LSE}. We have that
sup_{a′∈A} |⟨a′, θ_{SQ} − θ⋆⟩| ≤ sup_{a′∈A} |⟨a′, θ_{SQ} − θ^{(ε)}_{LSE}⟩| + sup_{a′∈A} |⟨a′, θ^{(ε)}_{LSE} − θ⋆⟩|.
² Recall that π(a) ≥ c/(d log(d)), for some constant c > 0, so the previous expression is Ω(log(|C|/δ)/ε²).
Note that the second term is at most ε with probability at least 1 − δ via Lemma 5. Our next goal is to tune the accuracy τ ∈ (0, 1) so that the first term yields another ε error. For the first term, we have that
sup_{a′∈A} |⟨a′, θ_{SQ} − θ^{(ε)}_{LSE}⟩| ≤ sup_{a′∈A} |⟨a′, V^{-1} Σ_{a∈C} a n_a (ε + τ)⟩|.
Note that V = (C d log(d) log(|C|/δ)/ε²) Σ_{a∈C} π(a) a a^⊤ and so V^{-1} = (ε²/(C d log(d) log(|C|/δ))) V(π)^{-1}, for some absolute constant C > 0. This implies that
sup_{a′∈A} |⟨a′, θ_{SQ} − θ^{(ε)}_{LSE}⟩| ≤ (ε + τ) sup_{a′∈A} | ⟨ a′, (ε²/(C d log(d) log(|C|/δ))) V(π)^{-1} Σ_{a∈C} (C d log(d) log(|C|/δ) π(a)/ε²) a ⟩ |.
Hence, we get that
sup_{a′∈A} |⟨a′, θ_{SQ} − θ^{(ε)}_{LSE}⟩| ≤ (ε + τ) sup_{a′∈A} | ⟨ a′, V(π)^{-1} Σ_{a∈C} π(a) a ⟩ |.
Consider a fixed arm a′ ∈ A. Then,
| ⟨ a′, V(π)^{-1} Σ_{a∈C} π(a) a ⟩ | ≤ Σ_{a∈C} π(a) |⟨a′, V(π)^{-1} a⟩|
≤ Σ_{a∈C} π(a) ( 1 + |⟨a′, V(π)^{-1} a⟩|² ) = 1 + Σ_{a∈C} π(a) |⟨a′, V(π)^{-1} a⟩|² | 1. What is the main contribution of the paper regarding reproducible policies for multi-armed bandit models?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novelty and motivation?
3. Do you have any concerns or questions regarding the regret upper bounds and their tightness?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper designs reproducible policies for the classic multi-armed bandit model and the linear bandit model. In the classic multi-armed bandit setting, the authors first design an algorithm that directly applies existing reproducible algorithms for estimating the means of random distributions, and propose a regret upper bound that is O(K² log²(T)/ρ²) larger than non-reproducible algorithms. Here K is the number of arms, T is the time horizon, and ρ is the reproducibility rate (i.e., with probability at least 1 − ρ, the algorithm will output the same sequence). Then they improve their algorithm by exploiting some specific properties of the bandit setting, and reduce the O(log²(T)) factor in the regret upper bound. As for the linear bandit case, the authors first consider the finite arm set case, and propose an algorithm that achieves a regret upper bound of Õ((K²/ρ²)√(dT)), which is also Õ(K²/ρ²) higher than existing non-reproducible algorithms. When the arm set is infinite, the authors also provide a reproducible algorithm that achieves Õ(d³/ρ²) higher regret than non-reproducible ones.
Strengths And Weaknesses
Strength
Applying reproducibility to bandit problems is an interesting and novel idea.
I checked some of the proofs in the appendix, and they seem to be correct.
Weaknesses
About the motivation.
Though the reproducibility of scientific findings is very important, I do not really understand why we need to design reproducible algorithms for bandit problems. Maybe one can claim that we can reproduce others' experimental results easily when the designed algorithms are reproducible, but in this case, I think i) the definition of reproducible (Definition 1) is too strict; and ii) the cost is too large. And I guess that ii) may be a consequence of i). In fact, to reproduce others' experimental results, we do not really need the sequence to be exactly the same.
About the regret upper bounds.
The regret upper bound seems to be not very tight. For example, when we choose ρ → 1, the algorithm does not have any reproducibility property, but we still suffer an extra K² factor in the regret upper bound. What is the reason for this?
Lack of regret lower bounds.
I guess that the regret lower bound analysis could be difficult, especially for the linear bandit case. However, I think some discussion would be very helpful here. After I read the paper, I cannot really understand the difficulty of designing reproducible algorithms, and also do not know which kinds of instances could be very hard to solve in this case. Besides, it is mentioned in the conclusion that the factor of 1/ρ² is tight (according to (Impagliazzo et al., 2022)). However, after I read that paper, I do not think its proof can be directly used for the bandit case. Maybe I miss some important things, but I think there should be more detailed explanations.
Clarity, Quality, Novelty And Reproducibility
Clarity
Section 2.1 is not very helpful for understanding. In fact, after I read this part, I guessed there is a log(1/ρ) factor in the regret upper bound, while it should be 1/ρ².
I suggest the authors give some insights about the 1/ρ² factor, so that the readers could understand the real hardness of this problem. For example, in Section 2.2, the authors could explain where the 1/ρ² factor in the complexity of ReprMeanEstimation comes from (i.e., from the random offset used in ReprMeanEstimation). After these explanations, one can understand the reason for using the random Ū_i (or ϵ̄_i) in Algorithms 2 and 3 as well.
Some other minor typos.
In the last sentence of the first paragraph, there are two "to the".
In related works, the regret upper bound in (Bubeck et al., 2012a) when there are K arms is O(√(dT log K)), but not O(d√(T log K)).
In Proposition 2, why use Ω but not O?
ICLR | Title
Neural Parameter Allocation Search
Abstract
Training neural networks requires increasing amounts of memory. Parameter sharing can reduce memory and communication costs, but existing methods assume networks have many identical layers and utilize hand-crafted sharing strategies that fail to generalize. We introduce Neural Parameter Allocation Search (NPAS), a novel task where the goal is to train a neural network given an arbitrary, fixed parameter budget. NPAS covers both low-budget regimes, which produce compact networks, as well as a novel high-budget regime, where additional capacity can be added to boost performance without increasing inference FLOPs. To address NPAS, we introduce Shapeshifter Networks (SSNs), which automatically learn where and how to share parameters in a network to support any parameter budget without requiring any changes to the architecture or loss function. NPAS and SSNs provide a complete framework for addressing generalized parameter sharing, and can also be combined with prior work for additional performance gains. We demonstrate the effectiveness of our approach using nine network architectures across four diverse tasks, including ImageNet classification and transformers.
1 INTRODUCTION
Training neural networks requires ever more computational resources, with GPU memory being a significant limitation (Rajbhandari et al., 2021). Method such as checkpointing (e.g., Chen et al., 2016; Gomez et al., 2017; Jain et al., 2020) and out-of-core algorithms (e.g., Ren et al., 2021) have been developed to reduce memory from activations and improve training efficiency. Yet even with such techniques, Rajbhandari et al. (2021) find that model parameters require significantly greater memory bandwidth than activations during training, indicating parameters are a key limit on future growth. One solution is cross-layer parameter sharing, which reduces the memory needed to store parameters, which can also reduce the cost of communicating model updates in distributed training (Lan et al., 2020; Jaegle et al., 2021) and federated learning (Konečný et al., 2016; McMahan et al., 2017), as the model is smaller, and can help avoid overfitting (Jaegle et al., 2021). However, prior work in parameter sharing (e.g., Dehghani et al., 2019; Savarese & Maire, 2019; Lan et al., 2020; Jaegle et al., 2021) has two significant limitations. First, they rely on suboptimal hand-crafted techniques for deciding where and how sharing occurs. Second, they rely on models having many identical layers. This limits the network architectures they apply to (e.g., DenseNets (Huang et al., 2017) have few such layers) and their parameter savings is only proportional to the number of identical layers.
To move beyond these limits, we introduce Neural Parameter Allocation Search (NPAS), a novel task which generalizes existing parameter sharing approaches. In NPAS, the goal is to identify where and how to distribute parameters in a neural network to produce a high-performing model using an arbitrary, fixed parameter budget and no architectural assumptions. Searching for good sharing strategies is challenging in many neural networks due to different layers requiring different numbers of parameters or weight dimensionalities, multiple layer types (e.g., convolutional, fully-connected, recurrent), and/or multiple modalities (e.g., text and images). Hand-crafted sharing approaches, as in prior work, can be seen as one implementation of NPAS, but they can be complicated to create for complex networks and have no guarantee that the sharing strategy is good. Trying all possible permutations of sharing across layers is computationally infeasible even for small networks. To our knowledge, we are the first to consider automatically searching for good parameter sharing strategies.
*indicates equal contribution
By supporting arbitrary parameter budgets, NPAS explores two novel regimes. First, while prior work considered using sharing to reduce the number of parameters (which we refer to as low-budget NPAS, LB-NPAS), we can also increase the number of parameters beyond what an architecture typically uses (high-budget NPAS, HB-NPAS). HB-NPAS can be thought of as adding capacity to the network in order to improve its performance without changing its architecture (e.g., without increasing the number of channels that would also increase computational time). Second, we consider cases where there are fewer parameters available to a layer than needed to implement the layer’s operations. For such low-budget cases, we investigate parameter upsampling methods to generate the layer’s weights.
A vast array of other techniques, including pruning (Hoefler et al., 2021), quantization (Gholami et al., 2021), knowledge distillation (Gou et al., 2021), and low-rank approximations (e.g., Wu, 2019; Phan et al., 2020) are used to reduce memory and/or FLOP requirements for a model. However, such methods typically only apply at test/inference time, and actually are more expensive to train due to requiring a fully-trained large network, in contrast to NPAS. Nevertheless, these are also orthogonal to NPAS and can be applied jointly. Indeed, we show that NPAS can be combined with pruning or distillation to produce improved networks. Figure 1 compares NPAS to closely related tasks.
To implement NPAS, we propose Shapeshifter Networks (SSNs), which can morph a given parameter budget to fit any architecture by learning where and how to share parameters. SSNs begin by learning which layers can effectively share parameters using a short pretraining step, where all layers are generated from a single shared set of parameters. Layers that use parameters in a similar way are then good candidates for sharing during the main training step. When training, SSNs generate weights for each layer by down- or upsampling the associated parameters as needed.
We demonstrate SSN’s effectiveness in high- and low-budget NPAS on a variety of networks, including vision, text, and vision-language tasks. E.g., a LB-NPAS SSN implements a WRN-50-2 (Zagoruyko & Komodakis, 2016) using 19M parameters (69M in the original) and achieves an Error@5 on ImageNet (Deng et al., 2009) 3% lower than a WRN with the same budget. Similarity, we achieve a 1% boost to SQuAD v2.0 (Rajpurkar et al., 2016) with 18M parameters (334M in the original) over ALBERT (Lan et al., 2020), prior work for parameter sharing in Transformers (Vaswani et al., 2017). For HB-NPAS, we achieve a 1–1.5% improvement in Error@1 on CIFAR (Krizhevsky, 2009) by adding capacity to a traditional network. In summary, our key contributions are:
• We introduce Neural Parameter Allocation Search (NPAS), a novel task in which the goal is to implement a given network architecture using any parameter budget. • To solve NPAS, we propose Shapeshifter Networks (SSNs), which automate parameter sharing. To our knowledge, SSNs are the first approach to automatically learn where and how to share parameters and to share parameters between layers of different sizes or types. • We benchmark SSNs for LB- and HB-NPAS and show they create high-performing networks when either using few parameters or adding network capacity. • We also show that SSNs can be combined with knowledge distillation and parameter pruning to boost performance over such methods alone.
2 NEURAL PARAMETER ALLOCATION SEARCH (NPAS)
In NPAS, the goal is to implement a neural network given a fixed parameter budget. More formally:
Neural Parameter Allocation Search (NPAS): Given a neural network architecture with layers ℓ1, . . . , ℓL, which each require weights w1, . . . , wL, and a fixed parameter budget θ, train a high-performing neural network using the given architecture and parameter budget.
Any general solution to NPAS (i.e., that works for arbitrary θ or network) must solve two subtasks:
1. Parameter mapping: Assign to each layer ℓi a subset of the available parameters.
2. Weight generation: Generate ℓi’s weights wi from its assigned parameters, which may be any size.
Prior works, such as Savarese & Maire (2019) and Ha et al. (2016), are examples of weight generation methods, but only in limited cases; e.g., Savarese & Maire (2019) does not support there being fewer parameters than weights. To our knowledge, no prior work has automated parameter mapping, instead relying on hand-crafted heuristics that do not generalize to many architectures. Note weight generation must be differentiable so gradients can be backpropagated to the underlying parameters.
NPAS naturally decomposes into two different regimes based on the parameter budget relative to what would be required by a traditional neural network (i.e., Σ_{i=1}^{L} |wi| versus |θ|):
• Low-budget (LB-NPAS), with fewer parameters than standard networks (|θ| < Σ_{i=1}^{L} |wi|). This regime has traditionally been the goal of cross-layer parameter sharing, and reduces memory at training and test time, and consequently reduces communication for distributed training.
• High-budget (HB-NPAS), with more parameters than standard networks (|θ| > Σ_{i=1}^{L} |wi|). This is, to our knowledge, a novel regime, and can be thought of as adding capacity to a network without changing the underlying architecture by allowing a layer to access more parameters.
Note, in both cases, the FLOPs required of the network do not significantly increase. Thus, HB-NPAS can significantly reduce FLOP overhead compared to larger networks.
The closest work to ours are Shared WideResNets (SWRN) (Savarese & Maire, 2019), Hypernetworks (HN) (Ha et al., 2016), and Lookup-based Convolutional Networks (LCNN) (Bagherinezhad et al., 2017). Each method demonstrated improved low-budget performance, with LCNN and SWRN focused on improving sharing across layers and HN learning to directly generate parameters. However, all require adaptation for new networks and make architectural assumptions. E.g., LCNN was designed specifically for convolutional networks, while HN and SWRN’s benefits are proportional to the number of identical layers (see Figure 3). Thus, each method supports limited architectures and parameter budgets, making them unsuited for NPAS. LCNN and HN also both come with significant computational overhead. E.g., the CNN used by Ha et al. requires 26.7M FLOPs for a forward pass on a 32×32 image, but weight generation with HN requires an additional 108.5M FLOPs (135.2M total). In contrast, our SSNs require 0.8M extra FLOPs (27.5M total, 5× fewer than HN). Across networks we consider, SSN overhead for a single image is typically 0.5–2% of total FLOPs. Note both methods generate weights once per forward pass, amortizing overhead across a batch (e.g., SSN overhead is reduced to 0.008–0.03% for batch size 64). HB-NPAS is also reminiscent of mixture-of-experts (e.g., Shazeer et al., 2017); both increase capacity without significantly increasing FLOPs, but NPAS allows this overparameterization to be learned without architectural changes required by prior work.
NPAS can be thought of as searching for efficient and effective underlying representations for a neural network. Methods have been developed for other tasks that focus on directly searching for more effective architectures (as opposed to their underlying representations). These include neural architecture search (e.g., Bashivan et al., 2019; Dong & Yang, 2019; Tan et al., 2019; Xiong et al., 2019; Zoph & Le, 2017) and modular/self-assembling networks (e.g., Alet et al., 2019; Ferran Alet, 2018; Devin et al., 2017). While these tasks create computationally efficient architectures, they do not reduce the number of parameters in a network during training like NPAS (i.e., they cannot be used to train very large networks or for federated or distributed learning applications), and indeed are computationally expensive. NPAS methods can also provide additional flexibility to architecture search by enabling them to train larger and/or deeper architectures while keeping within a fixed parameter budget. In addition, the performance of any architectures these methods create could be improved by leveraging the added capacity from excess parameters when addressing HB-NPAS.
3 SHAPESHIFTER NETWORKS FOR NPAS
We now present Shapeshifter Networks (SSNs), a framework for addressing NPAS using generalized parameter sharing to implement a neural network with an arbitrary, fixed parameter budget. Figure 2 provides an overview and example of SSNs, and we detail each aspect below. An SSN consists of a provided network architecture with layers `1,...,L, and a fixed budget of parameters θ, which are partitioned into P parameter groups (both hyperparameters) containing parameters θ1,...,P . Each layer is associated with a single parameter group, which will provide the parameters used to implement it. This mapping is learned in a preliminary training step by training a specialized SSN and clustering its layer representations (Section 3.2). To implement each layer, an SSN morphs the parameters in its associated group to generate the necessary weights; this uses downsampling (Section 3.1.1) when the group has more parameters than needed, or upsampling (Section 3.1.2) when the group has fewer parameters than needed. SSNs allow any number of parameters to “shapeshift” into a network without necessitating changes to the model’s loss, architecture, or hyperparameters, and the process can be applied automatically. Finally, we note that SSNs are simply one approach to NPAS. Appendices B-D contain ablation studies and discussion of variants we found to be less successful.
3.1 WEIGHT GENERATION
Weight generation implements a layer ℓi, which requires weights wi, using the fixed set of parameters in its associated parameter group θj. (We assume the mapping between layers and parameter groups has already been established; see Section 3.2.) There are three cases to handle:
1. |wi| = |θj| (exactly enough parameters): The parameters are used as-is.
2. |wi| < |θj| (excess parameters): We perform parameter downsampling (Section 3.1.1).
3. |wi| > |θj| (insufficient parameters): We perform parameter upsampling (Section 3.1.2).
We emphasize that, depending on how layers are mapped to parameter groups, both down- and upsampling may be required in an LB- or HB-NPAS model.
3.1.1 PARAMETER DOWNSAMPLING
When a parameter group θj provides more parameters than needed to implement a layer ℓi, we perform template-based downsampling to generate wi. To do this, we first split θj into up to K (a hyperparameter) templates T_i^1, . . . , T_i^K, where each template T_i^k is the same dimension as wi. If θj does not evenly divide into templates, we ignore excess parameters. To avoid uneven sharing of parameters between layers, the templates for each layer are constructed from θj in a round-robin fashion. These templates are then combined to produce wi; if only one template can be produced we instead use it directly. We present two different methods of learning to combine templates. To simplify presentation, we will assume there are exactly K templates used.
WAvg (Savarese & Maire, 2019) This learns a vector αi ∈ R^K which is used to produce a weighted average of the templates: wi = Σ_{k=1}^K α_i^k T_i^k. The αi are initialized orthogonally to the αs of all other
layers in the same parameter group. While efficient, this only implicitly learns similarities between layers. Empirically, we find that different layers often converge to similar αs, limiting sharing.
Emb To address this, we can instead more directly learn a representation of the layer using a layer embedding. We use a learnable vector φi ∈ R^E, where E is the size of the layer representation; we use E = 24 throughout, as we found it to work well. A linear layer, which is shared between all layers in the parameter group and parameterized by Wj ∈ R^{K×E} and bj ∈ R^K, is then used to construct an αi for the layer, which is used as in WAvg. That is, αi = Wj φi + bj and wi = Σ_{k=1}^K α_i^k T_i^k. We considered more complex methods (e.g., MLPs, nonlinearities), but they did not improve performance.
While both methods require additional parameters, this is quite small in practice. WAvg requires K additional parameters per layer. Emb requires E = 24 additional parameters per layer and KE +K = 24K +K parameters per parameter group.
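To make the two template-combination strategies concrete, the following is a minimal PyTorch-style sketch; it is our illustration rather than the released SSN code, and the tensor shapes, initialization, and the (omitted) round-robin template slicing are simplifying assumptions.

```python
import torch
import torch.nn as nn

class TemplateCombiner(nn.Module):
    """Minimal sketch of WAvg / Emb downsampling: combine K same-shaped
    templates carved out of a parameter group into one layer's weights."""

    def __init__(self, num_templates, mode="emb", emb_dim=24):
        super().__init__()
        self.mode = mode
        if mode == "wavg":
            # WAvg: one mixing vector alpha_i per layer.
            self.alpha = nn.Parameter(torch.randn(num_templates))
        else:
            # Emb: a small layer embedding phi_i plus a linear map (shared per
            # parameter group in the paper; kept per-module here for brevity).
            self.phi = nn.Parameter(torch.randn(emb_dim))
            self.to_alpha = nn.Linear(emb_dim, num_templates)

    def forward(self, templates):          # templates: (K, *weight_shape)
        alpha = self.alpha if self.mode == "wavg" else self.to_alpha(self.phi)
        # Weighted average of templates -> layer weights w_i.
        return torch.einsum("k,k...->...", alpha, templates)

# Usage: carve 3 templates of a conv kernel's shape out of a parameter group
# and combine them into the layer's weight tensor.
group = torch.randn(3 * 64 * 32 * 3 * 3)           # hypothetical parameter group
templates = group.view(3, 64, 32, 3, 3)            # simplified (no round-robin)
w = TemplateCombiner(num_templates=3)(templates)   # shape (64, 32, 3, 3)
```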
3.1.2 PARAMETER UPSAMPLING
If instead a parameter group θj provides fewer parameters than needed to implement a layer ℓi, we upsample θj to be the same size as wi. As a layer will use all of the parameters in θj, we do not use templates. We consider two methods for upsampling below.
Inter As a naïve baseline, we use bilinear interpolation to directly upsample θj. However, this could alter the patterns captured by parameters, as it effectively stretches the receptive field. In practice, we found fully-connected and recurrent layers could compensate for this warping, but it degraded convolutional layers compared to simpler approaches such as tiling θj.
Mask To address this, and avoid redundancies created by directly repeating parameters, we propose instead to use a learned mask to modify repeated parameters. For this, we first use n = ⌈|wi|/|θj|⌉ tiles of θj to be the same size as wi (discarding excess in the last tile). We then apply a separate learned mask to each tile after the first (i.e., there are n − 1 masks). All masks are a fixed “window” size, which we take to be 9 by default (to match the size of commonly-used 3× 3 kernels in CNNs), and are shared within each parameter group. To apply, masks are multiplied element-wise over sequential windows of their respective tile. While the number of additional parameters depends on the amount of upsampling required, as the masks are small, this is negligible.
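Below is a minimal sketch of how the tiling-plus-mask idea could be implemented; it is our illustration under simplifying assumptions (e.g., masks kept per-module rather than shared per parameter group), not the authors' code.

```python
import torch
import torch.nn as nn

class MaskUpsample(nn.Module):
    """Minimal sketch of Mask parameter upsampling: tile a small parameter
    group to the size of a layer's weights, multiplying every tile after the
    first by a small learned mask applied over sliding windows."""

    def __init__(self, group_size, weight_numel, window=9):
        super().__init__()
        self.n_tiles = -(-weight_numel // group_size)          # ceil division
        self.weight_numel = weight_numel
        self.window = window
        # One learned mask per extra tile.
        self.masks = nn.Parameter(torch.ones(max(self.n_tiles - 1, 0), window))

    def forward(self, group, weight_shape):
        tiles = [group]
        for mask in self.masks:                                # tiles 2..n
            n_win = -(-group.numel() // self.window)
            win_mask = mask.repeat(n_win)[: group.numel()]     # sequential windows
            tiles.append(group * win_mask)
        flat = torch.cat(tiles)[: self.weight_numel]           # drop excess
        return flat.view(weight_shape)

# Usage: build a (64, 32, 3, 3) conv weight (18432 values) from 8192 parameters.
group = torch.randn(8192)
up = MaskUpsample(group_size=8192, weight_numel=64 * 32 * 3 * 3)
w = up(group, (64, 32, 3, 3))
```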
3.2 MAPPING LAYERS TO PARAMETER GROUPS
We now discuss how SSNs can automatically learn to assign layers to parameter groups in such a way that parameters can be efficiently shared. This is in contrast to prior work on parameter sharing (e.g., Ha et al., 2016; Savarese & Maire, 2019; Jaegle et al., 2021), which required layers to be manually assigned to parameter groups. Finding an optimal mapping of layers to parameter groups is challenging, and a brute-force approach is computationally infeasible. We rely instead on SSNs learning a representation for each layer as part of the template-based parameter downsampling process, and then use this representation to identify similar layers which can effectively share parameters.
To do this, we perform a short preliminary training step in which we train a small (i.e., low parameter budget) SSN version of the model using a single parameter group and a modified means of generating templates for parameter downsampling. Specifically, for a layer ℓi, we split θ into K′ evenly-sized templates T_i^1, . . . , T_i^{K′}. Since we wish to use downsampling-based weight generation, each T_i^{k′} is then resized with bilinear interpolation to be the same size as wi. Next, we train the SSN as usual, using WAvg or Emb downsampling with the modified templates for weight generation (there is no upsampling). By using a small parameter budget and template-based weight generation where each template comes from the same underlying parameters, we encourage significant sharing between layers so we can measure the effectiveness of sharing. We found that using a budget equal to the number of weights of the largest single layer in the network works well. Further, this preliminary training step is short, and requires only 10–15% of the typical network training time.
Finally, we construct the parameter groups by clustering the learned layer representations into P groups. As the layer representation, we take the αi or φi learned for each layer by WAvg or Emb downsampling (resp.). We then use k-means clustering to group these representations into P groups, which become the parameter groups used by the full SSN.
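The clustering step can be summarized in a few lines; the sketch below assumes the per-layer vectors (the αi or φi from the preliminary run) have been collected into a dictionary, and the use of scikit-learn's KMeans is our illustrative choice rather than a detail taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def map_layers_to_groups(layer_reprs, num_groups):
    """layer_reprs: dict mapping layer name -> 1-D numpy array (the alpha_i or
    phi_i learned during the preliminary training step).
    Returns: dict mapping layer name -> parameter-group index in [0, P)."""
    names = list(layer_reprs)
    X = np.stack([layer_reprs[n] for n in names])
    labels = KMeans(n_clusters=num_groups, n_init=10, random_state=0).fit_predict(X)
    return {name: int(label) for name, label in zip(names, labels)}

# Hypothetical usage with 24-dimensional Emb representations for four layers.
reprs = {f"layer{i}": np.random.randn(24) for i in range(4)}
groups = map_layers_to_groups(reprs, num_groups=2)
```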
4 EXPERIMENTS
Our experiments include a wide variety of tasks and networks in order to demonstrate the broad applicability of NPAS and SSNs. We adapt code and data splits made available by the authors and report the average of five runs for all comparisons except ImageNet and ALBERT, which average three runs. A more detailed discussion on SSN hyperparameter settings can be found in Appendices B-D. In our paper we primarily evaluate methods based on task performance, but we demonstrate that SSNs reduce training time and memory in distributed learning settings in Appendix G.
Compared Tasks. We briefly describe each task, datasets, and evaluation metrics. For each model, we use the authors’ implementation and hyperparameters, unless noted (more details in Appendix A).
Image Classification. For image classification the goal is to recognize if an object is present in an image. This is evaluated using Error@k, i.e., the portion of times that the correct category does not appear in the top k most likely objects. We evaluate SSNs on CIFAR-10 and CIFAR100 (Krizhevsky, 2009), which are composed of 60K images of 10 and 100 categories, respectively, and ImageNet (Deng et al., 2009), which is composed of 1.2M images containing 1,000 categories. We report Error@1 on CIFAR and Error@5 for ImageNet.
Image-Sentence Retrieval. In image-sentence retrieval the goal is to match across modalities (sentences and images). This task is evaluated using Recall@K={1, 5, 10} for both cross-modal directions (six numbers), which we average for simplicity. We benchmark on Flickr30K (Young et al., 2014) which contains 30K/1K/1K images for training/testing/validation, and COCO (Lin et al., 2014), which contains 123K/1K/1K images for training/testing/validation. For both datasets each image has about five descriptive captions. We evaluate SSNs using EmbNet (Wang et al., 2016) and ADAPT-T2I (Wehrmann et al., 2020). Note that ADAPT-T2I has identical parallel layers (i.e., they need different outputs despite having the same input), which makes sharing parameters challenging.
Phrase Grounding. Given a phrase the task is to find its associated image region. Performance is measured by how often the predicted box for a phrase has at least 0.5 intersection over union with its ground truth box. We evaluate on Flickr30K Entities (Plummer et al., 2017) which augments Flickr30K with 276K bounding boxes for phrases in image captions, and ReferIt (Kazemzadeh et al., 2014), which contains 20K images that are evenly split between training/testing and 120K region descriptions. We evaluate our SSNs with SimNet (Wang et al., 2018) using the implementation from Plummer et al. (2020) that reports state-of-the-art results on this task.
Question Answering. For this task the goal is to answer a question about a textual passage. We use SQuAD v1.1 (Rajpurkar et al., 2016), which has 100K+ question/answer pairs on 500+ articles, and SQuAD v2.0 (Rajpurkar et al., 2018), which adds 50K unanswerable questions. We report F1 and EM scores on the development split. We compare our SSNs with ALBERT (Lan et al., 2020), a recent transformer architecture that incorporates extensive, manually-designed parameter sharing.
4.1 RESULTS
We begin our evaluation in low-budget (LB-NPAS) settings. Figure 3 reports results on image classification, including WRNs (Zagoruyko & Komodakis, 2016), DenseNets (Huang et al., 2017), and EfficientNets (Tan & Le, 2019); Table 1 contains results on image-sentence retrieval and phrase grounding. For each task and architecture we compare SSNs to same parameter-sized networks without sharing. In image classification, we also report results for SWRN (Savarese & Maire, 2019) sharing; but note it cannot train a WRN-28-10 or WRN-50-2 with fewer than 12M or 40M parameters, resp. We show that SSNs can create high-performing models with fewer parameters than SWRN is capable of, and actually outperform it using 25% and 60% fewer parameters on C-100 and ImageNet, resp. Table 1 demonstrates that these benefits generalize to vision-language tasks. In Table 2 we also compare SSNs with ALBERT (Lan et al., 2020), which applies manually-designed parameter sharing to BERT (Devlin et al., 2019), and find that SSN’s learned parameter sharing outperforms ALBERT. This demonstrates that SSNs can implement large networks with lower memory requirements than is possible with current methods by effectively sharing parameters.
We discuss the runtime and memory performance implications of SSNs extensively in Appendix G. In short, by reducing parameter counts, SSNs reduce communication costs and memory. For example,
our SSN-ALBERT-Large trains about 1.4× faster using 128 GPUs than BERT-Large (in line with results for ALBERT), and reduces memory requirements by about 5 GB (1/3 of total).
As mentioned before, knowledge distillation and parameter pruning can help create more efficient models at test time, although they cannot reduce memory requirements during training like SSNs. Tables 3 and 4 show our approach can be used to accomplish a similar goal as these tasks. Comparing our LB-NPAS results in Table 4 and the lowest parameter setting of HRank, we report a 1.5% gain over pruning methods even when using less than half the parameters. We note that one can think of our SSNs in the high budget setting (HB-NPAS) as mixing together a set of random initializations of a network by learning to combine the different templates. This setting’s benefit is illustrated in Table 3 and Table 4, where our HB-NPAS models report a 1-1.5% gain over training a traditional network. As a reminder, in this setting we precompute the weights of each layer once training is complete so they require no additional overhead at test time. That said, best performance on both tasks comes from combining our SSNs with prior work.
4.2 ANALYSIS OF SHAPESHIFTER NETWORKS
In this section we present ablations of the primary components of our approach. A complete ablation study, including, but not limited to comparing the number of parameter groups (Section 3.2) and number of templates (K in Section 3.1.1) can be found in Appendices B-D.
Table 5 compares the strategies generating weights from templates described in Section 3.1.1 when using a single parameter group. In these experiments we set the number of parameters as the amount required to implement the largest layer in a network. For example, the ADAPT-T2I model requires 14M parameters, but its bidirectional GRU accounts for 10M of those parameters, so all SSN variants in this experiment allocate 10M parameters. Comparing to the baseline, which involves modifying the original model’s number and/or size of filters so they have the same number of parameters as our
Table 5: Parameter downsampling comparison (Section 3.1.1) using WRN-28-10 and WRN-50-2 for C-10/100 and ImageNet, resp. Baseline adjusts the number and/or size of filters rather than share parameters. See Appendix B for additional details.
Dataset    % orig params   Reduced Baseline   SSNs (ours) WAvg   SSNs (ours) Emb
C-10       11.3%           4.22               4.00               3.84
C-100      11.3%           22.34              21.78              21.92
ImageNet   27.5%           10.08              7.38               6.69
SSNs, we see that the variants of our SSNs perform better, especially on ImageNet where we reduce Error@5 by 3%. Generally we see a slight boost to performance using Emb over WAvg. Also note that prior work in parameter sharing, e.g., SWRN (Savarese & Maire, 2019), cannot be applied to the settings in Table 5, since these settings require parameter sharing between layers of different types and different operations, or have too few parameters. E.g., the WRN-28-10 results use just under 4M parameters, but, as shown in Figure 3(a), SWRN requires a minimum of 12M parameters.
In Table 6 we investigate one of the new challenges in this work: how to upsample parameters so a large layer operation can be implemented with relatively few parameters (Section 3.1.2). For example, our SSN-WRN-28-10 results use about 0.5M parameters, but the largest layer defined in the network requires just under 4M weights. We find that using our simple learned Mask upsampling method performs well in most settings, especially when using convolutional networks. For example, on CIFAR-100 it improves Error@1 by 2.5% over the baseline, and 1.5% over using bilinear interpolation (inter). While more complex methods of upsampling may seem like they would improve performance (e.g., using an MLP, or learning combinations of basis filters), we found such approaches had two significant drawbacks. First, they can slow down training time significantly due to their complexity, so only a limited number of settings are viable. Second, in our experiments we found many had numerical stability issues for some datasets/tasks. We believe this may be due to trying to learn the local parameter patterns and the weights to combine them concurrently. Related work suggests this can be resolved by leveraging prior knowledge about what these local parameter patterns should represent (Denil et al., 2013), i.e., you define and freeze what they represent. However, prior knowledge is not available in the general case, and data-driven methods of training these local filter patterns often rely on pretraining steps of the fully-parameterized network (e.g., Denil et al., 2013). Thus, they are not suited for NPAS since they cannot train large networks with low memory requirements, but addressing this issue would be a good direction for future work.
Table 7 compares approaches for mapping layers to parameter groups using the same number of parameters as the original model. We see a small, but largely consistent improvement over using a traditional (baseline) network using SSNs. Notably, our automatically learned mappings (auto) perform on par with manual groups. This demonstrates that our automated approach can be used without loss in performance, while being applicable to any architecture, making them more flexible than hand-crafted methods. This flexibility does come with a computational cost, as our preliminary step that learns to map layers to parameter groups resulted in a 10-15% longer training time for equivalent epochs. That said, parameter sharing methods have demonstrated an ability to converge
faster (Bagherinezhad et al., 2017; Lan et al., 2020). Thus, exploring more efficient training strategies using NPAS methods like SSNs will be a good direction for future work.
Figure 4(a) compares the 3× 3 kernel filters at the early, middle, and final convolutional layers of a WRN-16-2 for a traditional neural network (no parameter sharing) and our SSNs where all layers belong to the same parameter group. We observe a correspondence between filters in the early layers, but this diverges in deeper layers. This suggests that sharing becomes more difficult in the final layers, which is consistent with two observations we made about Figure 4(b), which visualizes parameter groups used for SSN-WRN-28-10 to create 14 parameter group mappings. First, we found the learned parameter group mappings tended to share parameters between layers early in the network, opting for later layers to share no parameters. Second, the early layers tended to group layers into 3–4 parameter groups across different runs, with the remaining 10–11 parameter groups each containing a single layer. Note that these observations were consistent across different random initializations.
5 CONCLUSION
We propose NPAS, a novel task in which the goal is to implement a given, arbitrary network architecture given a fixed parameter budget. This involves identifying how to assign parameters to layers and implementing layers with their assigned parameters (which may be of any size). To address NPAS, we introduce SSNs, which automatically learn how to share parameters. SSNs benefit from parameter sharing in the low-budget regime—reducing memory and communication requirements when training—and enable a novel high-budget regime that can improve model performance. We show that SSNs boost results on ImageNet by 3% improvement in Error@5 over a same-sized network without parameter sharing. Surprisingly, we also find that parameters can be shared among very different layers. Further, we show that SSNs can be combined with knowledge distillation and parameter pruning to achieve state-of-the-art results that also reduce FLOPs at test time. One could think of SSNs as spreading the same number of parameters across more layers, increasing effective depth, which benefits generalization (Telgarsky, 2016), although this requires further exploration.
Acknowledgements. This work is funded in part by grants from the National Science Foundation and DARPA. This project received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 program (grant agreement MAELSTROM, No. 955513). N.D. is supported by the ETH Postdoctoral Fellowship. We thank the Livermore Computing facility for the use of their GPUs for some experiments.
ETHICS STATEMENT
Neural Parameter Allocation Search and Shapeshifter Networks are broad and general-purpose, and therefore applicable to many downstream tasks. It is thus challenging to identify specific cases of benefit or harm, but we note that reducing memory requirements can have broad implications. In particular, this could allow potentially harmful applications to be cheaply and widely deployed (e.g., facial recognition, surveillance) where it would otherwise be technically or economically infeasible.
REPRODUCIBILITY STATEMENT
NPAS is a task which can be implemented in many different ways; we define it formally in Section 2. SSNs, our proposed solution to NPAS, are presented in detail in Section 3, and Figure 2 provides an illustration and example for weight generation methods. Appendix A also provides a thorough discussion of the implementation details. To further aid reproducibility, we publicly release our SSN code at https://github.com/BryanPlummer/SSN.
A DESCRIPTION OF COMPARED TASKS
A.1 IMAGE-SENTENCE RETRIEVAL
In bidirectional image-sentence retrieval when a model is provided with an image the goal is to retrieve a relevant sentence and vice versa. This task is evaluated using Recall@K={1, 5, 10} for both directions (resulting in 6 numbers), which we average for simplicity. We benchmark methods on two common datasets: Flickr30K (Young et al., 2014) which contains 30K/1K/1K images for training/testing/validation, each with five descriptive captions, and MSCOCO (Lin et al., 2014), which contains 123K/1K/1K images for training/testing/validation, each image having roughly five descriptive captions.
EmbNet (Wang et al., 2016). This network uses a triplet loss to embed, into a shared semantic space, visual features for each image (computed with a 152-layer Deep Residual Network (ResNet) (He et al., 2016) trained on ImageNet (Deng et al., 2009)) and the average of the MT GrOVLE (Burns et al., 2019) language features representing each word. The network consists of two branches, one for each modality, and each branch contains two fully connected layers (for a total of four layers). We adapted the implementation of Burns et al. (Burns et al., 2019)1, and left all hyperparameters at the default settings. Specifically, we train using a batch size of 500 with an initial learning rate of 1e-4 which we decay exponentially with a gamma of 0.794 and use weight decay of 0.001. The model is trained until it has not improved performance on the validation set over the last 10 epochs. This architecture provides a simple baseline for parameter sharing with our Shapeshifter Networks (SSNs), where layers operate on two different modalities.
ADAPT-T2I (Wehrmann et al., 2020). In this approach word embeddings are aggregated using a bidirectional GRU (Cho et al., 2014) and its hidden state at each timestep is averaged to obtain a fullsentence representation. Images are represented using 36 bottom-up image region features (Anderson et al., 2018) that are passed through a fully connected layer. Then, each sentence calculates scaling and shifting parameters for the image regions using a pair of fully connected layers that both take the full-sentence representation as input. The image regions are then averaged, and a similarity score is computed between the sentence-adapted image features and the fully sentence representation. Thus, there are four layers total (3 fully connected, 1 GRU) that can share parameters, including the two parallel fully connected layers (i.e., they both take the full sentence features as input, but are expected to have different outputs). We adapted the author’s implementation and kept the default hyperparameters2. Specifically, we use a latent dimension of 1024 for our features and train with a batch size of 105 using a learning rate of 0.001. This method was selected since it achieves high performance and also included fully connected and recurrent layers, as well as having a set of parallel layers that make effectively performing cross-layer parameter sharing more challenging.
A.2 PHRASE GROUNDING
Given a phrase the goal of a phrase grounding model is to identify the image region described by the phrase. Success is achieved if the predicted box has at least 0.5 intersection over union with the ground truth box. Performance is measured using the percent of the time a phrase is accurately localized. We evaluate on two datasets: Flickr30K Entities (Plummer et al., 2017) which augments the Flickr30K dataset with 276K bounding boxes associated with phrases in the descriptive captions, and ReferIt (Kazemzadeh et al., 2014) which contains 20K images that are evenly split between training/testing and 120K region descriptions.
SimNet (Wang et al., 2018). This network contains three branches that each operate on different types of features. One branch passes image regions features computed with a 101-layer ResNet that have been fine-tuned for phrase grounding using two fully connected layers. A second branch passes MT GrOVLE features through two fully connected layers. Then, a joint representation is computed for all region-phrase pairs using an elementwise product. Finally, the third branch passes these joint features through three fully connected layers (7 total), where the final layer acts as a classifier indicating the likelihood that phrase is present in the image region. We adapt the code
1https://github.com/BryanPlummer/Two_branch_network 2https://github.com/jwehrmann/retrieval.pytorch
from Plummer et al. (2020)3 and keep all hyperparameters at their default settings. Specifically, we use a pretrained Faster R-CNN model (Ren et al., 2015) fine-tuned for phrase grounding by Plummer et al. (2020) on each dataset to extract region features. Then we encode each phrase by averaging MT GrOVLE features (Burns et al., 2019) and provide the image and phrase features as input to our model. We train our model using a learning rate of 5e-5 and a final embedding dimension of 256 until it no longer improves on the validation set for 5 epochs (typically resulting in training times of 15-20 epochs). Performing experiments on this model enables us to test how well our SSNs generalize to another task and how well they can adapt to sharing parameters with layers operating on three types of features (just vision, just language, and a joint representation).
A.3 IMAGE CLASSIFICATION
For image classification the goal is to be able to recognize if an object is present in an image. Typically this task is evaluated using Error@K, or the portion of times that the correct category doesn’t appear in the top k most likely objects. We evaluate our Shapeshifter Networks on three datasets: CIFAR-10 and CIFAR-100 (Krizhevsky, 2009), which are comprised of 60K images of 10 and 100 categories, respectively, and ImageNet (Deng et al., 2009), which is comprised of 1.2M images containing 1,000 categories. We report Error@1 for both CIFAR datasets and Error@5 for ImageNet. In these appendices, we also report Error@1 for ImageNet.
Wide Residual Network (WRN) (Zagoruyko & Komodakis, 2016). WRN modified the traditional ResNets by increasing the width k of each layer while also decreasing the depth d, which they found improved performance. Different variants are identified using WRN-d-k. Following Savarese et al. (Savarese & Maire, 2019), we evaluate our Shapeshifter Networks using WRN-28-10 for CIFAR and WRN-50-2 for ImageNet. We adapt the implementation of Savarese et al.4 and use cutout (DeVries & Taylor, 2017) for data augmentation. Specifically, on CIFAR we train our model using a batch size of 128 for 200 epochs with weight decay set at 5e-4 and an initial learning rate of 0.1 which we decay using a gamma of 0.2 at 60, 120, and 160 epochs. Unlike the vision-language models discussed earlier, these architectures include convolutional layers in addition to a fully connected layer used to implement a classifier, and also have many more layers than the shallow vision-language models.
DenseNet (Huang et al., 2017). Unlike traditional neural networks where each layer in the network is computed in sequence, every layer in a DenseNet uses feature maps from every layer that came before it. We adapt PyTorch’s official implementation5 using the hyperparameters as set in Huang et al. (Huang et al., 2017). Specifically, on CIFAR we train our model using a batch size of 96 for 300 epochs with weight decay set at 1e-4 and an initial learning rate of 0.1 which we decay using a gamma of 0.1 at 150 and 225 epochs. These networks provide insight into the effect depth has on learning SSNs, as we use a 190-layer DenseNet-BC configuration for CIFAR. However, due to their high computational cost we provide limited results testing only some settings.
EfficientNet (Tan & Le, 2019). EfficientNets are a class of model designed to balance depth, width, and input resolution in order to produce very parameter-efficient models. For ImageNet, we adapt an existing PyTorch implementation and its hyperparameters6, which are derived from the official TensorFlow version. We use the EfficientNet-B0 architecture to illustrate the impact of SSNs on very parameter-efficient, state-of-the-art models. On CIFAR-100 we use an EfficientNet with Network Deconvolution (ND) (Ye et al., 2020), which results in improved results with similar numbers of epochs for training. We use the author’s implementation7, and train each model for 100 epochs (their best performing setting). Note that our best error running different configurations of their model (35.88) is better than those in their paper (37.63), so despite the relatively low performance it is comparable to results from their paper.
3https://github.com/BryanPlummer/phrase_detection 4https://github.com/lolemacs/soft-sharing 5https://pytorch.org/hub/pytorch_vision_densenet/ 6https://rwightman.github.io/pytorch-image-models/ 7https://github.com/yechengxi/deconvolution
A.4 QUESTION ANSWERING
In question answering, a model is given a question and an associated textual passage which may contain the answer, and the goal is to predict the span of text in the passage that contains the answer. We use two versions of the Stanford Question Answering Dataset (SQuAD), SQuAD v1.1 (Rajpurkar et al., 2016), which contains 100K+ question/answer pairs on 500+ Wikipedia articles, and SQuAD v2.0, which augments SQuAD v1.1 with 50K unanswerable questions designed adversarially to be similar to standard SQuAD questions. For both datasets, we report both the F1 score, which captures the precision and recall of the chosen text span, and the Exact Match (EM) score.
ALBERT (Lan et al., 2020) ALBERT is a version of the BERT (Devlin et al., 2019) transformer architecture that applies cross-layer parameter sharing. Specifically, the parameters for all components of a transformer layer are shared among all the transformer layers in the network. ALBERT also includes a factorized embedding to further reduce parameters. We follow the methodology of BERT and ALBERT for reporting results on SQuAD, and our baseline ALBERT scores closely match those reported in the original work. This illustrates the ability of NPAS and SSNs to develop better parameter sharing methods than manually-designed systems for extremely large models.
B EXTENDED RESULTS WITH ADDITIONAL BASELINES
Below we provide additional results with more baseline methods for the three components of our SSNs: weight generator (Section B.1), parameter upsampling (Section B.4), and mapping layers to parameter groups (Section B.3). We provide ablations on the number of parameter groups and templates used by our SSNs in Section C and Section D, respectively.
B.1 ADDITIONAL METHODS THAT GENERATE LAYER WEIGHTS FROM TEMPLATES
Parameter downsampling uses the selected templates T_i^k for a layer ℓi to produce its weights wi. In Section 3.1.1 of the paper we discuss two methods of learning a combination of the T_i^k to generate wi. Below in Section B.2 we provide two simple baseline methods that directly use the candidates. Table 8 compares the baselines to the methods in the main paper that learn weighted combinations of templates, where the learned methods typically perform better than the baselines.
B.2 DIRECT TEMPLATE COMBINATION
Here we describe the strategies we employ that require no parameters to be learned by the weight generator, i.e., they operate directly on the templates T_i^k.
Round Robin (RR) reuses parameters of each template set as few times as possible. The scheme simply returns the weights at index k mod K in the (ordered) template set Ti at the kth query of a parameter group.
Candidate averaging (Avg) averages all candidates in Ti to provide a naive baseline for using multiple candidates. A significant drawback of this approach is that, if K is large, this can result in reusing parameters (across combiners) many times with no way to adapt to a specific layer, especially when the size of the parameter group is small.
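As a point of reference, both baselines can be written in a few lines. The sketch below assumes the templates of a parameter group are held in an ordered Python list of equally-shaped tensors; the function names are ours for illustration, not from the released code.

import torch

def round_robin_weights(templates, query_index):
    # Return the template at index (query_index mod K) for the k-th layer that
    # queries this parameter group; nothing is learned by the weight generator.
    return templates[query_index % len(templates)]

def averaged_weights(templates):
    # Naive baseline: average all candidates in T_i element-wise.
    return torch.stack(templates, dim=0).mean(dim=0)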
B.3 ADDITIONAL PARAMETER MAPPING RESULTS
Table 9 compares approaches that map layers to parameter groups using the same number of parameters as the original model. We see a small, but largely consistent improvement over using a traditional (baseline) network. Notably, our learned grouping methods (WAvg, Emb) perform on par with, and sometimes better than, manual mappings. However, our approach can be applied to any architecture to create a selected number of parameter groups, making it more flexible than hand-crafted methods. For example, in Table 10, we see using two groups often helps to improve performance when using very few parameters, but it is not clear how to effectively create two groups by hand for many networks.
B.4 EXTENDED PARAMETER UPSAMPLING
In Table 10 we provide extended results comparing the parameter upsampling methods. We additionally compare with a further naïve baseline of simply repeating parameters until they are the appropriate size. We find that Mask upsampling is always competitive, and typically more so when two parameter groups are used.
B.5 COMPARISON WITH HYPERNETWORKS
In Table 11 we compare our SSNs on Wide ResNets (Zagoruyko & Komodakis, 2016) to the same networks implemented using Hypernetworks (Ha et al., 2016) for CIFAR-10, using the results reported in their paper. We can see that, for the same parameter budget, SSNs outperform Hypernetworks.
C EFFECT OF THE NUMBER OF PARAMETER GROUPS P
A significant advantage of using learned mappings of layers to parameter groups, described in Section 3.2, is that our approach can support any number of parameter groups, unlike prior work that required manual grouping and/or heuristics to determine which layers shared parameters (e.g., Lan et al., 2020; Savarese & Maire, 2019). In this section we explore how the number of parameter groups
affects performance on the image classification task. We do not benchmark bidirectional retrieval and phrase grounding since networks addressing these tasks have few layers, so parameter groups are less important (as shown in Table 7).
Table 12 reports the performance of our SSNs when using different numbers P of parameter groups. We find that when training with few parameters (first line) low numbers of parameter groups work best, while when more parameters are available larger numbers of groups work better (second line). In fact, there is a significant drop in performance going from 4 to 8 groups when training with few parameters, as seen in the first line of Table 12. This is because, starting at 8 groups, some parameter groups had too few weights to implement their layers, resulting in extensive parameter upsampling. This suggests that we may be able to further improve performance when there are few parameters by developing better methods of implementing layers when too few parameters are available.
D EFFECT OF THE NUMBER OF TEMPLATES K
Table 13 reports the results using different numbers of templates. We find that varying the number of templates only has a minor impact on performance most of the time. We note that using more templates tends to lead to reduced variability between runs, making results more stable. As a reminder, however, the number of templates does not guarantee that each layer will have enough parameters to construct them. Thus, parameter groups only use this hyperparameter when many weights are available to them (i.e., they can form multiple templates for the layers they implement). This occurs for the phrase grounding and bidirectional retrieval results at the higher maximum numbers of templates.
E SCALING SSNS TO LARGER NETWORKS
Table 14 demonstrates the ability of our SSNs to significantly reduce the parameters required, and thus the memory required to implement large Wide ResNets so they fall within specific bounds. For example, Table 14(b) shows larger and deeper configurations continue to improve performance even when the number of parameters remains largely constant. Comparing the first line of Table 14(a) and the last line of Table 14(c) we see that SSN-WRN-76-12 outperforms the fully-parameterized WRN28-10 network by 0.6% on CIFAR-100 while only using just over half the parameters, and comes within 0.5% of WRN-76-12 while only using 13.0% of its parameters. We do note that using a SSN does not reduce the number of floating point operations, so although our SSN-WRN-76-12 model uses fewer parameters than the WRN-28-10, it is still slower at both test and train time. However, our results help demonstrate that SSNs can be used to implement very large networks with lower memory
requirements by effectively sharing parameters. This enables us to train larger, better-performing networks than is possible with traditional neural networks on comparable computational resources.
F IMAGE CLASSIFICATION NUMBERS
We provide raw numbers for the results in Figure 3 in Table 15 (CIFAR-100) and Table 16 (ImageNet).
G PERFORMANCE IMPLICATIONS OF NPAS AND SSNS
Our SSNs can offer several performance benefits by reducing parameter counts; notably, they can reduce memory requirements storing a model and can reduce communication costs for distributed training. We emphasize that LB-NPAS does not reduce FLOPs, as the same layer operations are implemented using fewer parameters. Should fewer FLOPs also be desired, SSNs can be combined
with other techniques, such as pruning. Additionally, we note that our implementation has not been extensively optimized, and further performance improvements could likely be achieved with additional engineering.
G.1 COMMUNICATION COSTS FOR DISTRIBUTED TRAINING
Communication for distributed data-parallel training is typically bandwidth-bound, and thus employs bandwidth-optimal allreduces, which are linear in message length (Chan et al., 2007). Thus, we expect communication time to be reduced by a factor proportional to the parameter savings achieved by NPAS, all else being equal. However, frameworks will typically execute allreduces layer-wise as soon as gradient buffers are ready to promote communication/computation overlap in backpropagation; reducing communication that is already fully overlapped is of little benefit. Performance benefits are thus sensitive to the model, implementation details, and the system being used for training.
For CNNs, we indeed observe minor performance improvements, as the number of parameters is typically small. When using 64 V100 GPUs for training WRN-50-2 on ImageNet, we see a 1.04× performance improvement in runtime per epoch when using SSNs with 10.5M parameters (15% of the original model). This is limited because most communication is overlapped. We also observe small performance improvements in some cases because we launch fewer allreduces, resulting in less demand for SMs and memory bandwidth on the GPU. These performance results are in line with prior work on communication compression for CNNs (e.g., Renggli et al., 2019).
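As a rough illustration of the scaling involved (not a measurement from the paper), the per-process volume of a bandwidth-optimal ring allreduce is roughly 2(p-1)/p times the message size (Chan et al., 2007), so raw gradient traffic shrinks in proportion to the parameter savings; how much of that appears in wall-clock time depends on how well communication was already overlapped. The sketch below reuses the WRN-50-2 parameter counts quoted in the text.

def ring_allreduce_bytes(num_params, num_procs, bytes_per_elem=4):
    # Per-process volume of a bandwidth-optimal ring allreduce (Chan et al., 2007),
    # roughly 2 * (p - 1) / p times the message size in bytes.
    return 2 * (num_procs - 1) / num_procs * num_params * bytes_per_elem

full_model = ring_allreduce_bytes(69e6, 64)    # WRN-50-2, ~69M parameters
ssn_model = ring_allreduce_bytes(10.5e6, 64)   # SSN version, ~10.5M parameters (15%)
print(full_model / ssn_model)                  # ~6.6x less gradient data moved per step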
For large transformers, however, we observe more significant performance improvements. The SSN-ALBERT-Large is about 1.4× faster using 128 GPUs than the corresponding BERT-Large model. This is in line with the original ALBERT work (Lan et al., 2020), which reported that training ALBERT-Large was 1.7× faster than BERT-Large when using 128 TPUs. Note that due to the differences in the systems for these results, they are not directly comparable.
We would also reiterate that for some applications where communication is more costly, say, for federated learning applications (e.g. McMahan et al. (2017); Konečný et al. (2016)), our approach would be even more beneficial due to the decreased message length.
G.2 MEMORY SAVINGS
LB-NPAS and SSNs reduce the number of parameters, which consequently reduces the size of the gradients and optimizer state (e.g., momentum) by the same amount. It does not reduce the storage requirements for activations, but note there is much work on recomputation to address this (e.g., Chen et al., 2016; Jain et al., 2020). Thus, the memory savings from SSNs are independent of batch size. For SSN-ALBERT-Large, we use 18M parameters (5% of BERT-Large, which contains 334M parameters). Assuming FP32 is used to store data, we save about 5 GB of memory in this case (about 1/3 of the memory used). | 1. What is the focus of the paper regarding neural networks, and what are the proposed approaches?
2. What are the strengths of the paper, particularly in its methodology and broad applicability?
3. What are the weaknesses of the paper, especially regarding its practical impact and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper's analysis, experiments, and conclusions? | Summary Of The Paper
Review | Summary Of The Paper
Parameter sharing can reduce the memory footprint of neural networks and memory bandwidth requirements, but existing methods require manually tuning the sharing strategy. This paper uses a small phase of training to cluster the learned layer representations into groups. This allows networks to be scaled from small to large parameter counts (to be clear, number of trainable parameters) without changing the model architecture. Importantly, this procedure does not change the number of FLOPs in the model.
Experiments across a wide set of tasks and networks compare this approach with either (1) SWRN (Savarese & Maire, 2019), or (2) existing hand-tuned parameter scaling from families of networks such as EfficientNet, DenseNet, or ALBERT.
Review
Strengths
I commend the authors on the extensive evaluation across multiple datasets, models, and tasks, which is convincing of the broad applicability of this approach. Not an expert in the literature, but during my investigation, the NPAS approach seems like a reasonably novel mechanism for parameter allocation and automatically learning which weights to share. Methodologically interesting approach, with implications for studies on generalizability.
Weaknesses
The practical impact of this approach is overclaimed in this paper. For example, while SSNs reach lower error compared to the EfficientNet family at the same # of trainable parameters, the EfficientNet family (and ALBERT, WRN, etc.) are designed to have lower inference time, which SSN does not achieve.
Figure 1 marks both the author's method (NPAS) and previous work in cross-layer sharing (presumably Savarese & Maire, 2019) as "make training more efficient", but the effects are dramatically different. NPAS's efficiency is in lower memory footprint, whereas prior work reduces the FLOPs and therefore the training time (Table 4, Savarese & Maire, 2019).
While shared parameters do reduce the communication during data parallel training, the authors provide no data or reasoning for the magnitude of that effect on training time. Given that CNNs already have relatively low parameter counts, I suspect that the effect on the model sync time during distributed training is minimal.
A meta-comment: parameter counts may be the wrong metric to compare model capacity in these studies, as the methods you compare against (e.g. the EfficientNet_B0 family) have reduced parameters by removing them from the network, whereas NPAS reduces them by sharing weights. For example, would path norm be more appropriate from a generalization studies perspective, as hinted in your conclusion? This is admittedly out of scope for this paper, as the paper's main claims are on efficiency.
My score could be improved by:
Figure 1: breaking out "makes training more efficient" into "Reduces FLOPs" and "reduces memory footprint" rows, and marking the methods appropriately. Citing the paper for prior work directly in the column or the caption.
In the discussion of results, acknowledging that these other methods reduce FLOPs and training time, whereas the NPAS approach does not.
Providing data on the actual effect of this reduced number of trainable parameter counts on (1) model training times in a distributed training setting, and (2) memory usage on the GPU. Are these savings practically realizable? |
ICLR | Title
Neural Parameter Allocation Search
Abstract
Training neural networks requires increasing amounts of memory. Parameter sharing can reduce memory and communication costs, but existing methods assume networks have many identical layers and utilize hand-crafted sharing strategies that fail to generalize. We introduce Neural Parameter Allocation Search (NPAS), a novel task where the goal is to train a neural network given an arbitrary, fixed parameter budget. NPAS covers both low-budget regimes, which produce compact networks, as well as a novel high-budget regime, where additional capacity can be added to boost performance without increasing inference FLOPs. To address NPAS, we introduce Shapeshifter Networks (SSNs), which automatically learn where and how to share parameters in a network to support any parameter budget without requiring any changes to the architecture or loss function. NPAS and SSNs provide a complete framework for addressing generalized parameter sharing, and can also be combined with prior work for additional performance gains. We demonstrate the effectiveness of our approach using nine network architectures across four diverse tasks, including ImageNet classification and transformers.
1 INTRODUCTION
Training neural networks requires ever more computational resources, with GPU memory being a significant limitation (Rajbhandari et al., 2021). Methods such as checkpointing (e.g., Chen et al., 2016; Gomez et al., 2017; Jain et al., 2020) and out-of-core algorithms (e.g., Ren et al., 2021) have been developed to reduce memory from activations and improve training efficiency. Yet even with such techniques, Rajbhandari et al. (2021) find that model parameters require significantly greater memory bandwidth than activations during training, indicating parameters are a key limit on future growth. One solution is cross-layer parameter sharing, which reduces the memory needed to store parameters and can also reduce the cost of communicating model updates in distributed training (Lan et al., 2020; Jaegle et al., 2021) and federated learning (Konečný et al., 2016; McMahan et al., 2017), as the model is smaller, and can help avoid overfitting (Jaegle et al., 2021). However, prior work in parameter sharing (e.g., Dehghani et al., 2019; Savarese & Maire, 2019; Lan et al., 2020; Jaegle et al., 2021) has two significant limitations. First, it relies on suboptimal hand-crafted techniques for deciding where and how sharing occurs. Second, it relies on models having many identical layers. This limits the network architectures it applies to (e.g., DenseNets (Huang et al., 2017) have few such layers), and the parameter savings is only proportional to the number of identical layers.
To move beyond these limits, we introduce Neural Parameter Allocation Search (NPAS), a novel task which generalizes existing parameter sharing approaches. In NPAS, the goal is to identify where and how to distribute parameters in a neural network to produce a high-performing model using an arbitrary, fixed parameter budget and no architectural assumptions. Searching for good sharing strategies is challenging in many neural networks due to different layers requiring different numbers of parameters or weight dimensionalities, multiple layer types (e.g., convolutional, fully-connected, recurrent), and/or multiple modalities (e.g., text and images). Hand-crafted sharing approaches, as in prior work, can be seen as one implementation of NPAS, but they can be complicated to create for complex networks and have no guarantee that the sharing strategy is good. Trying all possible permutations of sharing across layers is computationally infeasible even for small networks. To our knowledge, we are the first to consider automatically searching for good parameter sharing strategies.
*indicates equal contribution
By supporting arbitrary parameter budgets, NPAS explores two novel regimes. First, while prior work considered using sharing to reduce the number of parameters (which we refer to as low-budget NPAS, LB-NPAS), we can also increase the number of parameters beyond what an architecture typically uses (high-budget NPAS, HB-NPAS). HB-NPAS can be thought of as adding capacity to the network in order to improve its performance without changing its architecture (e.g., without increasing the number of channels that would also increase computational time). Second, we consider cases where there are fewer parameters available to a layer than needed to implement the layer’s operations. For such low-budget cases, we investigate parameter upsampling methods to generate the layer’s weights.
A vast array of other techniques, including pruning (Hoefler et al., 2021), quantization (Gholami et al., 2021), knowledge distillation (Gou et al., 2021), and low-rank approximations (e.g., Wu, 2019; Phan et al., 2020) are used to reduce memory and/or FLOP requirements for a model. However, such methods typically only apply at test/inference time, and actually are more expensive to train due to requiring a fully-trained large network, in contrast to NPAS. Nevertheless, these are also orthogonal to NPAS and can be applied jointly. Indeed, we show that NPAS can be combined with pruning or distillation to produce improved networks. Figure 1 compares NPAS to closely related tasks.
To implement NPAS, we propose Shapeshifter Networks (SSNs), which can morph a given parameter budget to fit any architecture by learning where and how to share parameters. SSNs begin by learning which layers can effectively share parameters using a short pretraining step, where all layers are generated from a single shared set of parameters. Layers that use parameters in a similar way are then good candidates for sharing during the main training step. When training, SSNs generate weights for each layer by down- or upsampling the associated parameters as needed.
We demonstrate SSN’s effectiveness in high- and low-budget NPAS on a variety of networks, including vision, text, and vision-language tasks. E.g., an LB-NPAS SSN implements a WRN-50-2 (Zagoruyko & Komodakis, 2016) using 19M parameters (69M in the original) and achieves an Error@5 on ImageNet (Deng et al., 2009) 3% lower than a WRN with the same budget. Similarly, we achieve a 1% boost on SQuAD v2.0 (Rajpurkar et al., 2018) with 18M parameters (334M in the original) over ALBERT (Lan et al., 2020), prior work for parameter sharing in Transformers (Vaswani et al., 2017). For HB-NPAS, we achieve a 1–1.5% improvement in Error@1 on CIFAR (Krizhevsky, 2009) by adding capacity to a traditional network. In summary, our key contributions are:
• We introduce Neural Parameter Allocation Search (NPAS), a novel task in which the goal is to implement a given network architecture using any parameter budget. • To solve NPAS, we propose Shapeshifter Networks (SSNs), which automate parameter sharing. To our knowledge, SSNs are the first approach to automatically learn where and how to share parameters and to share parameters between layers of different sizes or types. • We benchmark SSNs for LB- and HB-NPAS and show they create high-performing networks when either using few parameters or adding network capacity. • We also show that SSNs can be combined with knowledge distillation and parameter pruning to boost performance over such methods alone.
2 NEURAL PARAMETER ALLOCATION SEARCH (NPAS)
In NPAS, the goal is to implement a neural network given a fixed parameter budget. More formally:
Neural Parameter Allocation Search (NPAS): Given a neural network architecture with layers `1, . . . , `L, which each require weights w1, . . . , wL, and a fixed parameter budget θ, train a high-performing neural network using the given architecture and parameter budget.
Any general solution to NPAS (i.e., that works for arbitrary θ or network) must solve two subtasks:
1. Parameter mapping: Assign to each layer `i a subset of the available parameters. 2. Weight generation: Generate `i’s weights wi from its assigned parameters, which may be any size.
Prior work, such as Savarese & Maire (2019) and Ha et al. (2016), are examples of weight generation methods, but in limited cases, e.g., Savarese & Maire (2019) does not support there being fewer parameters than weights. To our knowledge, no prior work has automated parameter mapping, instead relying on hand-crafted heuristics that do not generalize to many architectures. Note weight generation must be differentiable so gradients can be backpropagated to the underlying parameters.
NPAS naturally decomposes into two different regimes based on the parameter budget relative to what would be required by a traditional neural network (i.e., ∑_{i=1}^{L} |w_i| versus |θ|):
• Low-budget (LB-NPAS), with fewer parameters than standard networks (|θ| < ∑_{i=1}^{L} |w_i|). This regime has traditionally been the goal of cross-layer parameter sharing, and reduces memory at training and test time, and consequently reduces communication for distributed training.
• High-budget (HB-NPAS), with more parameters than standard networks (|θ| > ∑_{i=1}^{L} |w_i|). This is, to our knowledge, a novel regime, and can be thought of as adding capacity to a network without changing the underlying architecture by allowing a layer to access more parameters.
Note, in both cases, the FLOPs required of the network do not significantly increase. Thus, HB-NPAS can significantly reduce FLOP overhead compared to larger networks.
The closest work to ours are Shared WideResNets (SWRN) (Savarese & Maire, 2019), Hypernetworks (HN) (Ha et al., 2016), and Lookup-based Convolutional Networks (LCNN) (Bagherinezhad et al., 2017). Each method demonstrated improved low-budget performance, with LCNN and SWRN focused on improving sharing across layers and HN learning to directly generate parameters. However, all require adaptation for new networks and make architectural assumptions. E.g., LCNN was designed specifically for convolutional networks, while HN and SWRN’s benefits are proportional to the number of identical layers (see Figure 3). Thus, each method supports limited architectures and parameter budgets, making them unsuited for NPAS. LCNN and HN also both come with significant computational overhead. E.g., the CNN used by Ha et al. requires 26.7M FLOPs for a forward pass on a 32×32 image, but weight generation with HN requires an additional 108.5M FLOPs (135.2M total). In contrast, our SSNs require 0.8M extra FLOPs (27.5M total, 5× fewer than HN). Across networks we consider, SSN overhead for a single image is typically 0.5–2% of total FLOPs. Note both methods generate weights once per forward pass, amortizing overhead across a batch (e.g., SSN overhead is reduced to 0.008–0.03% for batch size 64). HB-NPAS is also reminiscent of mixture-of-experts (e.g., Shazeer et al., 2017); both increase capacity without significantly increasing FLOPs, but NPAS allows this overparameterization to be learned without architectural changes required by prior work.
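The overhead figures quoted above can be reproduced with a short calculation; all constants are taken directly from the text, and the batch-size amortization is the same argument made there.

base_flops = 26.7e6              # forward pass of the CNN from Ha et al. on a 32x32 image
hn_total = base_flops + 108.5e6  # 135.2M FLOPs in total with Hypernetwork weight generation
ssn_total = base_flops + 0.8e6   # 27.5M FLOPs in total with SSN weight generation
print(hn_total / ssn_total)      # ~4.9x, i.e., the roughly 5x gap quoted above

# Weight generation runs once per forward pass, so a batch of 64 images amortizes
# the extra 0.8M FLOPs to ~0.0125M FLOPs per image.
print(0.8e6 / 64)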
NPAS can be thought of as searching for efficient and effective underlying representations for a neural network. Methods have been developed for other tasks that focus on directly searching for more effective architectures (as opposed to their underlying representations). These include neural architecture search (e.g., Bashivan et al., 2019; Dong & Yang, 2019; Tan et al., 2019; Xiong et al., 2019; Zoph & Le, 2017) and modular/self-assembling networks (e.g., Alet et al., 2019; Ferran Alet, 2018; Devin et al., 2017). While these tasks create computationally efficient architectures, they do not reduce the number of parameters in a network during training like NPAS (i.e., they cannot be used to train very large networks or for federated or distributed learning applications), and indeed are computationally expensive. NPAS methods can also provide additional flexibility to architecture search by enabling them to train larger and/or deeper architectures while keeping within a fixed parameter budget. In addition, the performance of any architectures these methods create could be improved by leveraging the added capacity from excess parameters when addressing HB-NPAS.
3 SHAPESHIFTER NETWORKS FOR NPAS
We now present Shapeshifter Networks (SSNs), a framework for addressing NPAS using generalized parameter sharing to implement a neural network with an arbitrary, fixed parameter budget. Figure 2 provides an overview and example of SSNs, and we detail each aspect below. An SSN consists of a provided network architecture with layers `1,...,L, and a fixed budget of parameters θ, which are partitioned into P parameter groups (both hyperparameters) containing parameters θ1,...,P . Each layer is associated with a single parameter group, which will provide the parameters used to implement it. This mapping is learned in a preliminary training step by training a specialized SSN and clustering its layer representations (Section 3.2). To implement each layer, an SSN morphs the parameters in its associated group to generate the necessary weights; this uses downsampling (Section 3.1.1) when the group has more parameters than needed, or upsampling (Section 3.1.2) when the group has fewer parameters than needed. SSNs allow any number of parameters to “shapeshift” into a network without necessitating changes to the model’s loss, architecture, or hyperparameters, and the process can be applied automatically. Finally, we note that SSNs are simply one approach to NPAS. Appendices B-D contain ablation studies and discussion of variants we found to be less successful.
3.1 WEIGHT GENERATION
Weight generation implements a layer `i, which requires weights wi, using the fixed set of parameters in its associated parameter group θj . (We assume the mapping between layers and parameter groups has already been established; see Section 3.2.) There are three cases to handle:
1. |w_i| = |θ_j| (exactly enough parameters): The parameters are used as-is.
2. |w_i| < |θ_j| (excess parameters): We perform parameter downsampling (Section 3.1.1).
3. |w_i| > |θ_j| (insufficient parameters): We perform parameter upsampling (Section 3.1.2).
We emphasize that, depending on how layers are mapped to parameter groups, both down- and upsampling may be required in an LB- or HB-NPAS model.
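A minimal sketch of this dispatch is shown below; the downsample and upsample callables stand in for the procedures of Sections 3.1.1 and 3.1.2, and the function name is illustrative rather than from the released code.

import math

def generate_weights(layer_shape, group_params, downsample, upsample):
    # layer_shape: the shape of the weights w_i a layer needs.
    # group_params: the flat tensor of parameters theta_j assigned to the layer.
    needed = math.prod(layer_shape)
    if group_params.numel() == needed:   # exactly enough parameters
        return group_params.view(layer_shape)
    if group_params.numel() > needed:    # excess parameters
        return downsample(group_params, layer_shape)
    return upsample(group_params, layer_shape)  # insufficient parameters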
3.1.1 PARAMETER DOWNSAMPLING
When a parameter group θ_j provides more parameters than needed to implement a layer ℓ_i, we perform template-based downsampling to generate w_i. To do this, we first split θ_j into up to K (a hyperparameter) templates T_i^1, ..., T_i^K, where each template T_i^k is the same dimension as w_i. If θ_j does not evenly divide into templates, we ignore excess parameters. To avoid uneven sharing of parameters between layers, the templates for each layer are constructed from θ_j in a round-robin fashion. These templates are then combined to produce w_i; if only one template can be produced we instead use it directly. We present two different methods of learning to combine templates. To simplify presentation, we will assume there are exactly K templates used.
WAvg (Savarese & Maire, 2019) This learns a vector α_i ∈ R^K which is used to produce a weighted average of the templates: w_i = ∑_{k=1}^{K} α_i^k T_i^k. The α_i are initialized orthogonally to the αs of all other layers in the same parameter group. While efficient, this only implicitly learns similarities between layers. Empirically, we find that different layers often converge to similar αs, limiting sharing.
Emb To address this, we can instead more directly learn a representation of the layer using a layer embedding. We use a learnable vector φ_i ∈ R^E, where E is the size of the layer representation; we use E = 24 throughout, as we found it to work well. A linear layer, which is shared between all layers in the parameter group and parameterized by W_j ∈ R^{K×E} and b_j ∈ R^K, is then used to construct an α_i for the layer, which is used as in WAvg. That is, α_i = W_j φ_i + b_j and w_i = ∑_{k=1}^{K} α_i^k T_i^k. We considered more complex methods (e.g., MLPs, nonlinearities), but they did not improve performance.
While both methods require additional parameters, this is quite small in practice. WAvg requires K additional parameters per layer. Emb requires E = 24 additional parameters per layer and KE + K = 25K parameters per parameter group.
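A PyTorch sketch of the downsampling path is given below. The contiguous slicing used to form templates is a simplification of the round-robin scheme described above, and the module structure is our own illustration rather than the released implementation.

import math
import torch
import torch.nn as nn

def make_templates(group_params, layer_shape, K):
    # Split theta_j into up to K templates, each the size of w_i, ignoring any
    # excess parameters. (The paper cycles through theta_j round-robin so layers
    # do not all reuse the same prefix; we simply take contiguous chunks here.)
    needed = math.prod(layer_shape)
    K = max(1, min(K, group_params.numel() // needed))
    return group_params[:K * needed].view(K, *layer_shape)

class WAvgCombiner(nn.Module):
    # w_i = sum_k alpha_i[k] * T_i[k], with one learnable alpha vector per layer
    # (the paper initializes the alphas within a group orthogonally; omitted here).
    def __init__(self, K):
        super().__init__()
        self.alpha = nn.Parameter(torch.randn(K))

    def forward(self, templates):  # templates: (K, *layer_shape)
        return torch.einsum("k,k...->...", self.alpha, templates)

class EmbCombiner(nn.Module):
    # alpha_i = W_j phi_i + b_j, then w_i = sum_k alpha_i[k] * T_i[k]; the linear
    # layer is shared by all layers in the parameter group, phi_i is per layer.
    def __init__(self, K, E=24):
        super().__init__()
        self.phi = nn.Parameter(torch.randn(E))
        self.to_alpha = nn.Linear(E, K)

    def forward(self, templates):
        alpha = self.to_alpha(self.phi)
        return torch.einsum("k,k...->...", alpha, templates)

A layer's weights would then be produced as, e.g., WAvgCombiner(K)(make_templates(theta_j, layer_shape, K)).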
3.1.2 PARAMETER UPSAMPLING
If instead a parameter group θj provides fewer parameters than needed to implement a layer `i, we upsample θj to be the same size as wi. As a layer will use all of the parameters in θj , we do not use templates. We consider two methods for upsampling below.
Inter As a naïve baseline, we use bilinear interpolation to directly upsample θ_j. However, this could alter the patterns captured by parameters, as it effectively stretches the receptive field. In practice, we found fully-connected and recurrent layers could compensate for this warping, but it degraded convolutional layers compared to simpler approaches such as tiling θ_j.
Mask To address this, and avoid redundancies created by directly repeating parameters, we propose instead to use a learned mask to modify repeated parameters. For this, we first use n = ⌈|w_i| / |θ_j|⌉ tiles of θ_j to be the same size as w_i (discarding excess in the last tile). We then apply a separate learned mask to each tile after the first (i.e., there are n − 1 masks). All masks are a fixed “window” size, which we take to be 9 by default (to match the size of commonly-used 3 × 3 kernels in CNNs), and are shared within each parameter group. To apply, masks are multiplied element-wise over sequential windows of their respective tile. While the number of additional parameters depends on the amount of upsampling required, as the masks are small, this is negligible.
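The two options can be sketched as follows for a flat parameter vector; the handling of the final partial window and tile is simplified relative to whatever the released code does, and the masks are assumed to be learnable tensors of length window owned by the parameter group.

import torch
import torch.nn.functional as F

def inter_upsample(group_params, num_weights):
    # Naive baseline: interpolate theta_j up to the needed size (1-D here,
    # standing in for the bilinear interpolation described above).
    x = group_params.view(1, 1, -1)
    return F.interpolate(x, size=num_weights, mode="linear", align_corners=False).view(-1)

def mask_upsample(group_params, num_weights, masks, window=9):
    # Tile theta_j n = ceil(|w_i| / |theta_j|) times; every tile after the first
    # is modulated element-wise by a learned mask slid over consecutive windows.
    p = group_params
    n = -(-num_weights // p.numel())          # ceiling division
    usable = (p.numel() // window) * window   # full windows only, for brevity
    tiles = [p]
    for m in masks[:n - 1]:                   # masks: learnable tensors of shape (window,)
        masked = (p[:usable].view(-1, window) * m).reshape(-1)
        tiles.append(torch.cat([masked, p[usable:]]))
    return torch.cat(tiles)[:num_weights]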
3.2 MAPPING LAYERS TO PARAMETER GROUPS
We now discuss how SSNs can automatically learn to assign layers to parameter groups in such a way that parameters can be efficiently shared. This is in contrast to prior work on parameter sharing (e.g., Ha et al., 2016; Savarese & Maire, 2019; Jaegle et al., 2021), which required layers to be manually assigned to parameter groups. Finding an optimal mapping of layers to parameter groups is challenging, and a brute-force approach is computationally infeasible. We rely instead on SSNs learning a representation for each layer as part of the template-based parameter downsampling process, and then use this representation to identify similar layers which can effectively share parameters.
To do this, we perform a short preliminary training step in which we train a small (i.e., low parameter budget) SSN version of the model using a single parameter group and a modified means of generating templates for parameter downsampling. Specifically, for a layer ℓ_i, we split θ into K′ evenly-sized templates T_i^1, ..., T_i^{K′}. Since we wish to use downsampling-based weight generation, each T_i^{k′} is then resized with bilinear interpolation to be the same size as w_i. Next, we train the SSN as usual, using WAvg or Emb downsampling with the modified templates for weight generation (there is no upsampling). By using a small parameter budget and template-based weight generation where each template comes from the same underlying parameters, we encourage significant sharing between layers so we can measure the effectiveness of sharing. We found that using a budget equal to the number of weights of the largest single layer in the network works well. Further, this preliminary training step is short, and requires only 10–15% of the typical network training time.
Finally, we construct the parameter groups by clustering the learned layer representations into P groups. As the layer representation, we take the αi or φi learned for each layer by WAvg or Emb downsampling (resp.). We then use k-means clustering to group these representations into P groups, which become the parameter groups used by the full SSN.
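The grouping step itself is small once the preliminary run has produced one vector per layer (α_i for WAvg or φ_i for Emb); the sketch below uses scikit-learn's KMeans, which stands in for whichever k-means implementation the authors used.

import numpy as np
from sklearn.cluster import KMeans

def map_layers_to_groups(layer_reprs, P, seed=0):
    # layer_reprs: array of shape (num_layers, repr_dim) holding the alpha_i or
    # phi_i vectors learned during the short preliminary training step.
    labels = KMeans(n_clusters=P, n_init=10, random_state=seed).fit_predict(layer_reprs)
    return {layer: int(group) for layer, group in enumerate(labels)}

# e.g., groups = map_layers_to_groups(np.stack(per_layer_phi), P=14)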
4 EXPERIMENTS
Our experiments include a wide variety of tasks and networks in order to demonstrate the broad applicability of NPAS and SSNs. We adapt code and data splits made available by the authors and report the average of five runs for all comparisons except ImageNet and ALBERT, which average three runs. A more detailed discussion on SSN hyperparameter settings can be found in Appendices B-D. In our paper we primarily evaluate methods based on task performance, but we demonstrate that SSNs reduce training time and memory in distributed learning settings in Appendix G.
Compared Tasks. We briefly describe each task, datasets, and evaluation metrics. For each model, we use the authors’ implementation and hyperparameters, unless noted (more details in Appendix A).
Image Classification. For image classification the goal is to recognize if an object is present in an image. This is evaluated using Error@k, i.e., the portion of times that the correct category does not appear in the top k most likely objects. We evaluate SSNs on CIFAR-10 and CIFAR100 (Krizhevsky, 2009), which are composed of 60K images of 10 and 100 categories, respectively, and ImageNet (Deng et al., 2009), which is composed of 1.2M images containing 1,000 categories. We report Error@1 on CIFAR and Error@5 for ImageNet.
Image-Sentence Retrieval. In image-sentence retrieval the goal is to match across modalities (sentences and images). This task is evaluated using Recall@K={1, 5, 10} for both cross-modal directions (six numbers), which we average for simplicity. We benchmark on Flickr30K (Young et al., 2014) which contains 30K/1K/1K images for training/testing/validation, and COCO (Lin et al., 2014), which contains 123K/1K/1K images for training/testing/validation. For both datasets each image has about five descriptive captions. We evaluate SSNs using EmbNet (Wang et al., 2016) and ADAPT-T2I (Wehrmann et al., 2020). Note that ADAPT-T2I has identical parallel layers (i.e., they need different outputs despite having the same input), which makes sharing parameters challenging.
Phrase Grounding. Given a phrase the task is to find its associated image region. Performance is measured by how often the predicted box for a phrase has at least 0.5 intersection over union with its ground truth box. We evaluate on Flickr30K Entities (Plummer et al., 2017) which augments Flickr30K with 276K bounding boxes for phrases in image captions, and ReferIt (Kazemzadeh et al., 2014), which contains 20K images that are evenly split between training/testing and 120K region descriptions. We evaluate our SSNs with SimNet (Wang et al., 2018) using the implementation from Plummer et al. (2020) that reports state-of-the-art results on this task.
Question Answering. For this task the goal is to answer a question about a textual passage. We use SQuAD v1.1 (Rajpurkar et al., 2016), which has 100K+ question/answer pairs on 500+ articles, and SQuAD v2.0 (Rajpurkar et al., 2018), which adds 50K unanswerable questions. We report F1 and EM scores on the development split. We compare our SSNs with ALBERT (Lan et al., 2020), a recent transformer architecture that incorporates extensive, manually-designed parameter sharing.
4.1 RESULTS
We begin our evaluation in low-budget (LB-NPAS) settings. Figure 3 reports results on image classification, including WRNs (Zagoruyko & Komodakis, 2016), DenseNets (Huang et al., 2017), and EfficientNets (Tan & Le, 2019); Table 1 contains results on image-sentence retrieval and phrase grounding. For each task and architecture we compare SSNs to same parameter-sized networks without sharing. In image classification, we also report results for SWRN (Savarese & Maire, 2019) sharing; but note it cannot train a WRN-28-10 or WRN-50-2 with fewer than 12M or 40M parameters, resp. We show that SSNs can create high-performing models with fewer parameters than SWRN is capable of, and actually outperform it using 25% and 60% fewer parameters on C-100 and ImageNet, resp. Table 1 demonstrates that these benefits generalize to vision-language tasks. In Table 2 we also compare SSNs with ALBERT (Lan et al., 2020), which applies manually-designed parameter sharing to BERT (Devlin et al., 2019), and find that SSN’s learned parameter sharing outperforms ALBERT. This demonstrates that SSNs can implement large networks with lower memory requirements than is possible with current methods by effectively sharing parameters.
We discuss the runtime and memory performance implications of SSNs extensively in Appendix G. In short, by reducing parameter counts, SSNs reduce communication costs and memory. For example,
our SSN-ALBERT-Large trains about 1.4× faster using 128 GPUs than BERT-Large (in line with results for ALBERT), and reduces memory requirements by about 5 GB (1/3 of total).
As mentioned before, knowledge distillation and parameter pruning can help create more efficient models at test time, although they cannot reduce memory requirements during training like SSNs. Tables 3 and 4 show our approach can be used to accomplish a similar goal as these tasks. Comparing our LB-NPAS results in Table 4 and the lowest parameter setting of HRank, we report a 1.5% gain over pruning methods even when using less than half the parameters. We note that one can think of our SSNs in the high budget setting (HB-NPAS) as mixing together a set of random initializations of a network by learning to combine the different templates. This setting’s benefit is illustrated in Table 3 and Table 4, where our HB-NPAS models report a 1-1.5% gain over training a traditional network. As a reminder, in this setting we precompute the weights of each layer once training is complete so they require no additional overhead at test time. That said, the best performance on both tasks comes from combining our SSNs with prior work.
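Because the template combination is a fixed linear mixture once training ends, each layer's weights can be materialized a single time and stored as ordinary tensors; a minimal sketch of collapsing a WAvg-style layer is shown below (the function is ours, purely for illustration).

import torch

@torch.no_grad()
def collapse_wavg(alpha, templates):
    # Fold the learned combination into a single ordinary weight tensor so that
    # inference uses no weight generation and incurs no extra overhead.
    return torch.einsum("k,k...->...", alpha, templates).contiguous()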
4.2 ANALYSIS OF SHAPESHIFTER NETWORKS
In this section we present ablations of the primary components of our approach. A complete ablation study, including, but not limited to comparing the number of parameter groups (Section 3.2) and number of templates (K in Section 3.1.1) can be found in Appendices B-D.
Table 5 compares the strategies generating weights from templates described in Section 3.1.1 when using a single parameter group. In these experiments we set the number of parameters as the amount required to implement the largest layer in a network. For example, the ADAPT-T2I model requires 14M parameters, but its bidirectional GRU accounts for 10M of those parameters, so all SSN variants in this experiment allocate 10M parameters. Comparing to the baseline, which involves modifying the original model’s number and/or size of filters so they have the same number of parameters as our
Table 5: Parameter downsampling comparison (Section 3.1.1) using WRN-28-10 and WRN-50-2 for C-10/100 and ImageNet, resp. Baseline adjusts the number and/or size of filters rather than share parameters. See Appendix B for additional details.
Dataset | % orig params | Reduced Baseline | SSNs (ours) WAvg | SSNs (ours) Emb
C-10 | 11.3% | 4.22 | 4.00 | 3.84
C-100 | 11.3% | 22.34 | 21.78 | 21.92
ImageNet | 27.5% | 10.08 | 7.38 | 6.69
SSNs, we see that the variants of our SSNs perform better, especially on ImageNet where we reduce Error@5 by 3%. Generally we see a slight boost to performance using Emb over WAvg. Also note that prior work in parameter sharing, e.g., SWRN (Savarese & Maire, 2019), can not be applied to the settings in Table 5, since they require parameter sharing between layers of different types and different operations, or have too few parameters. E.g., the WRN-28-10 results use just under 4M parameters, but, as shown in Figure 3(a), SWRN requires a minimum of 12M parameters.
In Table 6 we investigate one of the new challenges in this work: how to upsample parameters so a large layer operation can be implemented with relatively few parameters (Section 3.1.2). For example, our SSN-WRN-28-10 results use about 0.5M parameters, but the largest layer defined in the network requires just under 4M weights. We find that our simple learned Mask upsampling method performs well in most settings, especially when using convolutional networks. For example, on CIFAR-100 it improves Error@1 by 2.5% over the baseline, and 1.5% over using bilinear interpolation (inter). While more complex methods of upsampling may seem like they would improve performance (e.g., using an MLP, or learning combinations of basis filters), we found such approaches had two significant drawbacks. First, they can slow down training time significantly due to their complexity, so only a limited number of settings are viable. Second, in our experiments we found many had numerical stability issues for some datasets/tasks. We believe this may be due to trying to learn the local parameter patterns and the weights to combine them concurrently. Related work suggests this can be resolved by leveraging prior knowledge about what these local parameter patterns should represent (Denil et al., 2013), i.e., one defines and freezes what they represent. However, prior knowledge is not available in the general case, and data-driven methods of training these local filter patterns often rely on pretraining steps of the fully-parameterized network (e.g., Denil et al., 2013). Thus, they are not suited for NPAS since they cannot train large networks with low memory requirements, but addressing this issue would be a good direction for future work.
Table 7 compares approaches for mapping layers to parameter groups using the same number of parameters as the original model. We see a small, but largely consistent improvement over using a traditional (baseline) network using SSNs. Notably, our automatically learned mappings (auto) perform on par with manual groups. This demonstrates that our automated approach can be used without loss in performance, while being applicable to any architecture, making them more flexible than hand-crafted methods. This flexibility does come with a computational cost, as our preliminary step that learns to map layers to parameter groups resulted in a 10-15% longer training time for equivalent epochs. That said, parameter sharing methods have demonstrated an ability to converge
faster (Bagherinezhad et al., 2017; Lan et al., 2020). Thus, exploring more efficient training strategies using NPAS methods like SSNs will be a good direction for future work.
Figure 4(a) compares the 3 × 3 kernel filters at the early, middle, and final convolutional layers of a WRN-16-2 for a traditional neural network (no parameter sharing) and our SSNs where all layers belong to the same parameter group. We observe a correspondence between filters in the early layers, but this diverges in deeper layers. This suggests that sharing becomes more difficult in the final layers, which is consistent with two observations we made about Figure 4(b), which visualizes the parameter groups used by SSN-WRN-28-10 to create 14 parameter group mappings. First, we found the learned parameter group mappings tended to share parameters between layers early in the network, opting for later layers to share no parameters. Second, the early layers tended to be grouped into 3–4 parameter groups across different runs, with the remaining 10–11 parameter groups each containing a single layer. Note that these observations were consistent across different random initializations.
5 CONCLUSION
We propose NPAS, a novel task in which the goal is to implement a given, arbitrary network architecture using a fixed parameter budget. This involves identifying how to assign parameters to layers and implementing layers with their assigned parameters (which may be of any size). To address NPAS, we introduce SSNs, which automatically learn how to share parameters. SSNs benefit from parameter sharing in the low-budget regime—reducing memory and communication requirements when training—and enable a novel high-budget regime that can improve model performance. We show that SSNs improve Error@5 on ImageNet by 3% over a same-sized network without parameter sharing. Surprisingly, we also find that parameters can be shared among very different layers. Further, we show that SSNs can be combined with knowledge distillation and parameter pruning to achieve state-of-the-art results that also reduce FLOPs at test time. One could think of SSNs as spreading the same number of parameters across more layers, increasing effective depth, which benefits generalization (Telgarsky, 2016), although this requires further exploration.
Acknowledgements. This work is funded in part by grants from the National Science Foundation and DARPA. This project received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 program (grant agreement MAELSTROM, No. 955513). N.D. is supported by the ETH Postdoctoral Fellowship. We thank the Livermore Computing facility for the use of their GPUs for some experiments.
ETHICS STATEMENT
Neural Parameter Allocation Search and Shapeshifter Networks are broad and general-purpose, and therefore applicable to many downstream tasks. It is thus challenging to identify specific cases of benefit or harm, but we note that reducing memory requirements can have broad implications. In particular, this could allow potentially harmful applications to be cheaply and widely deployed (e.g., facial recognition, surveillance) where it would otherwise be technically or economically infeasible.
REPRODUCIBILITY STATEMENT
NPAS is a task which can be implemented in many different ways; we define it formally in Section 2. SSNs, our proposed solution to NPAS, are presented in detail in Section 3, and Figure 2 provides an illustration and example for weight generation methods. Appendix A also provides a thorough discussion of the implementation details. To further aid reproducibility, we publicly release our SSN code at https://github.com/BryanPlummer/SSN.
A DESCRIPTION OF COMPARED TASKS
A.1 IMAGE-SENTENCE RETRIEVAL
In bidirectional image-sentence retrieval when a model is provided with an image the goal is to retrieve a relevant sentence and vice versa. This task is evaluated using Recall@K={1, 5, 10} for both directions (resulting in 6 numbers), which we average for simplicity. We benchmark methods on two common datasets: Flickr30K (Young et al., 2014) which contains 30K/1K/1K images for training/testing/validation, each with five descriptive captions, and MSCOCO (Lin et al., 2014), which contains 123K/1K/1K images for training/testing/validation, each image having roughly five descriptive captions.
EmbNet (Wang et al., 2016). This network uses a triplet loss to embed into a shared semantic space both visual features for each image, computed using a 152-layer Deep Residual Network (ResNet) (He et al., 2016) that has been trained on ImageNet (Deng et al., 2009), and the average of MT GrOVLE (Burns et al., 2019) language features representing each word. The network consists of two branches, one for each modality, and each branch contains two fully connected layers (for a total of four layers). We adapted the implementation of Burns et al. (Burns et al., 2019)1, and left all hyperparameters at the default settings. Specifically, we train using a batch size of 500 with an initial learning rate of 1e-4 which we decay exponentially with a gamma of 0.794 and use weight decay of 0.001. The model is trained until it has not improved performance on the validation set over the last 10 epochs. This architecture provides a simple baseline for parameter sharing with our Shapeshifter Networks (SSNs), where layers operate on two different modalities.
ADAPT-T2I (Wehrmann et al., 2020). In this approach word embeddings are aggregated using a bidirectional GRU (Cho et al., 2014) and its hidden state at each timestep is averaged to obtain a full-sentence representation. Images are represented using 36 bottom-up image region features (Anderson et al., 2018) that are passed through a fully connected layer. Then, each sentence calculates scaling and shifting parameters for the image regions using a pair of fully connected layers that both take the full-sentence representation as input. The image regions are then averaged, and a similarity score is computed between the sentence-adapted image features and the full-sentence representation. Thus, there are four layers total (3 fully connected, 1 GRU) that can share parameters, including the two parallel fully connected layers (i.e., they both take the full-sentence features as input, but are expected to have different outputs). We adapted the author’s implementation and kept the default hyperparameters2. Specifically, we use a latent dimension of 1024 for our features and train with a batch size of 105 using a learning rate of 0.001. This method was selected since it achieves high performance and also includes fully connected and recurrent layers, as well as having a set of parallel layers that make effectively performing cross-layer parameter sharing more challenging.
A.2 PHRASE GROUNDING
Given a phrase the goal of a phrase grounding model is to identify the image region described by the phrase. Success is achieved if the predicted box has at least 0.5 intersection over union with the ground truth box. Performance is measured using the percent of the time a phrase is accurately localized. We evaluate on two datasets: Flickr30K Entities (Plummer et al., 2017) which augments the Flickr30K dataset with 276K bounding boxes associated with phrases in the descriptive captions, and ReferIt (Kazemzadeh et al., 2014) which contains 20K images that are evenly split between training/testing and 120K region descriptions.
SimNet (Wang et al., 2018). This network contains three branches that each operate on different types of features. One branch passes image region features, computed with a 101-layer ResNet that has been fine-tuned for phrase grounding, through two fully connected layers. A second branch passes MT GrOVLE features through two fully connected layers. Then, a joint representation is computed for all region-phrase pairs using an elementwise product. Finally, the third branch passes these joint features through three fully connected layers (7 total), where the final layer acts as a classifier indicating the likelihood that the phrase is present in the image region. We adapt the code
1https://github.com/BryanPlummer/Two_branch_network 2https://github.com/jwehrmann/retrieval.pytorch
from Plummer et al. (2020)3 and keep all hyperparameters at their default settings. Specifically, we use a pretrained Faster R-CNN model (Ren et al., 2015) fine-tuned for phrase grounding by Plummer et al. (2020) on each dataset to extract region features. Then we encode each phrase by averaging MT GrOVLE features (Burns et al., 2019) and provide the image and phrase features as input to our model. We train our model using a learning rate of 5e-5 and a final embedding dimension of 256 until it no longer improves on the validation set for 5 epochs (typically resulting in training times of 15-20 epochs). Performing experiments on this model enables us to test how well our SSNs generalize to another task and how well they can adapt to sharing parameters with layers operating on three types of features (just vision, just language, and a joint representation).
A.3 IMAGE CLASSIFICATION
For image classification the goal is to be able to recognize if an object is present in an image. Typically this task is evaluated using Error@K, or the portion of times that the correct category does not appear in the top K most likely objects. We evaluate our Shapeshifter Networks on three datasets: CIFAR-10 and CIFAR-100 (Krizhevsky, 2009), which are composed of 60K images of 10 and 100 categories, respectively, and ImageNet (Deng et al., 2009), which is composed of 1.2M images containing 1,000 categories. We report Error@1 for both CIFAR datasets and Error@5 for ImageNet. In these appendices, we also report Error@1 for ImageNet.
Wide Residual Network (WRN) (Zagoruyko & Komodakis, 2016). WRN modified the traditional ResNets by increasing the width k of each layer while also decreasing the depth d, which they found improved performance. Different variants are identified using WRN-d-k. Following Savarese et al. (Savarese & Maire, 2019), we evaluate our Shapeshifter Networks using WRN-28-10 for CIFAR and WRN-50-2 for ImageNet. We adapt the implementation of Savarese et al.4 and use cutout (DeVries & Taylor, 2017) for data augmentation. Specifically, on CIFAR we train our model using a batch size of 128 for 200 epochs with weight decay set at 5e-4 and an initial learning rate of 0.1 which we decay using a gamma of 0.2 at 60, 120, and 160 epochs. Unlike the vision-language models discussed earlier, these architectures include convolutional layers in addition to a fully connected layer used to implement a classifier, and also have many more layers than the shallow vision-language models.
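For reference, the CIFAR schedule above maps onto the following PyTorch setup; the SGD momentum of 0.9 is our assumption, since the text lists only the batch size, epoch count, weight decay, and learning-rate schedule.

import torch

def make_cifar_wrn_optimizer(model):
    # Batch size 128 and 200 epochs as stated above; momentum is assumed, not quoted.
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[60, 120, 160], gamma=0.2)
    return opt, sched  # step the scheduler once per epoch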
DenseNet (Huang et al., 2017). Unlike traditional neural networks where each layer in the network is computed in sequence, every layer in a DenseNet uses feature maps from every layer that came before it. We adapt PyTorch’s official implementation5 using the hyperparameters as set in Huang et al. (Huang et al., 2017). Specifically, on CIFAR we train our model using a batch size of 96 for 300 epochs with weight decay set at 1e-4 and an initial learning rate of 0.1 which we decay using a gamma of 0.1 at epochs 150 and 225. These networks provide insight into the effect depth has on learning SSNs, as we use a 190-layer DenseNet-BC configuration for CIFAR. However, due to their high computational cost we provide limited results testing only some settings.
EfficientNet (Tan & Le, 2019). EfficientNets are a class of model designed to balance depth, width, and input resolution in order to produce very parameter-efficient models. For ImageNet, we adapt an existing PyTorch implementation and its hyperparameters6, which are derived from the official TensorFlow version. We use the EfficientNet-B0 architecture to illustrate the impact of SSNs on very parameter-efficient, state-of-the-art models. On CIFAR-100 we use an EfficientNet with Network Deconvolution (ND) (Ye et al., 2020), which improves results with a similar number of training epochs. We use the author’s implementation7, and train each model for 100 epochs (their best performing setting). Note that our best error across different configurations of their model (35.88) is better than the error reported in their paper (37.63), so despite the relatively low absolute performance, our results are comparable to theirs.
3https://github.com/BryanPlummer/phrase_detection 4https://github.com/lolemacs/soft-sharing 5https://pytorch.org/hub/pytorch_vision_densenet/ 6https://rwightman.github.io/pytorch-image-models/ 7https://github.com/yechengxi/deconvolution
A.4 QUESTION ANSWERING
In question answering, a model is given a question and an associated textual passage which may contain the answer, and the goal is to predict the span of text in the passage that contains the answer. We use two versions of the Stanford Question Answering Dataset (SQuAD), SQuAD v1.1 (Rajpurkar et al., 2016), which contains 100K+ question/answer pairs on 500+ Wikipedia articles, and SQuAD v2.0, which augments SQuAD v1.1 with 50K unanswerable questions designed adversarially to be similar to standard SQuAD questions. For both datasets, we report both the F1 score, which captures the precision and recall of the chosen text span, and the Exact Match (EM) score.
ALBERT (Lan et al., 2020) ALBERT is a version of the BERT (Devlin et al., 2019) transformer architecture that applies cross-layer parameter sharing. Specifically, the parameters for all components of a transformer layer are shared among all the transformer layers in the network. ALBERT also includes a factorized embedding to further reduce parameters. We follow the methodology of BERT and ALBERT for reporting results on SQuAD, and our baseline ALBERT scores closely match those reported in the original work. This illustrates the ability of NPAS and SSNs to develop better parameter sharing methods than manually-designed systems for extremely large models.
B EXTENDED RESULTS WITH ADDITIONAL BASELINES
Below we provide additional results with more baseline methods for the three components of our SSNs: weight generator (Section B.1), parameter upsampling (Section B.4), and mapping layers to parameter groups (Section B.3). We provide ablations on the number of parameter groups and templates used by our SSNs in Section C and Section D, respectively.
B.1 ADDITIONAL METHODS THAT GENERATE LAYER WEIGHTS FROM TEMPLATES
Parameter downsampling uses the selected templates $T_i^k$ for a layer $\ell_i$ to produce its weights $w_i$. In Section 3.1.1 of the paper we discuss two methods of learning a combination of the $T_i^k$ to generate $w_i$. Below in Section B.2 we provide two simple baseline methods that directly use the candidates. Table 8 compares the baselines to the methods in the main paper that learn weighted combinations of templates, where the learned methods typically perform better than the baselines.
B.2 DIRECT TEMPLATE COMBINATION
Here we describe the strategies we employ that require no parameters to be learned by the weight generator, i.e., they operate directly on the templates $T_i^k$.
Round Robin (RR) reuses the parameters of each template set as few times as possible. The scheme simply returns the weights at index $k \bmod K$ in the (ordered) template set $T_i$ at the $k$-th query of a parameter group.
Candidate averaging (Avg) averages all candidates in $T_i$ to provide a naive baseline for using multiple candidates. A significant drawback of this approach is that, if $K$ is large, this can result in reusing parameters (across combiners) many times with no way to adapt to a specific layer, especially when the size of the parameter group is small.
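For concreteness, a minimal sketch of these two parameter-free baselines is shown below. The list-of-templates representation and the way the query index is tracked are our own illustrative assumptions rather than the exact implementation.

```python
import torch

def round_robin(templates, query_idx):
    # templates: list of K tensors, each already shaped like the layer's weights w_i.
    # The k-th layer to query this parameter group gets template (k mod K) directly.
    return templates[query_idx % len(templates)]

def candidate_average(templates):
    # Naive baseline: every layer in the group receives the mean of all candidates,
    # with no way to adapt the result to a specific layer.
    return torch.stack(templates, dim=0).mean(dim=0)

# Hypothetical usage with three templates for a 64x64x3x3 convolution.
templates = [torch.randn(64, 64, 3, 3) for _ in range(3)]
w_layer0 = round_robin(templates, query_idx=0)
w_layer1 = round_robin(templates, query_idx=1)
w_avg = candidate_average(templates)
```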
B.3 ADDITIONAL PARAMETER MAPPING RESULTS
Table 9 compares approaches that map layers to parameter groups using the same number of parameters as the original model. We see a small, but largely consistent improvement over using a traditional (baseline) network. Notably, our learned grouping methods (WAvg, Emb) perform on par with, and sometimes better than, manual mappings. However, our approach can be applied to any architecture to create a selected number of parameter groups, making it more flexible than hand-crafted methods. For example, in Table 10, we see that using two groups often helps to improve performance when using very few parameters, but it is not clear how to effectively create two groups by hand for many networks.
B.4 EXTENDED PARAMETER UPSAMPLING
In Table 10 we provide extended results comparing the parameter upsampling methods. We additionally compare with a further naïve baseline of simply repeating parameters until they are the appropriate size. We find that Mask upsampling is always competitive, and typically more so when two parameter groups are used.
B.5 COMPARISON WITH HYPERNETWORKS
In Table 11 we compare our SSNs on Wide ResNets (Zagoruyko & Komodakis, 2016) to the same networks implemented using Hypernetworks (Ha et al., 2016) for CIFAR-10, using the results reported in their paper. We can see that, for the same parameter budget, SSNs outperform Hypernetworks.
C EFFECT OF THE NUMBER OF PARAMETER GROUPS P
A significant advantage of using learned mappings of layers to parameter groups, described in Section 3.2, is that our approach can support any number of parameter groups, unlike prior work that required manual grouping and/or heuristics to determine which layers shared parameters (e.g., Lan et al., 2020; Savarese & Maire, 2019). In this section we explore how the number of parameter groups
affects performance on the image classification task. We do not benchmark bidirectional retrieval and phrase grounding since networks addressing these tasks have few layers, so parameter groups are less important (as shown in Table 7).
Table 12 reports the performance of our SSNs when using different numbers of parameter groups $P$. We find that when training with few parameters (first line) low numbers of parameter groups work best, while when more parameters are available larger numbers of groups work better (second line). In fact, there is a significant drop in performance going from 4 to 8 groups when training with few parameters, as seen in the first line of Table 12. This is due to the fact that, starting at 8 groups, some parameter groups had too few weights to implement their layers, resulting in extensive parameter upsampling. This suggests that we may be able to further improve performance when there are few parameters by developing better methods of implementing layers when too few parameters are available.
D EFFECT OF THE NUMBER OF TEMPLATES K
Table 13 reports the results using different numbers of templates. We find that varying the number of templates only has a minor impact on performance most of the time. We note that using more templates tends to reduce variability between runs, making results more stable. As a reminder, however, the number of templates does not guarantee that each layer will have enough parameters to construct them. Thus, parameter groups only use this hyperparameter when many weights are available to them (i.e., when they can form multiple templates for the layers they implement). This occurs for the phrase grounding and bidirectional retrieval results at the higher maximum numbers of templates.
E SCALING SSNS TO LARGER NETWORKS
Table 14 demonstrates the ability of our SSNs to significantly reduce the parameters required, and thus the memory required, to implement large Wide ResNets so they fall within specific bounds. For example, Table 14(b) shows larger and deeper configurations continue to improve performance even when the number of parameters remains largely constant. Comparing the first line of Table 14(a) and the last line of Table 14(c), we see that SSN-WRN-76-12 outperforms the fully-parameterized WRN-28-10 network by 0.6% on CIFAR-100 while using just over half the parameters, and comes within 0.5% of WRN-76-12 while using only 13.0% of its parameters. We do note that using an SSN does not reduce the number of floating point operations, so although our SSN-WRN-76-12 model uses fewer parameters than the WRN-28-10, it is still slower at both test and train time. However, our results help demonstrate that SSNs can be used to implement very large networks with lower memory
requirements by effectively sharing parameters. This enables us to train larger, better-performing networks than is possible with traditional neural networks on comparable computational resources.
F IMAGE CLASSIFICATION NUMBERS
We provide raw numbers for the results in Figure 3 in Table 15 (CIFAR-100) and Table 16 (ImageNet).
G PERFORMANCE IMPLICATIONS OF NPAS AND SSNS
Our SSNs can offer several performance benefits by reducing parameter counts; notably, they can reduce the memory required to store a model and can reduce communication costs for distributed training. We emphasize that LB-NPAS does not reduce FLOPs, as the same layer operations are implemented using fewer parameters. Should fewer FLOPs also be desired, SSNs can be combined
with other techniques, such as pruning. Additionally, we note that our implementation has not been extensively optimized, and further performance improvements could likely be achieved with additional engineering.
G.1 COMMUNICATION COSTS FOR DISTRIBUTED TRAINING
Communication for distributed data-parallel training is typically bandwidth-bound, and thus employs bandwidth-optimal allreduces, which are linear in message length (Chan et al., 2007). Thus, we expect communication time to be reduced by a factor proportional to the parameter savings achieved by NPAS, all else being equal. However, frameworks will typically execute allreduces layer-wise as soon as gradient buffers are ready to promote communication/computation overlap in backpropagation; reducing communication that is already fully overlapped is of little benefit. Performance benefits are thus sensitive to the model, implementation details, and the system being used for training.
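As a back-of-the-envelope illustration of why smaller models reduce communication, the sketch below applies the standard ring-allreduce cost model, which moves roughly 2(p-1)/p bytes per byte of gradient. The bandwidth figure and the parameter counts here are illustrative assumptions, not measurements from the systems described in this section.

```python
def ring_allreduce_seconds(num_params, num_workers, bandwidth_gbps=100.0, bytes_per_param=4):
    # A bandwidth-optimal ring allreduce sends about 2*(p-1)/p of the buffer per worker,
    # so (ignoring latency) the cost is linear in the message length.
    payload_bytes = num_params * bytes_per_param * 2 * (num_workers - 1) / num_workers
    return payload_bytes / (bandwidth_gbps * 1e9 / 8)

# Hypothetical comparison: a 334M-parameter model vs. an 18M-parameter SSN budget.
for n in (334e6, 18e6):
    print(f"{n/1e6:.0f}M params: {ring_allreduce_seconds(n, num_workers=64)*1e3:.1f} ms per allreduce")
```

In practice much of this communication is overlapped with backpropagation, which is why the measured end-to-end gains reported below are smaller than this simple model suggests.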
For CNNs, we indeed observe minor performance improvements, as the number of parameters is typically small. When using 64 V100 GPUs for training WRN-50-2 on ImageNet, we see a 1.04× performance improvement in runtime per epoch when using SSNs with 10.5M parameters (15% of the original model). This is limited because most communication is overlapped. We also observe small performance improvements in some cases because we launch fewer allreduces, resulting in less demand for SMs and memory bandwidth on the GPU. These performance results are in line with prior work on communication compression for CNNs (e.g., Renggli et al., 2019).
For large transformers, however, we observe more significant performance improvements. The SSN-ALBERT-Large is about 1.4× faster using 128 GPUs than the corresponding BERT-Large model. This is in line with the original ALBERT work (Lan et al., 2020), which reported that training ALBERT-Large was 1.7× faster than BERT-Large when using 128 TPUs. Note that due to the differences in the systems for these results, they are not directly comparable.
We would also reiterate that for applications where communication is more costly, such as federated learning (e.g., McMahan et al. (2017); Konečný et al. (2016)), our approach would be even more beneficial due to the decreased message length.
G.2 MEMORY SAVINGS
LB-NPAS and SSNs reduce the number of parameters, which consequentially reduces the size of the gradients and optimizer state (e.g., momentum) by the same amount. It does not reduce the storage requirements for activations, but note there is much work on recomputation to address this (e.g., Chen et al., 2016; Jain et al., 2020). Thus, the memory savings from SSNs are independent of batch size. For SSN-ALBERT-Large, we use 18M parameters (5% of BERT-Large, which contains 334M parameters). Assuming FP32 is used to store data, we save about 5 GB of memory in this case (about 1/3 of the memory used).

1. What is the main contribution of the paper regarding neural networks?
2. What are the strengths of the proposed approach, particularly in terms of parameter sharing and saving?
3. Do you have concerns or questions regarding the effectiveness and performance of the method, especially in terms of hardware platforms and task-specificity?
Summary Of The Paper
This paper proposes to solve an interesting and meaningful task, i.e., learning to allocate parameters between layers; in other words, searching for the parameter sharing strategy in a neural network. Parameter sharing between serial layers is useful for reducing the number of parameters, which is beneficial for issues including memory consumption and communication bandwidth.
Review
The topic introduced in this paper is interesting and meaningful, and it provides a novel idea for saving parameters. The proposed parameter upsampling and downsampling methods are promising.
Here are some questions to be answered.
The parameter count is indeed reduced. However, what is the real performance on hardware platforms, e.g., bandwidth, memory consumption, and training speed? It is essential to report these results to evaluate the real value of the method.
How much benefit does the NPAS method bring? It is necessary to perform an experiment similar to "random search" in NAS to validate its effectiveness. Note that in Fig. 4(b), the differences between the manual and automatic mappings are not very evident; it seems only the downsampling layers require unique parameters, as in the manual mapping.
How are the parameters allocated in depthwise layers, e.g., in EfficientNet? Is there any difference between depthwise and plain convolutions?
Is the proposed method task-specific? It would be interesting to explore whether parameters are allocated differently for different tasks.
ICLR | Title
Neural Parameter Allocation Search
Abstract
Training neural networks requires increasing amounts of memory. Parameter sharing can reduce memory and communication costs, but existing methods assume networks have many identical layers and utilize hand-crafted sharing strategies that fail to generalize. We introduce Neural Parameter Allocation Search (NPAS), a novel task where the goal is to train a neural network given an arbitrary, fixed parameter budget. NPAS covers both a low-budget regime, which produces compact networks, and a novel high-budget regime, where additional capacity can be added to boost performance without increasing inference FLOPs. To address NPAS, we introduce Shapeshifter Networks (SSNs), which automatically learn where and how to share parameters in a network to support any parameter budget without requiring any changes to the architecture or loss function. NPAS and SSNs provide a complete framework for addressing generalized parameter sharing, and can also be combined with prior work for additional performance gains. We demonstrate the effectiveness of our approach using nine network architectures across four diverse tasks, including ImageNet classification and transformers.
1 INTRODUCTION
Training neural networks requires ever more computational resources, with GPU memory being a significant limitation (Rajbhandari et al., 2021). Methods such as checkpointing (e.g., Chen et al., 2016; Gomez et al., 2017; Jain et al., 2020) and out-of-core algorithms (e.g., Ren et al., 2021) have been developed to reduce memory from activations and improve training efficiency. Yet even with such techniques, Rajbhandari et al. (2021) find that model parameters require significantly greater memory bandwidth than activations during training, indicating parameters are a key limit on future growth. One solution is cross-layer parameter sharing, which reduces the memory needed to store parameters; it can also reduce the cost of communicating model updates in distributed training (Lan et al., 2020; Jaegle et al., 2021) and federated learning (Konečný et al., 2016; McMahan et al., 2017), as the model is smaller, and can help avoid overfitting (Jaegle et al., 2021). However, prior work in parameter sharing (e.g., Dehghani et al., 2019; Savarese & Maire, 2019; Lan et al., 2020; Jaegle et al., 2021) has two significant limitations. First, it relies on suboptimal hand-crafted techniques for deciding where and how sharing occurs. Second, it relies on models having many identical layers. This limits the network architectures it applies to (e.g., DenseNets (Huang et al., 2017) have few such layers), and the parameter savings is only proportional to the number of identical layers.
To move beyond these limits, we introduce Neural Parameter Allocation Search (NPAS), a novel task which generalizes existing parameter sharing approaches. In NPAS, the goal is to identify where and how to distribute parameters in a neural network to produce a high-performing model using an arbitrary, fixed parameter budget and no architectural assumptions. Searching for good sharing strategies is challenging in many neural networks due to different layers requiring different numbers of parameters or weight dimensionalities, multiple layer types (e.g., convolutional, fully-connected, recurrent), and/or multiple modalities (e.g., text and images). Hand-crafted sharing approaches, as in prior work, can be seen as one implementation of NPAS, but they can be complicated to create for complex networks and have no guarantee that the sharing strategy is good. Trying all possible permutations of sharing across layers is computationally infeasible even for small networks. To our knowledge, we are the first to consider automatically searching for good parameter sharing strategies.
By supporting arbitrary parameter budgets, NPAS explores two novel regimes. First, while prior work considered using sharing to reduce the number of parameters (which we refer to as low-budget NPAS, LB-NPAS), we can also increase the number of parameters beyond what an architecture typically uses (high-budget NPAS, HB-NPAS). HB-NPAS can be thought of as adding capacity to the network in order to improve its performance without changing its architecture (e.g., without increasing the number of channels that would also increase computational time). Second, we consider cases where there are fewer parameters available to a layer than needed to implement the layer’s operations. For such low-budget cases, we investigate parameter upsampling methods to generate the layer’s weights.
A vast array of other techniques, including pruning (Hoefler et al., 2021), quantization (Gholami et al., 2021), knowledge distillation (Gou et al., 2021), and low-rank approximations (e.g., Wu, 2019; Phan et al., 2020) are used to reduce memory and/or FLOP requirements for a model. However, such methods typically only apply at test/inference time, and actually are more expensive to train due to requiring a fully-trained large network, in contrast to NPAS. Nevertheless, these are also orthogonal to NPAS and can be applied jointly. Indeed, we show that NPAS can be combined with pruning or distillation to produce improved networks. Figure 1 compares NPAS to closely related tasks.
To implement NPAS, we propose Shapeshifter Networks (SSNs), which can morph a given parameter budget to fit any architecture by learning where and how to share parameters. SSNs begin by learning which layers can effectively share parameters using a short pretraining step, where all layers are generated from a single shared set of parameters. Layers that use parameters in a similar way are then good candidates for sharing during the main training step. When training, SSNs generate weights for each layer by down- or upsampling the associated parameters as needed.
We demonstrate SSN’s effectiveness in high- and low-budget NPAS on a variety of networks spanning vision, text, and vision-language tasks. E.g., a LB-NPAS SSN implements a WRN-50-2 (Zagoruyko & Komodakis, 2016) using 19M parameters (69M in the original) and achieves an Error@5 on ImageNet (Deng et al., 2009) 3% lower than a WRN with the same budget. Similarly, we achieve a 1% boost on SQuAD v2.0 (Rajpurkar et al., 2018) with 18M parameters (334M in the original) over ALBERT (Lan et al., 2020), prior work for parameter sharing in Transformers (Vaswani et al., 2017). For HB-NPAS, we achieve a 1–1.5% improvement in Error@1 on CIFAR (Krizhevsky, 2009) by adding capacity to a traditional network. In summary, our key contributions are:
• We introduce Neural Parameter Allocation Search (NPAS), a novel task in which the goal is to implement a given network architecture using any parameter budget.
• To solve NPAS, we propose Shapeshifter Networks (SSNs), which automate parameter sharing. To our knowledge, SSNs are the first approach to automatically learn where and how to share parameters and to share parameters between layers of different sizes or types.
• We benchmark SSNs for LB- and HB-NPAS and show they create high-performing networks when either using few parameters or adding network capacity.
• We also show that SSNs can be combined with knowledge distillation and parameter pruning to boost performance over such methods alone.
2 NEURAL PARAMETER ALLOCATION SEARCH (NPAS)
In NPAS, the goal is to implement a neural network given a fixed parameter budget. More formally:
Neural Parameter Allocation Search (NPAS): Given a neural network architecture with layers $\ell_1, \ldots, \ell_L$, which require weights $w_1, \ldots, w_L$, and a fixed parameter budget $\theta$, train a high-performing neural network using the given architecture and parameter budget.
Any general solution to NPAS (i.e., that works for arbitrary $\theta$ or network) must solve two subtasks:
1. Parameter mapping: Assign to each layer $\ell_i$ a subset of the available parameters.
2. Weight generation: Generate $\ell_i$'s weights $w_i$ from its assigned parameters, which may be any size.
Prior works, such as Savarese & Maire (2019) and Ha et al. (2016), are examples of weight generation methods, but only in limited cases; e.g., Savarese & Maire (2019) does not support there being fewer parameters than weights. To our knowledge, no prior work has automated parameter mapping, instead relying on hand-crafted heuristics that do not generalize to many architectures. Note weight generation must be differentiable so gradients can be backpropagated to the underlying parameters.
NPAS naturally decomposes into two different regimes based on the parameter budget relative to what would be required by a traditional neural network (i.e., $\sum_{i=1}^{L} |w_i|$ versus $|\theta|$):
• Low-budget (LB-NPAS), with fewer parameters than standard networks ($|\theta| < \sum_{i=1}^{L} |w_i|$). This regime has traditionally been the goal of cross-layer parameter sharing, and reduces memory at training and test time, and consequentially reduces communication for distributed training.
• High-budget (HB-NPAS), with more parameters than standard networks ($|\theta| > \sum_{i=1}^{L} |w_i|$). This is, to our knowledge, a novel regime, and can be thought of as adding capacity to a network without changing the underlying architecture by allowing a layer to access more parameters.
Note, in both cases, the FLOPs required of the network do not significantly increase. Thus, HB-NPAS can significantly reduce FLOP overhead compared to larger networks.
The closest work to ours are Shared WideResNets (SWRN) (Savarese & Maire, 2019), Hypernetworks (HN) (Ha et al., 2016), and Lookup-based Convolutional Networks (LCNN) (Bagherinezhad et al., 2017). Each method demonstrated improved low-budget performance, with LCNN and SWRN focused on improving sharing across layers and HN learning to directly generate parameters. However, all require adaptation for new networks and make architectural assumptions. E.g., LCNN was designed specifically for convolutional networks, while HN and SWRN’s benefits are proportional to the number of identical layers (see Figure 3). Thus, each method supports limited architectures and parameter budgets, making them unsuited for NPAS. LCNN and HN also both come with significant computational overhead. E.g., the CNN used by Ha et al. requires 26.7M FLOPs for a forward pass on a 32×32 image, but weight generation with HN requires an additional 108.5M FLOPs (135.2M total). In contrast, our SSNs require 0.8M extra FLOPs (27.5M total, 5× fewer than HN). Across networks we consider, SSN overhead for a single image is typically 0.5–2% of total FLOPs. Note both methods generate weights once per forward pass, amortizing overhead across a batch (e.g., SSN overhead is reduced to 0.008–0.03% for batch size 64). HB-NPAS is also reminiscent of mixture-of-experts (e.g., Shazeer et al., 2017); both increase capacity without significantly increasing FLOPs, but NPAS allows this overparameterization to be learned without architectural changes required by prior work.
NPAS can be thought of as searching for efficient and effective underlying representations for a neural network. Methods have been developed for other tasks that focus on directly searching for more effective architectures (as opposed to their underlying representations). These include neural architecture search (e.g., Bashivan et al., 2019; Dong & Yang, 2019; Tan et al., 2019; Xiong et al., 2019; Zoph & Le, 2017) and modular/self-assembling networks (e.g., Alet et al., 2019; Ferran Alet, 2018; Devin et al., 2017). While these tasks create computationally efficient architectures, they do not reduce the number of parameters in a network during training like NPAS (i.e., they cannot be used to train very large networks or for federated or distributed learning applications), and indeed are computationally expensive. NPAS methods can also provide additional flexibility to architecture search by enabling them to train larger and/or deeper architectures while keeping within a fixed parameter budget. In addition, the performance of any architectures these methods create could be improved by leveraging the added capacity from excess parameters when addressing HB-NPAS.
3 SHAPESHIFTER NETWORKS FOR NPAS
We now present Shapeshifter Networks (SSNs), a framework for addressing NPAS using generalized parameter sharing to implement a neural network with an arbitrary, fixed parameter budget. Figure 2 provides an overview and example of SSNs, and we detail each aspect below. An SSN consists of a provided network architecture with layers $\ell_{1},\ldots,\ell_{L}$, and a fixed budget of parameters $\theta$, which are partitioned into $P$ parameter groups (both hyperparameters) containing parameters $\theta_{1},\ldots,\theta_{P}$. Each layer is associated with a single parameter group, which will provide the parameters used to implement it. This mapping is learned in a preliminary training step by training a specialized SSN and clustering its layer representations (Section 3.2). To implement each layer, an SSN morphs the parameters in its associated group to generate the necessary weights; this uses downsampling (Section 3.1.1) when the group has more parameters than needed, or upsampling (Section 3.1.2) when the group has fewer parameters than needed. SSNs allow any number of parameters to “shapeshift” into a network without necessitating changes to the model’s loss, architecture, or hyperparameters, and the process can be applied automatically. Finally, we note that SSNs are simply one approach to NPAS. Appendices B-D contain ablation studies and discussion of variants we found to be less successful.
3.1 WEIGHT GENERATION
Weight generation implements a layer $\ell_i$, which requires weights $w_i$, using the fixed set of parameters in its associated parameter group $\theta_j$. (We assume the mapping between layers and parameter groups has already been established; see Section 3.2.) There are three cases to handle:
1. $|w_i| = |\theta_j|$ (exactly enough parameters): The parameters are used as-is.
2. $|w_i| < |\theta_j|$ (excess parameters): We perform parameter downsampling (Section 3.1.1).
3. $|w_i| > |\theta_j|$ (insufficient parameters): We perform parameter upsampling (Section 3.1.2).
We emphasize that, depending on how layers are mapped to parameter groups, both down- and upsampling may be required in an LB- or HB-NPAS model.
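A minimal sketch of this three-way dispatch is shown below; `downsample` and `upsample` stand in for the template-based and mask-based methods of Sections 3.1.1 and 3.1.2, and their signatures are our own simplification rather than the released code.

```python
import math

def generate_weights(theta_j, weight_shape, downsample, upsample):
    # theta_j: flat 1-D torch tensor holding the layer's parameter group.
    # weight_shape: the shape of the weights w_i required by the layer.
    needed = math.prod(weight_shape)
    if theta_j.numel() == needed:
        flat = theta_j                      # case 1: exactly enough parameters
    elif theta_j.numel() > needed:
        flat = downsample(theta_j, needed)  # case 2: excess parameters (Section 3.1.1)
    else:
        flat = upsample(theta_j, needed)    # case 3: insufficient parameters (Section 3.1.2)
    return flat.view(weight_shape)
```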
3.1.1 PARAMETER DOWNSAMPLING
When a parameter group $\theta_j$ provides more parameters than needed to implement a layer $\ell_i$, we perform template-based downsampling to generate $w_i$. To do this, we first split $\theta_j$ into up to $K$ (a hyperparameter) templates $T_i^{1},\ldots,T_i^{K}$, where each template $T_i^k$ is the same dimension as $w_i$. If $\theta_j$ does not evenly divide into templates, we ignore excess parameters. To avoid uneven sharing of parameters between layers, the templates for each layer are constructed from $\theta_j$ in a round-robin fashion. These templates are then combined to produce $w_i$; if only one template can be produced we instead use it directly. We present two different methods of learning to combine templates. To simplify presentation, we will assume there are exactly $K$ templates used.
WAvg (Savarese & Maire, 2019) This learns a vector $\alpha_i \in \mathbb{R}^K$ which is used to produce a weighted average of the templates: $w_i = \sum_{k=1}^{K} \alpha_i^k T_i^k$. The $\alpha_i$ are initialized orthogonally to the $\alpha$s of all other layers in the same parameter group. While efficient, this only implicitly learns similarities between layers. Empirically, we find that different layers often converge to similar $\alpha$s, limiting sharing.
Emb To address this, we can instead more directly learn a representation of the layer using a layer embedding. We use a learnable vector $\phi_i \in \mathbb{R}^E$, where $E$ is the size of the layer representation; we use $E = 24$ throughout, as we found it to work well. A linear layer, which is shared between all layers in the parameter group and parameterized by $W_j \in \mathbb{R}^{K \times E}$ and $b_j \in \mathbb{R}^K$, is then used to construct an $\alpha_i$ for the layer, which is used as in WAvg. That is, $\alpha_i = W_j \phi_i + b_j$ and $w_i = \sum_{k=1}^{K} \alpha_i^k T_i^k$. We considered more complex methods (e.g., MLPs, nonlinearities), but they did not improve performance.
While both methods require additional parameters, this is quite small in practice. WAvg requires $K$ additional parameters per layer. Emb requires $E = 24$ additional parameters per layer and $KE + K = 24K + K$ parameters per parameter group.
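The sketch below shows template construction and both combination methods for a single layer. Shapes, initialization, the round-robin template layout, and the fact that the Emb projection is shared across all layers in a parameter group are simplified here, so this is an illustration of the idea rather than our released implementation.

```python
import math
import torch
import torch.nn as nn

class TemplateCombiner(nn.Module):
    # Forms up to K templates the size of w_i from the group's parameters theta_j,
    # then combines them with a learned alpha (WAvg) or with an alpha predicted from
    # a learned per-layer embedding phi (Emb).
    def __init__(self, group_size, weight_shape, K=4, mode="emb", emb_dim=24):
        super().__init__()
        self.weight_shape = weight_shape
        w_numel = math.prod(weight_shape)
        self.K = min(K, group_size // w_numel)  # assumes group_size >= w_numel (downsampling case)
        self.mode = mode
        if mode == "wavg":
            self.alpha = nn.Parameter(torch.randn(self.K))
        else:
            self.phi = nn.Parameter(torch.randn(emb_dim))  # layer embedding phi_i
            self.proj = nn.Linear(emb_dim, self.K)         # W_j, b_j (shared per group in the paper)

    def forward(self, theta_j):
        w_numel = math.prod(self.weight_shape)
        # Slice theta_j into K same-sized templates, ignoring any excess parameters.
        templates = theta_j[: self.K * w_numel].view(self.K, *self.weight_shape)
        alpha = self.alpha if self.mode == "wavg" else self.proj(self.phi)
        # w_i = sum_k alpha_i^k * T_i^k
        return torch.einsum("k,k...->...", alpha, templates)
```

With `theta_j` a flat tensor of, say, 200K parameters, `TemplateCombiner(theta_j.numel(), (64, 64, 3, 3))(theta_j)` would yield the weights for one 3x3 convolution.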
3.1.2 PARAMETER UPSAMPLING
If instead a parameter group $\theta_j$ provides fewer parameters than needed to implement a layer $\ell_i$, we upsample $\theta_j$ to be the same size as $w_i$. As a layer will use all of the parameters in $\theta_j$, we do not use templates. We consider two methods for upsampling below.
Inter As a naïve baseline, we use bilinear interpolation to directly upsample $\theta_j$. However, this could alter the patterns captured by parameters, as it effectively stretches the receptive field. In practice, we found fully-connected and recurrent layers could compensate for this warping, but it degraded convolutional layers compared to simpler approaches such as tiling $\theta_j$.
Mask To address this, and avoid redundancies created by directly repeating parameters, we propose instead to use a learned mask to modify repeated parameters. For this, we first use $n = \lceil |w_i| / |\theta_j| \rceil$ tiles of $\theta_j$ to be the same size as $w_i$ (discarding excess in the last tile). We then apply a separate learned mask to each tile after the first (i.e., there are $n - 1$ masks). All masks are a fixed “window” size, which we take to be 9 by default (to match the size of commonly-used 3×3 kernels in CNNs), and are shared within each parameter group. To apply, masks are multiplied element-wise over sequential windows of their respective tile. While the number of additional parameters depends on the amount of upsampling required, as the masks are small, this is negligible.
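A simplified sketch of the tiling-plus-mask procedure is given below; details such as mask initialization and sharing across a parameter group are omitted, so it should be read as an approximation of the scheme described above rather than the exact implementation.

```python
import torch
import torch.nn as nn

class MaskUpsampler(nn.Module):
    # Tiles theta_j until it covers w_i, then modulates every tile after the first with
    # a small learned mask that is repeated over the tile in fixed-size windows.
    def __init__(self, group_size, target_size, window=9):
        super().__init__()
        self.target_size = target_size
        n_tiles = -(-target_size // group_size)  # ceil(|w_i| / |theta_j|)
        self.masks = nn.Parameter(torch.ones(max(n_tiles - 1, 0), window))

    def forward(self, theta_j):
        tiles = [theta_j]
        for mask in self.masks:  # one mask per extra tile
            reps = -(-theta_j.numel() // mask.numel())
            tiled_mask = mask.repeat(reps)[: theta_j.numel()]  # repeat the window across the tile
            tiles.append(theta_j * tiled_mask)
        return torch.cat(tiles)[: self.target_size]
```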
3.2 MAPPING LAYERS TO PARAMETER GROUPS
We now discuss how SSNs can automatically learn to assign layers to parameter groups in such a way that parameters can be efficiently shared. This is in contrast to prior work on parameter sharing (e.g., Ha et al., 2016; Savarese & Maire, 2019; Jaegle et al., 2021), which required layers to be manually assigned to parameter groups. Finding an optimal mapping of layers to parameter groups is challenging, and a brute-force approach is computationally infeasible. We rely instead on SSNs learning a representation for each layer as part of the template-based parameter downsampling process, and then use this representation to identify similar layers which can effectively share parameters.
To do this, we perform a short preliminary training step in which we train a small (i.e., low parameter budget) SSN version of the model using a single parameter group and a modified means of generating templates for parameter downsampling. Specifically, for a layer $\ell_i$, we split $\theta$ into $K'$ evenly-sized templates $T_i^{1},\ldots,T_i^{K'}$. Since we wish to use downsampling-based weight generation, each $T_i^{k'}$ is then resized with bilinear interpolation to be the same size as $w_i$. Next, we train the SSN as usual, using WAvg or Emb downsampling with the modified templates for weight generation (there is no upsampling). By using a small parameter budget and template-based weight generation where each template comes from the same underlying parameters, we encourage significant sharing between layers so we can measure the effectiveness of sharing. We found that using a budget equal to the number of weights in the largest single layer of the network works well. Further, this preliminary training step is short, requiring only 10–15% of the typical network training time.
Finally, we construct the parameter groups by clustering the learned layer representations into $P$ groups. As the layer representation, we take the $\alpha_i$ or $\phi_i$ learned for each layer by WAvg or Emb downsampling (resp.). We then use k-means clustering to group these representations into $P$ groups, which become the parameter groups used by the full SSN.
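Once the per-layer vectors are available, the grouping step itself is a standard clustering call. The sketch below uses scikit-learn's k-means, where `layer_reprs` is assumed to hold the learned alpha_i or phi_i for each layer from the preliminary training step.

```python
import numpy as np
from sklearn.cluster import KMeans

def assign_parameter_groups(layer_reprs, num_groups, seed=0):
    # layer_reprs: (num_layers, repr_dim) array of each layer's learned representation.
    labels = KMeans(n_clusters=num_groups, random_state=seed).fit(np.asarray(layer_reprs)).labels_
    return labels  # labels[i] is the parameter group assigned to layer i

# Hypothetical usage: 12 layers with 24-dimensional embeddings, clustered into 3 groups.
print(assign_parameter_groups(np.random.randn(12, 24), num_groups=3))
```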
4 EXPERIMENTS
Our experiments include a wide variety of tasks and networks in order to demonstrate the broad applicability of NPAS and SSNs. We adapt code and data splits made available by the authors and report the average of five runs for all comparisons except ImageNet and ALBERT, which average three runs. A more detailed discussion of SSN hyperparameter settings can be found in Appendices B-D. In our paper we primarily evaluate methods based on task performance, but we demonstrate that SSNs reduce training time and memory in distributed learning settings in Appendix G.
Compared Tasks. We briefly describe each task, datasets, and evaluation metrics. For each model, we use the authors’ implementation and hyperparameters, unless noted (more details in Appendix A).
Image Classification. For image classification the goal is to recognize if an object is present in an image. This is evaluated using Error@k, i.e., the portion of times that the correct category does not appear in the top k most likely objects. We evaluate SSNs on CIFAR-10 and CIFAR100 (Krizhevsky, 2009), which are composed of 60K images of 10 and 100 categories, respectively, and ImageNet (Deng et al., 2009), which is composed of 1.2M images containing 1,000 categories. We report Error@1 on CIFAR and Error@5 for ImageNet.
Image-Sentence Retrieval. In image-sentence retrieval the goal is to match across modalities (sentences and images). This task is evaluated using Recall@K={1, 5, 10} for both cross-modal directions (six numbers), which we average for simplicity. We benchmark on Flickr30K (Young et al., 2014), which contains 30K/1K/1K images for training/testing/validation, and COCO (Lin et al., 2014), which contains 123K/1K/1K images for training/testing/validation. For both datasets each image has about five descriptive captions. We evaluate SSNs using EmbNet (Wang et al., 2016) and ADAPT-T2I (Wehrmann et al., 2020). Note that ADAPT-T2I has identical parallel layers (i.e., they need different outputs despite having the same input), which makes sharing parameters challenging.
Phrase Grounding. Given a phrase the task is to find its associated image region. Performance is measured by how often the predicted box for a phrase has at least 0.5 intersection over union with its ground truth box. We evaluate on Flickr30K Entities (Plummer et al., 2017) which augments Flickr30K with 276K bounding boxes for phrases in image captions, and ReferIt (Kazemzadeh et al., 2014), which contains 20K images that are evenly split between training/testing and 120K region descriptions. We evaluate our SSNs with SimNet (Wang et al., 2018) using the implementation from Plummer et al. (2020) that reports state-of-the-art results on this task.
Question Answering. For this task the goal is to answer a question about a textual passage. We use SQuAD v1.1 (Rajpurkar et al., 2016), which has 100K+ question/answer pairs on 500+ articles, and SQuAD v2.0 (Rajpurkar et al., 2018), which adds 50K unanswerable questions. We report F1 and EM scores on the development split. We compare our SSNs with ALBERT (Lan et al., 2020), a recent transformer architecture that incorporates extensive, manually-designed parameter sharing.
4.1 RESULTS
We begin our evaluation in low-budget (LB-NPAS) settings. Figure 3 reports results on image classification, including WRNs (Zagoruyko & Komodakis, 2016), DenseNets (Huang et al., 2017), and EfficientNets (Tan & Le, 2019); Table 1 contains results on image-sentence retrieval and phrase grounding. For each task and architecture we compare SSNs to same parameter-sized networks without sharing. In image classification, we also report results for SWRN (Savarese & Maire, 2019) sharing; but note it cannot train a WRN-28-10 or WRN-50-2 with fewer than 12M or 40M parameters, resp. We show that SSNs can create high-performing models with fewer parameters than SWRN is capable of, and actually outperform it using 25% and 60% fewer parameters on C-100 and ImageNet, resp. Table 1 demonstrates that these benefits generalize to vision-language tasks. In Table 2 we also compare SSNs with ALBERT (Lan et al., 2020), which applies manually-designed parameter sharing to BERT (Devlin et al., 2019), and find that SSN’s learned parameter sharing outperforms ALBERT. This demonstrates that SSNs can implement large networks with lower memory requirements than is possible with current methods by effectively sharing parameters.
We discuss the runtime and memory performance implications of SSNs extensively in Appendix G. In short, by reducing parameter counts, SSNs reduce communication costs and memory. For example,
our SSN-ALBERT-Large trains about 1.4× faster using 128 GPUs than BERT-Large (in line with results for ALBERT), and reduces memory requirements by about 5 GB (1/3 of total).
As mentioned before, knowledge distillation and parameter pruning can help create more efficient models at test time, although they cannot reduce memory requirements during training like SSNs. Tables 3 and 4 show our approach can be used to accomplish a similar goal to these methods. Comparing our LB-NPAS results in Table 4 with the lowest parameter setting of HRank, we report a 1.5% gain over pruning methods even when using less than half the parameters. We note that one can think of our SSNs in the high budget setting (HB-NPAS) as mixing together a set of random initializations of a network by learning to combine the different templates. This setting’s benefit is illustrated in Table 3 and Table 4, where our HB-NPAS models report a 1–1.5% gain over training a traditional network. As a reminder, in this setting we precompute the weights of each layer once training is complete, so they require no additional overhead at test time. That said, the best performance comes from combining our SSNs with prior work on both tasks.
4.2 ANALYSIS OF SHAPESHIFTER NETWORKS
In this section we present ablations of the primary components of our approach. A complete ablation study, including, but not limited to, the number of parameter groups (Section 3.2) and the number of templates ($K$ in Section 3.1.1), can be found in Appendices B-D.
Table 5 compares the strategies generating weights from templates described in Section 3.1.1 when using a single parameter group. In these experiments we set the number of parameters as the amount required to implement the largest layer in a network. For example, the ADAPT-T2I model requires 14M parameters, but its bidirectional GRU accounts for 10M of those parameters, so all SSN variants in this experiment allocate 10M parameters. Comparing to the baseline, which involves modifying the original model’s number and/or size of filters so they have the same number of parameters as our
Table 5: Parameter downsampling comparison (Section 3.1.1) using WRN-28-10 and WRN-50-2 for C-10/100 and ImageNet, resp. Baseline adjusts the number and/or size of filters rather than sharing parameters. See Appendix B for additional details.
Dataset    % orig params    Reduced Baseline    SSN (ours) WAvg    SSN (ours) Emb
C-10       11.3%            4.22                4.00               3.84
C-100      11.3%            22.34               21.78              21.92
ImageNet   27.5%            10.08               7.38               6.69
SSNs, we see that the variants of our SSNs perform better, especially on ImageNet where we reduce Error@5 by 3%. Generally we see a slight boost to performance using Emb over WAvg. Also note that prior work in parameter sharing, e.g., SWRN (Savarese & Maire, 2019), cannot be applied to the settings in Table 5, since they require parameter sharing between layers of different types and different operations, or have too few parameters. E.g., the WRN-28-10 results use just under 4M parameters, but, as shown in Figure 3(a), SWRN requires a minimum of 12M parameters.
In Table 6 we investigate one of the new challenges in this work: how to upsample parameters so a large layer operation can be implemented with relatively few parameters (Section 3.1.2). For example, our SSN-WRN-28-10 results use about 0.5M parameters, but the largest layer defined in the network requires just under 4M weights. We find that our simple learned Mask upsampling method performs well in most settings, especially when using convolutional networks. For example, on CIFAR-100 it improves Error@1 by 2.5% over the baseline, and 1.5% over using bilinear interpolation (Inter). While more complex methods of upsampling may seem like they would improve performance (e.g., using an MLP, or learning combinations of basis filters), we found such approaches had two significant drawbacks. First, they can slow down training time significantly due to their complexity, so only a limited number of settings are viable. Second, in our experiments we found many had numerical stability issues for some datasets/tasks. We believe this may be due to trying to learn the local parameter patterns and the weights to combine them concurrently. Related work suggests this can be resolved by leveraging prior knowledge about what these local parameter patterns should represent (Denil et al., 2013), i.e., by defining and freezing what they represent. However, prior knowledge is not available in the general case, and data-driven methods of training these local filter patterns often rely on pretraining steps of the fully-parameterized network (e.g., Denil et al., 2013). Thus, they are not suited for NPAS since they cannot train large networks with low memory requirements, but addressing this issue would be a good direction for future work.
Table 7 compares approaches for mapping layers to parameter groups using the same number of parameters as the original model. Using SSNs, we see a small but largely consistent improvement over a traditional (baseline) network. Notably, our automatically learned mappings (auto) perform on par with manual groups. This demonstrates that our automated approach can be used without loss in performance, while being applicable to any architecture, making it more flexible than hand-crafted methods. This flexibility does come with a computational cost, as our preliminary step that learns to map layers to parameter groups resulted in a 10-15% longer training time for equivalent epochs. That said, parameter sharing methods have demonstrated an ability to converge
faster (Bagherinezhad et al., 2017; Lan et al., 2020). Thus, exploring more efficient training strategies using NPAS methods like SSNs will be a good direction for future work.
Figure 4(a) compares the 3×3 kernel filters at the early, middle, and final convolutional layers of a WRN-16-2 for a traditional neural network (no parameter sharing) and for our SSNs where all layers belong to the same parameter group. We observe a correspondence between filters in the early layers, but this diverges in deeper layers. This suggests that sharing becomes more difficult in the final layers, which is consistent with two observations we made about Figure 4(b), which visualizes the parameter groups used by SSN-WRN-28-10 when creating 14 parameter group mappings. First, we found the learned parameter group mappings tended to share parameters between layers early in the network, opting for later layers to share no parameters. Second, the early layers tended to be grouped into 3–4 parameter groups across different runs, with the remaining 10–11 parameter groups each containing a single layer. Note that these observations were consistent across different random initializations.
5 CONCLUSION
We propose NPAS, a novel task in which the goal is to implement a given, arbitrary network architecture with a fixed parameter budget. This involves identifying how to assign parameters to layers and implementing layers with their assigned parameters (which may be of any size). To address NPAS, we introduce SSNs, which automatically learn how to share parameters. SSNs benefit from parameter sharing in the low-budget regime, reducing memory and communication requirements when training, and enable a novel high-budget regime that can improve model performance. We show that SSNs boost results on ImageNet by 3% in Error@5 over a same-sized network without parameter sharing. Surprisingly, we also find that parameters can be shared among very different layers. Further, we show that SSNs can be combined with knowledge distillation and parameter pruning to achieve state-of-the-art results that also reduce FLOPs at test time. One could think of SSNs as spreading the same number of parameters across more layers, increasing effective depth, which benefits generalization (Telgarsky, 2016), although this requires further exploration.
Acknowledgements. This work is funded in part by grants from the National Science Foundation and DARPA. This project received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 program (grant agreement MAELSTROM, No. 955513). N.D. is supported by the ETH Postdoctoral Fellowship. We thank the Livermore Computing facility for the use of their GPUs for some experiments.
ETHICS STATEMENT
Neural Parameter Allocation Search and Shapeshifter Networks are broad and general-purpose, and therefore applicable to many downstream tasks. It is thus challenging to identify specific cases of benefit or harm, but we note that reducing memory requirements can have broad implications. In particular, this could allow potentially harmful applications to be cheaply and widely deployed (e.g., facial recognition, surveillance) where it would otherwise be technically or economically infeasible.
REPRODUCIBILITY STATEMENT
NPAS is a task which can be implemented in many different ways; we define it formally in Section 2. SSNs, our proposed solution to NPAS, are presented in detail in Section 3, and Figure 2 provides an illustration and example of the weight generation methods. Appendix A also provides a thorough discussion of the implementation details. To further aid reproducibility, we publicly release our SSN code at https://github.com/BryanPlummer/SSN.
A DESCRIPTION OF COMPARED TASKS
A.1 IMAGE-SENTENCE RETRIEVAL
In bidirectional image-sentence retrieval, when a model is provided with an image the goal is to retrieve a relevant sentence, and vice versa. This task is evaluated using Recall@K={1, 5, 10} for both directions (resulting in 6 numbers), which we average for simplicity. We benchmark methods on two common datasets: Flickr30K (Young et al., 2014), which contains 30K/1K/1K images for training/testing/validation, each with five descriptive captions, and MSCOCO (Lin et al., 2014), which contains 123K/1K/1K images for training/testing/validation, each image having roughly five descriptive captions.
EmbNet (Wang et al., 2016). This network learns to embed visual features for each image, computed using a 152-layer Deep Residual Network (ResNet) (He et al., 2016) that has been trained on ImageNet (Deng et al., 2009), and the average of MT GrOVLE (Burns et al., 2019) language features representing each word into a shared semantic space using a triplet loss. The network consists of two branches, one for each modality, and each branch contains two fully connected layers (for a total of four layers). We adapted the implementation of Burns et al. (Burns et al., 2019)1 and left all hyperparameters at the default settings. Specifically, we train using a batch size of 500 with an initial learning rate of 1e-4, which we decay exponentially with a gamma of 0.794, and use weight decay of 0.001. The model is trained until it has not improved performance on the validation set over the last 10 epochs. This architecture provides a simple baseline for parameter sharing with our Shapeshifter Networks (SSNs), where layers operate on two different modalities.
ADAPT-T2I (Wehrmann et al., 2020). In this approach word embeddings are aggregated using a bidirectional GRU (Cho et al., 2014) and its hidden state at each timestep is averaged to obtain a full-sentence representation. Images are represented using 36 bottom-up image region features (Anderson et al., 2018) that are passed through a fully connected layer. Then, each sentence calculates scaling and shifting parameters for the image regions using a pair of fully connected layers that both take the full-sentence representation as input. The image regions are then averaged, and a similarity score is computed between the sentence-adapted image features and the full-sentence representation. Thus, there are four layers total (3 fully connected, 1 GRU) that can share parameters, including the two parallel fully connected layers (i.e., they both take the full sentence features as input, but are expected to have different outputs). We adapted the author’s implementation and kept the default hyperparameters2. Specifically, we use a latent dimension of 1024 for our features and train with a batch size of 105 using a learning rate of 0.001. This method was selected since it achieves high performance and includes fully connected and recurrent layers, as well as having a set of parallel layers that make effectively performing cross-layer parameter sharing more challenging.
A.2 PHRASE GROUNDING
Given a phrase the goal of a phrase grounding model is to identify the image region described by the phrase. Success is achieved if the predicted box has at least 0.5 intersection over union with the ground truth box. Performance is measured using the percent of the time a phrase is accurately localized. We evaluate on two datasets: Flickr30K Entities (Plummer et al., 2017) which augments the Flickr30K dataset with 276K bounding boxes associated with phrases in the descriptive captions, and ReferIt (Kazemzadeh et al., 2014) which contains 20K images that are evenly split between training/testing and 120K region descriptions.
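For reference, the localization criterion above reduces to a simple intersection-over-union test between the predicted and ground truth boxes; a minimal sketch with boxes given as (x1, y1, x2, y2) follows.

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) with x2 > x1 and y2 > y1.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def correctly_grounded(pred_box, gt_box, threshold=0.5):
    return iou(pred_box, gt_box) >= threshold

print(correctly_grounded((0, 0, 10, 10), (5, 0, 15, 10)))  # IoU = 1/3, so False
```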
SimNet (Wang et al., 2018). This network contains three branches that each operate on different types of features. One branch passes image region features, computed with a 101-layer ResNet that has been fine-tuned for phrase grounding, through two fully connected layers. A second branch passes MT GrOVLE features through two fully connected layers. Then, a joint representation is computed for all region-phrase pairs using an elementwise product. Finally, the third branch passes these joint features through three fully connected layers (7 total), where the final layer acts as a classifier indicating the likelihood that the phrase is present in the image region. We adapt the code
1https://github.com/BryanPlummer/Two_branch_network 2https://github.com/jwehrmann/retrieval.pytorch
from Plummer et al. (2020)3 and keep all hyperparameters at their default settings. Specifically, we use a pretrained Faster R-CNN model (Ren et al., 2015) fine-tuned for phrase grounding by Plummer et al. (2020) on each dataset to extract region features. Then we encode each phrase by averaging MT GrOVLE features (Burns et al., 2019) and provide the image and phrase features as input to our model. We train our model using a learning rate of 5e-5 and a final embedding dimension of 256 until it no longer improves on the validation set for 5 epochs (typically resulting in training times of 15-20 epochs). Performing experiments on this model enables us to test how well our SSNs generalize to another task and how well they can adapt to sharing parameters between layers operating on three types of features (just vision, just language, and a joint representation).
A.3 IMAGE CLASSIFICATION
For image classification the goal is to be able to recognize if an object is present in an image. Typically this task is evaluated using Error@K, or the portion of times that the correct category doesn’t appear in the top k most likely objects. We evaluate our Shapeshifter Networks on three datasets: CIFAR-10 and CIFAR-100 (Krizhevsky, 2009), which are comprised of 60K images of 10 and 100 categories, respectively, and ImageNet (Deng et al., 2009), which is comprised of 1.2M images containing 1,000 categories. We report Error@1 for both CIFAR datasets and Error@5 for ImageNet. In these appendices, we also report Error@1 for ImageNet.
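As a reference for this metric, the sketch below computes Error@k from a batch of logits and integer labels; it mirrors the standard top-k evaluation rather than anything SSN-specific.

```python
import torch

def error_at_k(logits, labels, k=5):
    # Fraction (in percent) of examples whose true label is not in the top-k predictions.
    topk = logits.topk(k, dim=1).indices                # (batch, k)
    correct = (topk == labels.unsqueeze(1)).any(dim=1)  # (batch,)
    return 100.0 * (1.0 - correct.float().mean().item())

logits = torch.randn(8, 1000)
labels = torch.randint(0, 1000, (8,))
print(error_at_k(logits, labels, k=5))
```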
Wide Residual Network (WRN) (Zagoruyko & Komodakis, 2016). WRNs modify traditional ResNets by increasing the width k of each layer while also decreasing the depth d, which the authors found improved performance. Different variants are identified using WRN-d-k. Following Savarese et al. (Savarese & Maire, 2019), we evaluate our Shapeshifter Networks using WRN-28-10 for CIFAR and WRN-50-2 for ImageNet. We adapt the implementation of Savarese et al.4 and use cutout (DeVries & Taylor, 2017) for data augmentation. Specifically, on CIFAR we train our model using a batch size of 128 for 200 epochs with weight decay set at 5e-4 and an initial learning rate of 0.1, which we decay using a gamma of 0.2 at epochs 60, 120, and 160. Unlike the vision-language models discussed earlier, these architectures include convolutional layers in addition to a fully connected layer used to implement a classifier, and also have many more layers than the shallow vision-language models.
DenseNet (Huang et al., 2017). Unlike traditional neural networks, where each layer is computed in sequence, every layer in a DenseNet uses feature maps from every layer that came before it. We adapt PyTorch’s official implementation5 using the hyperparameters set in Huang et al. (Huang et al., 2017). Specifically, on CIFAR we train our model using a batch size of 96 for 300 epochs with weight decay set at 1e-4 and an initial learning rate of 0.1, which we decay with a gamma of 0.1 at epochs 150 and 225. These networks provide insight into the effect depth has on learning SSNs, as we use a 190-layer DenseNet-BC configuration for CIFAR. However, due to their high computational cost we provide limited results testing only some settings.
EfficientNet (Tan & Le, 2019). EfficientNets are a class of model designed to balance depth, width, and input resolution in order to produce very parameter-efficient models. For ImageNet, we adapt an existing PyTorch implementation and its hyperparameters6, which are derived from the official TensorFlow version. We use the EfficientNet-B0 architecture to illustrate the impact of SSNs on very parameter-efficient, state-of-the-art models. On CIFAR-100 we use an EfficientNet with Network Deconvolution (ND) (Ye et al., 2020), which yields improved results with a similar number of training epochs. We use the authors’ implementation7 and train each model for 100 epochs (their best performing setting). Note that the best error we obtain running different configurations of their model (35.88) is better than that reported in their paper (37.63), so despite the relatively low absolute performance, our results are comparable to theirs.
3 https://github.com/BryanPlummer/phrase_detection
4 https://github.com/lolemacs/soft-sharing
5 https://pytorch.org/hub/pytorch_vision_densenet/
6 https://rwightman.github.io/pytorch-image-models/
7 https://github.com/yechengxi/deconvolution
A.4 QUESTION ANSWERING
In question answering, a model is given a question and an associated textual passage which may contain the answer, and the goal is to predict the span of text in the passage that contains the answer. We use two versions of the Stanford Question Answering Dataset (SQuAD): SQuAD v1.1 (Rajpurkar et al., 2016), which contains 100K+ question/answer pairs on 500+ Wikipedia articles, and SQuAD v2.0, which augments SQuAD v1.1 with 50K unanswerable questions designed adversarially to be similar to standard SQuAD questions. For both datasets, we report both the F1 score, which captures the precision and recall of the chosen text span, and the Exact Match (EM) score.
ALBERT (Lan et al., 2020) ALBERT is a version of the BERT (Devlin et al., 2019) transformer architecture that applies cross-layer parameter sharing. Specifically, the parameters for all components of a transformer layer are shared among all the transformer layers in the network. ALBERT also includes a factorized embedding to further reduce parameters. We follow the methodology of BERT and ALBERT for reporting results on SQuAD, and our baseline ALBERT scores closely match those reported in the original work. Comparing against ALBERT illustrates the ability of NPAS and SSNs to develop better parameter sharing methods than manually-designed schemes for extremely large models.
B EXTENDED RESULTS WITH ADDITIONAL BASELINES
Below we provide additional results with more baseline methods for the three components of our SSNs: weight generation (Section B.1), parameter upsampling (Section B.4), and mapping layers to parameter groups (Section B.3). We provide ablations on the number of parameter groups and templates used by our SSNs in Section C and Section D, respectively.
B.1 ADDITIONAL METHODS THAT GENERATE LAYER WEIGHTS FROM TEMPLATES
Parameter downsampling uses the selected templates $T_i^k$ for a layer $\ell_i$ to produce its weights $w_i$. In Section 3.1.1 of the paper we discuss two methods of learning a combination of the $T_i^k$ to generate $w_i$. Below in Section B.2 we provide two simple baseline methods that directly use the candidates. Table 8 compares the baselines to the methods in the main paper that learn weighted combinations of templates, where the learned methods typically perform better than the baselines.
B.2 DIRECT TEMPLATE COMBINATION
Here we describe the strategies we employ that require no parameters to be learned by the weight generator, i.e., they operate directly on the templates $T_i^k$.
Round Robin (RR) reuses the parameters of each template set as few times as possible. The scheme simply returns the weights at index $k \bmod K$ in the (ordered) template set $T_i$ at the $k$-th query of a parameter group.
Candidate averaging (Avg) averages all candidates in $T_i$ to provide a naive baseline for using multiple candidates. A significant drawback of this approach is that, if $K$ is large, this can result in reusing parameters (across combiners) many times with no way to adapt to a specific layer, especially when the size of the parameter group is small.
B.3 ADDITIONAL PARAMETER MAPPING RESULTS
Table 9 compares approaches that map layers to parameter groups using the same number of parameters as the original model. We see a small, but largely consistent improvement over using a traditional (baseline) network. Notably, our learned grouping methods (WAvg, Emb) perform on par with, and sometimes better than, manual mappings. However, our approach can be applied to any architecture to create a selected number of parameter groups, making it more flexible than hand-crafted methods. For example, in Table 10, we see using two groups often helps to improve performance when using very few parameters, but it is not clear how to effectively create two groups by hand for many networks.
B.4 EXTENDED PARAMETER UPSAMPLING
In Table 10 we provide extended results comparing the parameter upsampling methods. We additionally compare with a further naïve baseline of simply repeating parameters until they are the appropriate size. We find that Mask upsampling is always competitive, and typically more so when two parameter groups are used.
B.5 COMPARISON WITH HYPERNETWORKS
In Table 11 we compare our SSNs on Wide ResNets (Zagoruyko & Komodakis, 2016) to the same networks implemented using Hypernetworks (Ha et al., 2016) for CIFAR-10, using the results reported in their paper. We can see that, for the same parameter budget, SSNs outperform Hypernetworks.
C EFFECT OF THE NUMBER OF PARAMETER GROUPS P
A significant advantage of using learned mappings of layers to parameter groups, described in Section 3.2, is that our approach can support any number of parameter groups, unlike prior work that required manual grouping and/or heuristics to determine which layers shared parameters (e.g., Lan et al., 2020; Savarese & Maire, 2019). In this section we explore how the number of parameter groups
affects performance on the image classification task. We do not benchmark bidirectional retrieval and phrase grounding since networks addressing these tasks have few layers, so parameter groups are less important (as shown in Table 7).
Table 12 reports the performance of our SSNs when using different numbers P of parameter groups. We find that when training with few parameters (first line) low numbers of parameter groups work best, while when more parameters are available larger numbers of groups work better (second line). In fact, there is a significant drop in performance going from 4 to 8 groups when training with few parameters, as seen in the first line of Table 12. This is because, starting at 8 groups, some parameter groups had too few weights to implement their layers, resulting in extensive parameter upsampling. This suggests that we may be able to further improve performance when there are few parameters by developing better methods of implementing layers when too few parameters are available.
D EFFECT OF THE NUMBER OF TEMPLATES K
Table 13 reports the results using different numbers of templates. We find that varying the number of templates only has a minor impact on performance most of the time. We note that using more templates tends to lead to reduced variability between runs, making results more stable. As a reminder, however, the number of templates does not guarantee that each layer will have enough parameters to construct them. Thus, parameter groups only use this hyperparameter when many weights are available to them (i.e., they can form multiple templates for the layers they implement). This occurs for the phrase grounding and bidirectional retrieval results at the higher maximum numbers of templates.
E SCALING SSNS TO LARGER NETWORKS
Table 14 demonstrates the ability of our SSNs to significantly reduce the parameters required, and thus the memory required, to implement large Wide ResNets so they fall within specific bounds. For example, Table 14(b) shows larger and deeper configurations continue to improve performance even when the number of parameters remains largely constant. Comparing the first line of Table 14(a) and the last line of Table 14(c), we see that SSN-WRN-76-12 outperforms the fully-parameterized WRN-28-10 network by 0.6% on CIFAR-100 while only using just over half the parameters, and comes within 0.5% of WRN-76-12 while only using 13.0% of its parameters. We do note that using an SSN does not reduce the number of floating point operations, so although our SSN-WRN-76-12 model uses fewer parameters than the WRN-28-10, it is still slower at both test and train time. However, our results help demonstrate that SSNs can be used to implement very large networks with lower memory
requirements by effectively sharing parameters. This enables us to train larger, better-performing networks than is possible with traditional neural networks on comparable computational resources.
F IMAGE CLASSIFICATION NUMBERS
We provide raw numbers for the results in Figure 3 in Table 15 (CIFAR-100) and Table 16 (ImageNet).
G PERFORMANCE IMPLICATIONS OF NPAS AND SSNS
Our SSNs can offer several performance benefits by reducing parameter counts; notably, they can reduce the memory required to store a model and can reduce communication costs for distributed training. We emphasize that LB-NPAS does not reduce FLOPs, as the same layer operations are implemented using fewer parameters. Should fewer FLOPs also be desired, SSNs can be combined
with other techniques, such as pruning. Additionally, we note that our implementation has not been extensively optimized, and further performance improvements could likely be achieved with additional engineering.
G.1 COMMUNICATION COSTS FOR DISTRIBUTED TRAINING
Communication for distributed data-parallel training is typically bandwidth-bound, and thus employs bandwidth-optimal allreduces, which are linear in message length (Chan et al., 2007). Thus, we expect communication time to be reduced by a factor proportional to the parameter savings achieved by NPAS, all else being equal. However, frameworks will typically execute allreduces layer-wise as soon as gradient buffers are ready to promote communication/computation overlap in backpropagation; reducing communication that is already fully overlapped is of little benefit. Performance benefits are thus sensitive to the model, implementation details, and the system being used for training.
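As a rough illustration of the expected savings, a bandwidth-optimal ring allreduce moves roughly 2(p-1)/p times the message size per worker, so the communicated volume scales with the parameter count. The sketch below is a back-of-envelope estimate written by us (not a measurement from our implementation); the parameter counts are those quoted for WRN-50-2 in this paper, and measured speedups are much smaller because communication is largely overlapped.

```python
def allreduce_bytes_per_worker(n_params: int, n_workers: int, bytes_per_param: int = 4) -> float:
    """Approximate bytes each worker communicates for one bandwidth-optimal allreduce
    of the full gradient (Chan et al., 2007): 2 * (p - 1) / p * message size."""
    return 2.0 * (n_workers - 1) / n_workers * n_params * bytes_per_param

# Example: WRN-50-2 gradients (~69M params) vs. an SSN using 10.5M params, on 64 workers.
full = allreduce_bytes_per_worker(69_000_000, 64)
ssn = allreduce_bytes_per_worker(10_500_000, 64)
print(f"communicated bytes reduced by roughly {full / ssn:.1f}x")
```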
For CNNs, we indeed observe minor performance improvements, as the number of parameters is typically small. When using 64 V100 GPUs for training WRN-50-2 on ImageNet, we see a 1.04× performance improvement in runtime per epoch when using SSNs with 10.5M parameters (15% of the original model). This is limited because most communication is overlapped. We also observe small performance improvements in some cases because we launch fewer allreduces, resulting in less demand for SMs and memory bandwidth on the GPU. These performance results are in line with prior work on communication compression for CNNs (e.g., Renggli et al., 2019).
For large transformers, however, we observe more significant performance improvements. The SSN-ALBERT-Large is about 1.4× faster using 128 GPUs than the corresponding BERT-Large model. This is in line with the original ALBERT work (Lan et al., 2020), which reported that training ALBERT-Large was 1.7× faster than BERT-Large when using 128 TPUs. Note that due to the differences in the systems for these results, they are not directly comparable.
We also reiterate that for applications where communication is more costly, such as federated learning (e.g., McMahan et al. (2017); Konečný et al. (2016)), our approach would be even more beneficial due to the decreased message length.
G.2 MEMORY SAVINGS
LB-NPAS and SSNs reduce the number of parameters, which consequently reduces the size of the gradients and optimizer state (e.g., momentum) by the same amount. It does not reduce the storage requirements for activations, but note there is much work on recomputation to address this (e.g., Chen et al., 2016; Jain et al., 2020). Thus, the memory savings from SSNs is independent of batch size. For SSN-ALBERT-Large, we use 18M parameters (5% of BERT-Large, which contains 334M parameters). Assuming FP32 is used to store data, we save about 5 GB of memory in this case (about 1/3 of the memory used). | 1. What is the focus and contribution of the paper on automating parameter allocation and weight generation?
2. What are the strengths of the proposed approach, particularly in terms of effectiveness and efficiency?
3. What are the weaknesses of the paper regarding sensitivity to hyperparameters and limited reduction of inference time?
4. Do you have any suggestions for improving the figure illustrating the proposed method? | Summary Of The Paper
Review | Summary Of The Paper
This work investigates how to automatically allocate parameters within a fixed budget to each layer and generate each layer's weights from its assigned parameters, which is a generalized version of parameter sharing. The proposed method is validated with leading performance and efficiency on multiple datasets across multiple tasks.
Review
Strong points:
This work is well-motivated. Parameter sharing is one of the most promising directions for reducing training and inference costs, but it has not been addressed by the AutoML community. So the aim of the work is timely.
The proposed method is effective and efficient. Also, the authors validate the proposed method across not only multiple datasets but also multiple tasks. Moreover, the authors show that other techniques to reduce the inference cost are orthogonally applicable.
Weak points:
The proposed method contains many hyperparameters to be tuned, such as the window size, embedding dimension, and the number of parameter groups. How sensitive is the performance of the proposed method to them? Also, do all tasks and datasets share the same set of hyperparameters?
The proposed method (LB-SSN) seems to reduce only the number of parameters. However, from a practitioner's point of view, reducing inference time may be more critical. Are other pruning methods also applicable to the network configuration obtained by LB-SSN? It would be great to add the result to Table 4.
Detailed comments:
It is not easy to understand the proposed method with Figure 2 at a glance. Simplifying Figure 2 or replacing it with an abstract algorithmic description may be better.
ICLR | Title
Neural Parameter Allocation Search
Abstract
Training neural networks requires increasing amounts of memory. Parameter sharing can reduce memory and communication costs, but existing methods assume networks have many identical layers and utilize hand-crafted sharing strategies that fail to generalize. We introduce Neural Parameter Allocation Search (NPAS), a novel task where the goal is to train a neural network given an arbitrary, fixed parameter budget. NPAS covers both low-budget regimes, which produce compact networks, as well as a novel high-budget regime, where additional capacity can be added to boost performance without increasing inference FLOPs. To address NPAS, we introduce Shapeshifter Networks (SSNs), which automatically learn where and how to share parameters in a network to support any parameter budget without requiring any changes to the architecture or loss function. NPAS and SSNs provide a complete framework for addressing generalized parameter sharing, and can also be combined with prior work for additional performance gains. We demonstrate the effectiveness of our approach using nine network architectures across four diverse tasks, including ImageNet classification and transformers.
1 INTRODUCTION
Training neural networks requires ever more computational resources, with GPU memory being a significant limitation (Rajbhandari et al., 2021). Methods such as checkpointing (e.g., Chen et al., 2016; Gomez et al., 2017; Jain et al., 2020) and out-of-core algorithms (e.g., Ren et al., 2021) have been developed to reduce memory from activations and improve training efficiency. Yet even with such techniques, Rajbhandari et al. (2021) find that model parameters require significantly greater memory bandwidth than activations during training, indicating parameters are a key limit on future growth. One solution is cross-layer parameter sharing, which reduces the memory needed to store parameters and can also reduce the cost of communicating model updates in distributed training (Lan et al., 2020; Jaegle et al., 2021) and federated learning (Konečný et al., 2016; McMahan et al., 2017), as the model is smaller, and can help avoid overfitting (Jaegle et al., 2021). However, prior work in parameter sharing (e.g., Dehghani et al., 2019; Savarese & Maire, 2019; Lan et al., 2020; Jaegle et al., 2021) has two significant limitations. First, it relies on suboptimal hand-crafted techniques for deciding where and how sharing occurs. Second, it relies on models having many identical layers. This limits the network architectures these methods apply to (e.g., DenseNets (Huang et al., 2017) have few such layers), and their parameter savings is only proportional to the number of identical layers.
To move beyond these limits, we introduce Neural Parameter Allocation Search (NPAS), a novel task which generalizes existing parameter sharing approaches. In NPAS, the goal is to identify where and how to distribute parameters in a neural network to produce a high-performing model using an arbitrary, fixed parameter budget and no architectural assumptions. Searching for good sharing strategies is challenging in many neural networks due to different layers requiring different numbers of parameters or weight dimensionalities, multiple layer types (e.g., convolutional, fully-connected, recurrent), and/or multiple modalities (e.g., text and images). Hand-crafted sharing approaches, as in prior work, can be seen as one implementation of NPAS, but they can be complicated to create for complex networks and have no guarantee that the sharing strategy is good. Trying all possible permutations of sharing across layers is computationally infeasible even for small networks. To our knowledge, we are the first to consider automatically searching for good parameter sharing strategies.
*indicates equal contribution
By supporting arbitrary parameter budgets, NPAS explores two novel regimes. First, while prior work considered using sharing to reduce the number of parameters (which we refer to as low-budget NPAS, LB-NPAS), we can also increase the number of parameters beyond what an architecture typically uses (high-budget NPAS, HB-NPAS). HB-NPAS can be thought of as adding capacity to the network in order to improve its performance without changing its architecture (e.g., without increasing the number of channels that would also increase computational time). Second, we consider cases where there are fewer parameters available to a layer than needed to implement the layer’s operations. For such low-budget cases, we investigate parameter upsampling methods to generate the layer’s weights.
A vast array of other techniques, including pruning (Hoefler et al., 2021), quantization (Gholami et al., 2021), knowledge distillation (Gou et al., 2021), and low-rank approximations (e.g., Wu, 2019; Phan et al., 2020) are used to reduce memory and/or FLOP requirements for a model. However, such methods typically only apply at test/inference time, and actually are more expensive to train due to requiring a fully-trained large network, in contrast to NPAS. Nevertheless, these are also orthogonal to NPAS and can be applied jointly. Indeed, we show that NPAS can be combined with pruning or distillation to produce improved networks. Figure 1 compares NPAS to closely related tasks.
To implement NPAS, we propose Shapeshifter Networks (SSNs), which can morph a given parameter budget to fit any architecture by learning where and how to share parameters. SSNs begin by learning which layers can effectively share parameters using a short pretraining step, where all layers are generated from a single shared set of parameters. Layers that use parameters in a similar way are then good candidates for sharing during the main training step. When training, SSNs generate weights for each layer by down- or upsampling the associated parameters as needed.
We demonstrate SSN's effectiveness in high- and low-budget NPAS on a variety of networks, including vision, text, and vision-language tasks. E.g., an LB-NPAS SSN implements a WRN-50-2 (Zagoruyko & Komodakis, 2016) using 19M parameters (69M in the original) and achieves an Error@5 on ImageNet (Deng et al., 2009) 3% lower than a WRN with the same budget. Similarly, we achieve a 1% boost on SQuAD v2.0 (Rajpurkar et al., 2018) with 18M parameters (334M in the original) over ALBERT (Lan et al., 2020), prior work for parameter sharing in Transformers (Vaswani et al., 2017). For HB-NPAS, we achieve a 1–1.5% improvement in Error@1 on CIFAR (Krizhevsky, 2009) by adding capacity to a traditional network. In summary, our key contributions are:
• We introduce Neural Parameter Allocation Search (NPAS), a novel task in which the goal is to implement a given network architecture using any parameter budget.
• To solve NPAS, we propose Shapeshifter Networks (SSNs), which automate parameter sharing. To our knowledge, SSNs are the first approach to automatically learn where and how to share parameters and to share parameters between layers of different sizes or types.
• We benchmark SSNs for LB- and HB-NPAS and show they create high-performing networks when either using few parameters or adding network capacity.
• We also show that SSNs can be combined with knowledge distillation and parameter pruning to boost performance over such methods alone.
2 NEURAL PARAMETER ALLOCATION SEARCH (NPAS)
In NPAS, the goal is to implement a neural network given a fixed parameter budget. More formally:
Neural Parameter Allocation Search (NPAS): Given a neural network architecture with layers $\ell_1, \dots, \ell_L$, which each require weights $w_1, \dots, w_L$, and a fixed parameter budget $\theta$, train a high-performing neural network using the given architecture and parameter budget.
Any general solution to NPAS (i.e., that works for arbitrary θ or network) must solve two subtasks:
1. Parameter mapping: Assign to each layer $\ell_i$ a subset of the available parameters.
2. Weight generation: Generate $\ell_i$'s weights $w_i$ from its assigned parameters, which may be any size.
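To make the decomposition concrete, a minimal Python sketch of the interface any NPAS method must expose might look as follows; all class and method names here are our own illustration, not part of a released API.

```python
import torch

class NPASMethod:
    """Hypothetical interface: any NPAS solution must provide these two pieces."""

    def __init__(self, theta: torch.Tensor, layer_shapes: list):
        self.theta = theta                # the fixed parameter budget, stored as a flat tensor
        self.layer_shapes = layer_shapes  # required weight shape for each layer l_1 .. l_L

    def parameter_mapping(self, layer_idx: int) -> torch.Tensor:
        """Subtask 1: return the subset (e.g., a slice) of theta assigned to this layer."""
        raise NotImplementedError

    def weight_generation(self, layer_idx: int) -> torch.Tensor:
        """Subtask 2: differentiably morph the assigned parameters into weights of
        shape layer_shapes[layer_idx]; may require down- or upsampling."""
        raise NotImplementedError
```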
Prior works, such as Savarese & Maire (2019) and Ha et al. (2016), are examples of weight generation methods, but only in limited cases; e.g., Savarese & Maire (2019) does not support there being fewer parameters than weights. To our knowledge, no prior work has automated parameter mapping, instead relying on hand-crafted heuristics that do not generalize to many architectures. Note that weight generation must be differentiable so gradients can be backpropagated to the underlying parameters.
NPAS naturally decomposes into two different regimes based on the parameter budget relative to what would be required by a traditional neural network (i.e., $|\theta|$ versus $\sum_{i=1}^{L} |w_i|$):
• Low-budget (LB-NPAS), with fewer parameters than standard networks ($|\theta| < \sum_{i=1}^{L} |w_i|$). This regime has traditionally been the goal of cross-layer parameter sharing, and reduces memory at training and test time, and consequently reduces communication for distributed training.
• High-budget (HB-NPAS), with more parameters than standard networks ($|\theta| > \sum_{i=1}^{L} |w_i|$). This is, to our knowledge, a novel regime, and can be thought of as adding capacity to a network without changing the underlying architecture by allowing a layer to access more parameters.
Note, in both cases, the FLOPs required of the network do not significantly increase. Thus, HB-NPAS can significantly reduce FLOP overhead compared to larger networks.
The closest work to ours are Shared WideResNets (SWRN) (Savarese & Maire, 2019), Hypernetworks (HN) (Ha et al., 2016), and Lookup-based Convolutional Networks (LCNN) (Bagherinezhad et al., 2017). Each method demonstrated improved low-budget performance, with LCNN and SWRN focused on improving sharing across layers and HN learning to directly generate parameters. However, all require adaptation for new networks and make architectural assumptions. E.g., LCNN was designed specifically for convolutional networks, while HN and SWRN’s benefits are proportional to the number of identical layers (see Figure 3). Thus, each method supports limited architectures and parameter budgets, making them unsuited for NPAS. LCNN and HN also both come with significant computational overhead. E.g., the CNN used by Ha et al. requires 26.7M FLOPs for a forward pass on a 32×32 image, but weight generation with HN requires an additional 108.5M FLOPs (135.2M total). In contrast, our SSNs require 0.8M extra FLOPs (27.5M total, 5× fewer than HN). Across networks we consider, SSN overhead for a single image is typically 0.5–2% of total FLOPs. Note both methods generate weights once per forward pass, amortizing overhead across a batch (e.g., SSN overhead is reduced to 0.008–0.03% for batch size 64). HB-NPAS is also reminiscent of mixture-of-experts (e.g., Shazeer et al., 2017); both increase capacity without significantly increasing FLOPs, but NPAS allows this overparameterization to be learned without architectural changes required by prior work.
NPAS can be thought of as searching for efficient and effective underlying representations for a neural network. Methods have been developed for other tasks that focus on directly searching for more effective architectures (as opposed to their underlying representations). These include neural architecture search (e.g., Bashivan et al., 2019; Dong & Yang, 2019; Tan et al., 2019; Xiong et al., 2019; Zoph & Le, 2017) and modular/self-assembling networks (e.g., Alet et al., 2019; Ferran Alet, 2018; Devin et al., 2017). While these tasks create computationally efficient architectures, they do not reduce the number of parameters in a network during training like NPAS (i.e., they cannot be used to train very large networks or for federated or distributed learning applications), and indeed are computationally expensive. NPAS methods can also provide additional flexibility to architecture search by enabling them to train larger and/or deeper architectures while keeping within a fixed parameter budget. In addition, the performance of any architectures these methods create could be improved by leveraging the added capacity from excess parameters when addressing HB-NPAS.
3 SHAPESHIFTER NETWORKS FOR NPAS
We now present Shapeshifter Networks (SSNs), a framework for addressing NPAS using generalized parameter sharing to implement a neural network with an arbitrary, fixed parameter budget. Figure 2 provides an overview and example of SSNs, and we detail each aspect below. An SSN consists of a provided network architecture with layers $\ell_1, \dots, \ell_L$ and a fixed budget of parameters $\theta$, which is partitioned into $P$ parameter groups (both hyperparameters) containing parameters $\theta_1, \dots, \theta_P$. Each layer is associated with a single parameter group, which will provide the parameters used to implement it. This mapping is learned in a preliminary training step by training a specialized SSN and clustering its layer representations (Section 3.2). To implement each layer, an SSN morphs the parameters in its associated group to generate the necessary weights; this uses downsampling (Section 3.1.1) when the group has more parameters than needed, or upsampling (Section 3.1.2) when the group has fewer parameters than needed. SSNs allow any number of parameters to "shapeshift" into a network without necessitating changes to the model's loss, architecture, or hyperparameters, and the process can be applied automatically. Finally, we note that SSNs are simply one approach to NPAS. Appendices B-D contain ablation studies and discussion of variants we found to be less successful.
3.1 WEIGHT GENERATION
Weight generation implements a layer $\ell_i$, which requires weights $w_i$, using the fixed set of parameters in its associated parameter group $\theta_j$. (We assume the mapping between layers and parameter groups has already been established; see Section 3.2.) There are three cases to handle:
1. $|w_i| = |\theta_j|$ (exactly enough parameters): The parameters are used as-is.
2. $|w_i| < |\theta_j|$ (excess parameters): We perform parameter downsampling (Section 3.1.1).
3. $|w_i| > |\theta_j|$ (insufficient parameters): We perform parameter upsampling (Section 3.1.2).
We emphasize that, depending on how layers are mapped to parameter groups, both down- and upsampling may be required in an LB- or HB-NPAS model.
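A minimal sketch of this case analysis is given below. This is our own illustrative Python code, not the authors' released implementation; `downsample` and `upsample` stand in for the methods of Sections 3.1.1 and 3.1.2.

```python
import math
import torch

def generate_weights(group_params: torch.Tensor, weight_shape: tuple,
                     downsample, upsample) -> torch.Tensor:
    """Morph one parameter group into the weights of a single layer."""
    n_needed = math.prod(weight_shape)   # |w_i|
    n_avail = group_params.numel()       # |theta_j|
    if n_avail == n_needed:              # case 1: exactly enough parameters
        flat = group_params
    elif n_avail > n_needed:             # case 2: excess -> template-based downsampling
        flat = downsample(group_params, n_needed)
    else:                                # case 3: insufficient -> parameter upsampling
        flat = upsample(group_params, n_needed)
    return flat.reshape(weight_shape)
```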
3.1.1 PARAMETER DOWNSAMPLING
When a parameter group $\theta_j$ provides more parameters than needed to implement a layer $\ell_i$, we perform template-based downsampling to generate $w_i$. To do this, we first split $\theta_j$ into up to $K$ (a hyperparameter) templates $T_i^1, \dots, T_i^K$, where each template $T_i^k$ is the same dimension as $w_i$. If $\theta_j$ does not evenly divide into templates, we ignore excess parameters. To avoid uneven sharing of parameters between layers, the templates for each layer are constructed from $\theta_j$ in a round-robin fashion. These templates are then combined to produce $w_i$; if only one template can be produced we instead use it directly. We present two different methods of learning to combine templates. To simplify presentation, we will assume there are exactly $K$ templates used.
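The template construction can be sketched as below. This is a hedged reading on our part: we treat the parameter group as a flat vector and rotate the starting chunk per layer to approximate the round-robin scheme; the released code may order chunks differently.

```python
import torch

def make_templates(group_params: torch.Tensor, n_weights: int, K: int,
                   start_chunk: int = 0) -> list:
    """Split a flat parameter group into up to K templates of n_weights entries each.

    start_chunk rotates where a given layer begins taking chunks, so parameters are
    spread evenly across the layers sharing this group. Excess parameters that do not
    fill a whole template are ignored.
    """
    n_chunks = group_params.numel() // n_weights      # how many full templates are available
    n_templates = min(K, n_chunks)
    templates = []
    for t in range(n_templates):
        c = (start_chunk + t) % n_chunks              # round-robin over available chunks
        templates.append(group_params[c * n_weights:(c + 1) * n_weights])
    return templates
```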
WAvg (Savarese & Maire, 2019) This learns a vector $\alpha_i \in \mathbb{R}^K$ which is used to produce a weighted average of the templates: $w_i = \sum_{k=1}^{K} \alpha_i^k T_i^k$. The $\alpha_i$ are initialized orthogonally to the $\alpha$s of all other layers in the same parameter group. While efficient, this only implicitly learns similarities between layers. Empirically, we find that different layers often converge to similar $\alpha$s, limiting sharing.
Emb To address this, we can instead more directly learn a representation of the layer using a layer embedding. We use a learnable vector $\phi_i \in \mathbb{R}^E$, where $E$ is the size of the layer representation; we use $E = 24$ throughout, as we found it to work well. A linear layer, which is shared between all layers in the parameter group and parameterized by $W_j \in \mathbb{R}^{K \times E}$ and $b_j \in \mathbb{R}^K$, is then used to construct an $\alpha_i$ for the layer, which is used as in WAvg. That is, $\alpha_i = W_j \phi_i + b_j$ and $w_i = \sum_{k=1}^{K} \alpha_i^k T_i^k$. We considered more complex methods (e.g., MLPs, nonlinearities), but they did not improve performance.
While both methods require additional parameters, this is quite small in practice. WAvg requires $K$ additional parameters per layer. Emb requires $E = 24$ additional parameters per layer and $KE + K = 24K + K$ parameters per parameter group.
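A minimal PyTorch sketch of the two combiners follows; module and variable names are our own illustration, not taken from the authors' code, and initialization details are simplified.

```python
import torch
import torch.nn as nn

class WAvgCombiner(nn.Module):
    """WAvg: weighted average of K templates with a per-layer alpha vector."""
    def __init__(self, K: int):
        super().__init__()
        # the paper initializes alpha_i orthogonally to other layers' alphas;
        # random init is used here only to keep the sketch short
        self.alpha = nn.Parameter(torch.randn(K))

    def forward(self, templates: torch.Tensor) -> torch.Tensor:
        # templates: (K, n_weights) stacked templates for this layer
        return torch.einsum('k,kn->n', self.alpha, templates)   # w_i = sum_k alpha_i^k T_i^k

class EmbCombiner(nn.Module):
    """Emb: per-layer embedding phi_i mapped to alpha_i by a linear layer shared within the group."""
    def __init__(self, shared_linear: nn.Linear, E: int = 24):
        super().__init__()
        self.phi = nn.Parameter(torch.randn(E))   # learnable layer representation phi_i
        self.to_alpha = shared_linear             # W_j, b_j: one nn.Linear(E, K) per parameter group

    def forward(self, templates: torch.Tensor) -> torch.Tensor:
        alpha = self.to_alpha(self.phi)                          # alpha_i = W_j phi_i + b_j
        return torch.einsum('k,kn->n', alpha, templates)
```

Here `shared_linear` would be created once per parameter group (e.g., `nn.Linear(24, K)`) and passed to every layer assigned to that group.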
3.1.2 PARAMETER UPSAMPLING
If instead a parameter group $\theta_j$ provides fewer parameters than needed to implement a layer $\ell_i$, we upsample $\theta_j$ to be the same size as $w_i$. As a layer will use all of the parameters in $\theta_j$, we do not use templates. We consider two methods for upsampling below.
Inter As a naïve baseline, we use bilinear interpolation to directly upsample $\theta_j$. However, this could alter the patterns captured by parameters, as it effectively stretches the receptive field. In practice, we found fully-connected and recurrent layers could compensate for this warping, but it degraded convolutional layers compared to simpler approaches such as tiling $\theta_j$.
Mask To address this, and avoid redundancies created by directly repeating parameters, we propose instead to use a learned mask to modify repeated parameters. For this, we first use $n = \lceil |w_i| / |\theta_j| \rceil$ tiles of $\theta_j$ to be the same size as $w_i$ (discarding excess in the last tile). We then apply a separate learned mask to each tile after the first (i.e., there are $n - 1$ masks). All masks are a fixed "window" size, which we take to be 9 by default (to match the size of commonly-used $3 \times 3$ kernels in CNNs), and are shared within each parameter group. To apply, masks are multiplied element-wise over sequential windows of their respective tile. While the number of additional parameters depends on the amount of upsampling required, as the masks are small, this is negligible.
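A sketch of Mask upsampling is given below; it is our own simplified illustration, and in particular the handling of the final partial window within each tile is our assumption.

```python
import torch
import torch.nn as nn

class MaskUpsampler(nn.Module):
    """Tile a small parameter group up to the needed size and modulate every repeated
    tile with a small learned mask (window size 9 by default, shared within the group)."""
    def __init__(self, n_params: int, n_weights: int, window: int = 9):
        super().__init__()
        self.n_tiles = -(-n_weights // n_params)     # ceil(|w_i| / |theta_j|)
        self.n_weights = n_weights
        # one learned mask per tile after the first; each mask holds one window of values
        self.masks = nn.Parameter(torch.ones(max(self.n_tiles - 1, 1), window))

    def forward(self, group_params: torch.Tensor) -> torch.Tensor:
        n_params = group_params.numel()
        window = self.masks.shape[1]
        tiles = [group_params]                        # first tile is used unmodified
        for t in range(1, self.n_tiles):
            # repeat the mask window across the tile, then modulate element-wise
            reps = -(-n_params // window)
            mask = self.masks[t - 1].repeat(reps)[:n_params]
            tiles.append(group_params * mask)
        return torch.cat(tiles)[: self.n_weights]     # discard excess in the last tile
```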
3.2 MAPPING LAYERS TO PARAMETER GROUPS
We now discuss how SSNs can automatically learn to assign layers to parameter groups in such a way that parameters can be efficiently shared. This is in contrast to prior work on parameter sharing (e.g., Ha et al., 2016; Savarese & Maire, 2019; Jaegle et al., 2021), which required layers to be manually assigned to parameter groups. Finding an optimal mapping of layers to parameter groups is challenging, and a brute-force approach is computationally infeasible. We rely instead on SSNs learning a representation for each layer as part of the template-based parameter downsampling process, and then use this representation to identify similar layers which can effectively share parameters.
To do this, we perform a short preliminary training step in which we train a small (i.e., low parameter budget) SSN version of the model using a single parameter group and a modified means of generating templates for parameter downsampling. Specifically, for a layer $\ell_i$, we split $\theta$ into $K'$ evenly-sized templates $T_i^1, \dots, T_i^{K'}$. Since we wish to use downsampling-based weight generation, each $T_i^{k'}$ is then resized with bilinear interpolation to be the same size as $w_i$. Next, we train the SSN as usual, using WAvg or Emb downsampling with the modified templates for weight generation (there is no upsampling). By using a small parameter budget and template-based weight generation where each template comes from the same underlying parameters, we encourage significant sharing between layers so we can measure the effectiveness of sharing. We found that a budget equal to the number of weights of the largest single layer in the network works well. Further, this preliminary training step is short, and requires only 10–15% of the typical network training time.
Finally, we construct the parameter groups by clustering the learned layer representations into $P$ groups. As the layer representation, we take the $\alpha_i$ or $\phi_i$ learned for each layer by WAvg or Emb downsampling (resp.). We then use k-means clustering to group these representations into $P$ groups, which become the parameter groups used by the full SSN.
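The grouping step can be sketched as follows; this is a short illustration using scikit-learn's k-means, and the function name and exact preprocessing are our own assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def map_layers_to_groups(layer_reprs: list, P: int, seed: int = 0) -> list:
    """Cluster per-layer representations (alpha_i from WAvg or phi_i from Emb, taken
    from the preliminary single-group SSN) into P parameter groups.

    Returns group_of_layer[i] in {0, ..., P-1} for each layer l_i.
    """
    X = np.stack([np.asarray(r, dtype=np.float32) for r in layer_reprs])
    km = KMeans(n_clusters=P, n_init=10, random_state=seed).fit(X)
    return km.labels_.tolist()
```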
4 EXPERIMENTS
Our experiments include a wide variety of tasks and networks in order to demonstrate the broad applicability of NPAS and SSNs. We adapt code and data splits made available by the authors and report the average of five runs for all comparisons except ImageNet and ALBERT, which average three runs. A more detailed discussion on SSN hyperparameter settings can be found in Appendices B-D. In our paper we primarily evaluate methods based on task performance, but we demonstrate that SSNs reduce training time and memory in distributed learning settings in Appendix G.
Compared Tasks. We briefly describe each task, datasets, and evaluation metrics. For each model, we use the authors’ implementation and hyperparameters, unless noted (more details in Appendix A).
Image Classification. For image classification the goal is to recognize if an object is present in an image. This is evaluated using Error@k, i.e., the portion of times that the correct category does not appear in the top k most likely objects. We evaluate SSNs on CIFAR-10 and CIFAR100 (Krizhevsky, 2009), which are composed of 60K images of 10 and 100 categories, respectively, and ImageNet (Deng et al., 2009), which is composed of 1.2M images containing 1,000 categories. We report Error@1 on CIFAR and Error@5 for ImageNet.
Image-Sentence Retrieval. In image-sentence retrieval the goal is to match across modalities (sentences and images). This task is evaluated using Recall@K={1, 5, 10} for both cross-modal directions (six numbers), which we average for simplicity. We benchmark on Flickr30K (Young et al., 2014), which contains 30K/1K/1K images for training/testing/validation, and COCO (Lin et al., 2014), which contains 123K/1K/1K images for training/testing/validation. For both datasets each image has about five descriptive captions. We evaluate SSNs using EmbNet (Wang et al., 2016) and ADAPT-T2I (Wehrmann et al., 2020). Note that ADAPT-T2I has identical parallel layers (i.e., they need different outputs despite having the same input), which makes sharing parameters challenging.
Phrase Grounding. Given a phrase the task is to find its associated image region. Performance is measured by how often the predicted box for a phrase has at least 0.5 intersection over union with its ground truth box. We evaluate on Flickr30K Entities (Plummer et al., 2017) which augments Flickr30K with 276K bounding boxes for phrases in image captions, and ReferIt (Kazemzadeh et al., 2014), which contains 20K images that are evenly split between training/testing and 120K region descriptions. We evaluate our SSNs with SimNet (Wang et al., 2018) using the implementation from Plummer et al. (2020) that reports state-of-the-art results on this task.
Question Answering. For this task the goal is to answer a question about a textual passage. We use SQuAD v1.1 (Rajpurkar et al., 2016), which has 100K+ question/answer pairs on 500+ articles, and SQuAD v2.0 (Rajpurkar et al., 2018), which adds 50K unanswerable questions. We report F1 and EM scores on the development split. We compare our SSNs with ALBERT (Lan et al., 2020), a recent transformer architecture that incorporates extensive, manually-designed parameter sharing.
4.1 RESULTS
We begin our evaluation in low-budget (LB-NPAS) settings. Figure 3 reports results on image classification, including WRNs (Zagoruyko & Komodakis, 2016), DenseNets (Huang et al., 2017), and EfficientNets (Tan & Le, 2019); Table 1 contains results on image-sentence retrieval and phrase grounding. For each task and architecture we compare SSNs to same parameter-sized networks without sharing. In image classification, we also report results for SWRN (Savarese & Maire, 2019) sharing; but note it cannot train a WRN-28-10 or WRN-50-2 with fewer than 12M or 40M parameters, resp. We show that SSNs can create high-performing models with fewer parameters than SWRN is capable of, and actually outperform it using 25% and 60% fewer parameters on C-100 and ImageNet, resp. Table 1 demonstrates that these benefits generalize to vision-language tasks. In Table 2 we also compare SSNs with ALBERT (Lan et al., 2020), which applies manually-designed parameter sharing to BERT (Devlin et al., 2019), and find that SSN’s learned parameter sharing outperforms ALBERT. This demonstrates that SSNs can implement large networks with lower memory requirements than is possible with current methods by effectively sharing parameters.
We discuss the runtime and memory performance implications of SSNs extensively in Appendix G. In short, by reducing parameter counts, SSNs reduce communication costs and memory. For example,
our SSN-ALBERT-Large trains about 1.4× faster using 128 GPUs than BERT-Large (in line with results for ALBERT), and reduces memory requirements by about 5 GB (1/3 of total).
As mentioned before, knowledge distillation and parameter pruning can help create more efficient models at test time, although they cannot reduce memory requirements during training like SSNs. Tables 3 and 4 show our approach can be used to accomplish a similar goal to these methods. Comparing our LB-NPAS results in Table 4 with the lowest parameter setting of HRank, we report a 1.5% gain over pruning methods even when using less than half the parameters. We note that one can think of our SSNs in the high-budget setting (HB-NPAS) as mixing together a set of random initializations of a network by learning to combine the different templates. This setting's benefit is illustrated in Table 3 and Table 4, where our HB-NPAS models report a 1-1.5% gain over training a traditional network. As a reminder, in this setting we precompute the weights of each layer once training is complete, so they require no additional overhead at test time. That said, the best performance on both tasks comes from combining our SSNs with prior work.
4.2 ANALYSIS OF SHAPESHIFTER NETWORKS
In this section we present ablations of the primary components of our approach. A complete ablation study, including, but not limited to comparing the number of parameter groups (Section 3.2) and number of templates (K in Section 3.1.1) can be found in Appendices B-D.
Table 5 compares the strategies generating weights from templates described in Section 3.1.1 when using a single parameter group. In these experiments we set the number of parameters as the amount required to implement the largest layer in a network. For example, the ADAPT-T2I model requires 14M parameters, but its bidirectional GRU accounts for 10M of those parameters, so all SSN variants in this experiment allocate 10M parameters. Comparing to the baseline, which involves modifying the original model’s number and/or size of filters so they have the same number of parameters as our
Table 5: Parameter downsampling comparison (Section 3.1.1) using WRN-28-10 and WRN-50-2 for C-10/100 and ImageNet, resp. Baseline adjusts the number and/or size of filters rather than share parameters. See Appendix B for additional details.
Dataset  | % orig params | Reduced Baseline | SSN WAvg (ours) | SSN Emb (ours)
C-10     | 11.3%         | 4.22             | 4.00            | 3.84
C-100    | 11.3%         | 22.34            | 21.78           | 21.92
ImageNet | 27.5%         | 10.08            | 7.38            | 6.69
SSNs, we see that the variants of our SSNs perform better, especially on ImageNet where we reduce Error@5 by 3%. Generally we see a slight boost to performance using Emb over WAvg. Also note that prior work in parameter sharing, e.g., SWRN (Savarese & Maire, 2019), cannot be applied to the settings in Table 5, since these settings require parameter sharing between layers of different types performing different operations, or provide too few parameters. E.g., the WRN-28-10 results use just under 4M parameters, but, as shown in Figure 3(a), SWRN requires a minimum of 12M parameters.
In Table 6 we investigate one of the new challenges in this work: how to upsample parameters so a large layer operation can be implemented with relatively few parameters (Section 3.1.2). For example, our SSN-WRN-28-10 results use about 0.5M parameters, but the largest layer defined in the network requires just under 4M weights. We find that our simple learned Mask upsampling method performs well in most settings, especially when using convolutional networks. For example, on CIFAR-100 it improves Error@1 by 2.5% over the baseline, and 1.5% over using bilinear interpolation (Inter). While more complex methods of upsampling may seem like they would improve performance (e.g., using an MLP, or learning combinations of basis filters), we found such approaches had two significant drawbacks. First, they can slow down training time significantly due to their complexity, so only a limited number of settings are viable. Second, in our experiments we found many had numerical stability issues for some datasets/tasks. We believe this may be due to trying to learn the local parameter patterns and the weights to combine them concurrently. Related work suggests this can be resolved by leveraging prior knowledge about what these local parameter patterns should represent (Denil et al., 2013), i.e., by defining and freezing what they represent. However, prior knowledge is not available in the general case, and data-driven methods of training these local filter patterns often rely on pretraining steps of the fully-parameterized network (e.g., Denil et al., 2013). Thus, they are not suited for NPAS since they cannot train large networks with low memory requirements, but addressing this issue would be a good direction for future work.
Table 7 compares approaches for mapping layers to parameter groups using the same number of parameters as the original model. We see a small but largely consistent improvement for SSNs over using a traditional (baseline) network. Notably, our automatically learned mappings (auto) perform on par with manual groups. This demonstrates that our automated approach can be used without loss in performance, while being applicable to any architecture, making it more flexible than hand-crafted methods. This flexibility does come with a computational cost, as our preliminary step that learns to map layers to parameter groups resulted in a 10-15% longer training time for equivalent epochs. That said, parameter sharing methods have demonstrated an ability to converge
faster (Bagherinezhad et al., 2017; Lan et al., 2020). Thus, exploring more efficient training strategies using NPAS methods like SSNs will be a good direction for future work.
Figure 4(a) compares the 3×3 kernel filters at the early, middle, and final convolutional layers of a WRN-16-2 for a traditional neural network (no parameter sharing) and our SSNs where all layers belong to the same parameter group. We observe a correspondence between filters in the early layers, but this diverges in deeper layers. This suggests that sharing becomes more difficult in the final layers, which is consistent with two observations we made about Figure 4(b), which visualizes the parameter groups used for SSN-WRN-28-10 when creating 14 parameter group mappings. First, we found the learned parameter group mappings tended to share parameters between layers early in the network, opting for later layers to share no parameters. Second, the early layers tended to be grouped into 3–4 parameter groups across different runs, with the remaining 10–11 parameter groups each containing a single layer. Note that these observations were consistent across different random initializations.
5 CONCLUSION
We propose NPAS, a novel task in which the goal is to implement a given, arbitrary network architecture using a fixed parameter budget. This involves identifying how to assign parameters to layers and implementing layers with their assigned parameters (which may be of any size). To address NPAS, we introduce SSNs, which automatically learn how to share parameters. SSNs benefit from parameter sharing in the low-budget regime (reducing memory and communication requirements when training) and enable a novel high-budget regime that can improve model performance. We show that SSNs improve results on ImageNet by 3% in Error@5 over a same-sized network without parameter sharing. Surprisingly, we also find that parameters can be shared among very different layers. Further, we show that SSNs can be combined with knowledge distillation and parameter pruning to achieve state-of-the-art results that also reduce FLOPs at test time. One could think of SSNs as spreading the same number of parameters across more layers, increasing effective depth, which benefits generalization (Telgarsky, 2016), although this requires further exploration.
Acknowledgements. This work is funded in part by grants from the National Science Foundation and DARPA. This project received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 program (grant agreement MAELSTROM, No. 955513). N.D. is supported by the ETH Postdoctoral Fellowship. We thank the Livermore Computing facility for the use of their GPUs for some experiments.
ETHICS STATEMENT
Neural Parameter Allocation Search and Shapeshifter Networks are broad and general-purpose, and therefore applicable to many downstream tasks. It is thus challenging to identify specific cases of benefit or harm, but we note that reducing memory requirements can have broad implications. In particular, this could allow potentially harmful applications to be cheaply and widely deployed (e.g., facial recognition, surveillance) where it would otherwise be technically or economically infeasible.
REPRODUCIBILITY STATEMENT
NPAS is a task which can be implemented in many different ways; we define it formally in Section 2. SSNs, our proposed solution to NPAS, are presented in detail in Section 3, and Figure 2 provides an illustration and example of the weight generation methods. Appendix A also provides a thorough discussion of the implementation details. To further aid reproducibility, we publicly release our SSN code at https://github.com/BryanPlummer/SSN.
A DESCRIPTION OF COMPARED TASKS
A.1 IMAGE-SENTENCE RETRIEVAL
In bidirectional image-sentence retrieval when a model is provided with an image the goal is to retrieve a relevant sentence and vice versa. This task is evaluated using Recall@K={1, 5, 10} for both directions (resulting in 6 numbers), which we average for simplicity. We benchmark methods on two common datasets: Flickr30K (Young et al., 2014) which contains 30K/1K/1K images for training/testing/validation, each with five descriptive captions, and MSCOCO (Lin et al., 2014), which contains 123K/1K/1K images for training/testing/validation, each image having roughly five descriptive captions.
EmbNet (Wang et al., 2016). This network learns to embed visual features for each image, computed using a 152-layer Deep Residual Network (ResNet) (He et al., 2016) that has been trained on ImageNet (Deng et al., 2009), and the average of MT GrOVLE (Burns et al., 2019) language features representing each word, into a shared semantic space using a triplet loss. The network consists of two branches, one for each modality, and each branch contains two fully connected layers (for a total of four layers). We adapted the implementation of Burns et al. (2019) [1], and left all hyperparameters at the default settings. Specifically, we train using a batch size of 500 with an initial learning rate of 1e-4, which we decay exponentially with a gamma of 0.794, and use weight decay of 0.001. The model is trained until it has not improved performance on the validation set over the last 10 epochs. This architecture provides a simple baseline for parameter sharing with our Shapeshifter Networks (SSNs), where layers operate on two different modalities.
ADAPT-T2I (Wehrmann et al., 2020). In this approach word embeddings are aggregated using a bidirectional GRU (Cho et al., 2014), and its hidden state at each timestep is averaged to obtain a full-sentence representation. Images are represented using 36 bottom-up image region features (Anderson et al., 2018) that are passed through a fully connected layer. Then, each sentence calculates scaling and shifting parameters for the image regions using a pair of fully connected layers that both take the full-sentence representation as input. The image regions are then averaged, and a similarity score is computed between the sentence-adapted image features and the full-sentence representation. Thus, there are four layers total (3 fully connected, 1 GRU) that can share parameters, including the two parallel fully connected layers (i.e., they both take the full sentence features as input, but are expected to have different outputs). We adapted the authors' implementation and kept the default hyperparameters [2]. Specifically, we use a latent dimension of 1024 for our features and train with a batch size of 105 using a learning rate of 0.001. This method was selected since it achieves high performance and also includes fully connected and recurrent layers, as well as having a set of parallel layers that make effectively performing cross-layer parameter sharing more challenging.
A.2 PHRASE GROUNDING
Given a phrase the goal of a phrase grounding model is to identify the image region described by the phrase. Success is achieved if the predicted box has at least 0.5 intersection over union with the ground truth box. Performance is measured using the percent of the time a phrase is accurately localized. We evaluate on two datasets: Flickr30K Entities (Plummer et al., 2017) which augments the Flickr30K dataset with 276K bounding boxes associated with phrases in the descriptive captions, and ReferIt (Kazemzadeh et al., 2014) which contains 20K images that are evenly split between training/testing and 120K region descriptions.
SimNet (Wang et al., 2018). This network contains three branches that each operate on different types of features. One branch passes image region features, computed with a 101-layer ResNet that has been fine-tuned for phrase grounding, through two fully connected layers. A second branch passes MT GrOVLE features through two fully connected layers. Then, a joint representation is computed for all region-phrase pairs using an elementwise product. Finally, the third branch passes these joint features through three fully connected layers (7 total), where the final layer acts as a classifier indicating the likelihood that the phrase is present in the image region. We adapt the code
[1] https://github.com/BryanPlummer/Two_branch_network
[2] https://github.com/jwehrmann/retrieval.pytorch
from Plummer et al. (2020) [3] and keep all hyperparameters at their default settings. Specifically, we use a pretrained Faster R-CNN model (Ren et al., 2015) fine-tuned for phrase grounding by Plummer et al. (2020) on each dataset to extract region features. Then we encode each phrase by averaging MT GrOVLE features (Burns et al., 2019) and provide the image and phrase features as input to our model. We train our model using a learning rate of 5e-5 and a final embedding dimension of 256 until it no longer improves on the validation set for 5 epochs (typically resulting in training times of 15-20 epochs). Performing experiments on this model enables us to test how well our SSNs generalize to another task and how well they can adapt to sharing parameters with layers operating on three types of features (just vision, just language, and a joint representation).
A.3 IMAGE CLASSIFICATION
For image classification the goal is to be able to recognize if an object is present in an image. Typically this task is evaluated using Error@K, or the portion of times that the correct category doesn’t appear in the top k most likely objects. We evaluate our Shapeshifter Networks on three datasets: CIFAR-10 and CIFAR-100 (Krizhevsky, 2009), which are comprised of 60K images of 10 and 100 categories, respectively, and ImageNet (Deng et al., 2009), which is comprised of 1.2M images containing 1,000 categories. We report Error@1 for both CIFAR datasets and Error@5 for ImageNet. In these appendices, we also report Error@1 for ImageNet.
Wide Residual Network (WRN) (Zagoruyko & Komodakis, 2016). WRN modified the traditional ResNets by increasing the width k of each layer while also decreasing the depth d, which they found improved performance. Different variants are identified using WRN-d-k. Following Savarese & Maire (2019), we evaluate our Shapeshifter Networks using WRN-28-10 for CIFAR and WRN-50-2 for ImageNet. We adapt the implementation of Savarese & Maire [4] and use cutout (DeVries & Taylor, 2017) for data augmentation. Specifically, on CIFAR we train our model using a batch size of 128 for 200 epochs with weight decay set at 5e-4 and an initial learning rate of 0.1, which we decay using a gamma of 0.2 at 60, 120, and 160 epochs. Unlike the vision-language models discussed earlier, these architectures include convolutional layers in addition to a fully connected layer used to implement a classifier, and also have many more layers than the shallow vision-language models.
DenseNet (Huang et al., 2017). Unlike traditional neural networks, where each layer in the network is computed in sequence, every layer in a DenseNet uses feature maps from every layer that came before it. We adapt PyTorch's official implementation [5], using the hyperparameters as set in Huang et al. (2017). Specifically, on CIFAR we train our model using a batch size of 96 for 300 epochs with weight decay set at 1e-4 and an initial learning rate of 0.1, which we decay using a gamma of 0.1 at 150 and 225 epochs. These networks provide insight into the effect depth has on learning SSNs, as we use a 190-layer DenseNet-BC configuration for CIFAR. However, due to their high computational cost we provide limited results testing only some settings.
EfficientNet (Tan & Le, 2019). EfficientNets are a class of model designed to balance depth, width, and input resolution in order to produce very parameter-efficient models. For ImageNet, we adapt an existing PyTorch implementation and its hyperparameters [6], which are derived from the official TensorFlow version. We use the EfficientNet-B0 architecture to illustrate the impact of SSNs on very parameter-efficient, state-of-the-art models. On CIFAR-100 we use an EfficientNet with Network Deconvolution (ND) (Ye et al., 2020), which improves accuracy with a similar number of training epochs. We use the authors' implementation [7], and train each model for 100 epochs (their best performing setting). Note that the best error we obtained across different configurations of their model (35.88) is better than the error reported in their paper (37.63), so despite the relatively low absolute performance, our results are comparable to theirs.
[3] https://github.com/BryanPlummer/phrase_detection
[4] https://github.com/lolemacs/soft-sharing
[5] https://pytorch.org/hub/pytorch_vision_densenet/
[6] https://rwightman.github.io/pytorch-image-models/
[7] https://github.com/yechengxi/deconvolution
A.4 QUESTION ANSWERING
In question answering, a model is given a question and an associated textual passage which may contain the answer, and the goal is to predict the span of text in the passage that contains the answer. We use two versions of the Stanford Question Answering Dataset (SQuAD): SQuAD v1.1 (Rajpurkar et al., 2016), which contains 100K+ question/answer pairs on 500+ Wikipedia articles, and SQuAD v2.0, which augments SQuAD v1.1 with 50K unanswerable questions designed adversarially to be similar to standard SQuAD questions. For both datasets, we report both the F1 score, which captures the precision and recall of the chosen text span, and the Exact Match (EM) score.
ALBERT (Lan et al., 2020) ALBERT is a version of the BERT (Devlin et al., 2019) transformer architecture that applies cross-layer parameter sharing. Specifically, the parameters for all components of a transformer layer are shared among all the transformer layers in the network. ALBERT also includes a factorized embedding to further reduce parameters. We follow the methodology of BERT and ALBERT for reporting results on SQuAD, and our baseline ALBERT scores closely match those reported in the original work. This illustrates the ability of NPAS and SSNs to develop better parameter sharing methods than manually-designed systems for extremely large models.
B EXTENDED RESULTS WITH ADDITIONAL BASELINES
Below we provide additional results with more baseline methods for the three components of our SSNs: weight generator (Section B.1), parameter upsampling (Section B.4), and mapping layers to parameter groups (Section B.3). We provide ablations on the number of parameter groups and templates used by our SSNs in Section C and Section D, respectively.
B.1 ADDITIONAL METHODS THAT GENERATE LAYER WEIGHTS FROM TEMPLATES
Parameter downsampling uses the selected templates $T_i^k$ for a layer $\ell_i$ to produce its weights $w_i$. In Section 3.1.1 of the paper we discuss two methods of learning a combination of the $T_i^k$ to generate $w_i$. Below in Section B.2 we provide two simple baseline methods that directly use the candidates. Table 8 compares the baselines to the methods in the main paper that learn weighted combinations of templates, where the learned methods typically perform better than the baselines.
B.2 DIRECT TEMPLATE COMBINATION
Here we describe the strategies we employ that require no parameters to be learned by the weight generator, i.e., they operate directly on the templates $T_i^k$.
Round Robin (RR) reuses parameters of each template set as few times as possible. The scheme simply returns the weights at index $k \bmod K$ in the (ordered) template set $T_i$ at the $k$-th query of a parameter group.
Candidate averaging (Avg) averages all candidates in $T_i$ to provide a naive baseline for using multiple candidates. A significant drawback of this approach is that, if $K$ is large, this can result in reusing parameters (across combiners) many times with no way to adapt to a specific layer, especially when the size of the parameter group is small.
B.3 ADDITIONAL PARAMETER MAPPING RESULTS
Table 9 compares approaches that map layers to parameter groups using the same number of parameters as the original model. We see a small, but largely consistent improvement over using a traditional (baseline) network. Notably, our learned grouping methods (WAvg, Emb) perform on par with, and sometimes better than, manual mappings. However, our approach can be applied to any architecture to create a selected number of parameter groups, making it more flexible than hand-crafted methods. For example, in Table 10, we see using two groups often helps to improve performance when using very few parameters, but it is not clear how to effectively create two groups by hand for many networks.
B.4 EXTENDED PARAMETER UPSAMPLING
In Table 10 we provide extended results comparing the parameter upsampling methods. We additionally compare with a further naïve baseline of simply repeating parameters until they are the appropriate size. We find that Mask upsampling is always competitive, and typically more so when two parameter groups are used.
B.5 COMPARISON WITH HYPERNETWORKS
In Table 11 we compare our SSNs on Wide ResNets to the same networks implemented using Hypernetworks (Ha et al., 2016) for CIFAR-10, using the results reported in their paper. We can see that, for the same parameter budget, SSNs outperform Hypernetworks.
C EFFECT OF THE NUMBER OF PARAMETER GROUPS P
A significant advantage of using learned mappings of layers to parameter groups, described in Section 3.2, is that our approach can support any number of parameter groups, unlike prior work that required manual grouping and/or heuristics to determine which layers shared parameters (e.g., Lan et al., 2020; Savarese & Maire, 2019). In this section we explore how the number of parameter groups
affects performance on the image classification task. We do not benchmark bidirectional retrieval and phrase grounding since networks addressing these tasks have few layers, so parameter groups are less important (as shown in Table 7).
Table 12 reports the performance of our SSNs when using different numbers P of parameter groups. We find that when training with few parameters (first line) small numbers of parameter groups work best, while when more parameters are available larger numbers of groups work better (second line). In fact, there is a significant drop in performance going from 4 to 8 groups when training with few parameters, as seen in the first line of Table 12. This is due to the fact that starting at 8 groups some parameter groups had too few weights to implement their layers, resulting in extensive parameter upsampling. This suggests that we may be able to further improve performance when there are few parameters by developing better methods of implementing layers when too few parameters are available.
D EFFECT OF THE NUMBER OF TEMPLATES K
Table 13 reports the results using different numbers of templates. We find that varying the number of templates only has a minor impact on performance most of the time. We note that more templates tend to lead to reduced variability between runs, making results more stable. As a reminder, however, the number of templates does not guarantee that each layer will have enough parameters to construct them. Thus, parameter groups only use this hyperparameter when many weights are available to them (i.e., they can form multiple templates for the layers they implement). This occurs for the phrase grounding and bidirectional retrieval results at the higher maximum numbers of templates.
E SCALING SSNS TO LARGER NETWORKS
Table 14 demonstrates the ability of our SSNs to significantly reduce the parameters required, and thus the memory required, to implement large Wide ResNets so they fall within specific bounds. For example, Table 14(b) shows larger and deeper configurations continue to improve performance even when the number of parameters remains largely constant. Comparing the first line of Table 14(a) and the last line of Table 14(c), we see that SSN-WRN-76-12 outperforms the fully-parameterized WRN-28-10 network by 0.6% on CIFAR-100 while using just over half the parameters, and comes within 0.5% of WRN-76-12 while only using 13.0% of its parameters. We do note that using an SSN does not reduce the number of floating point operations, so although our SSN-WRN-76-12 model uses fewer parameters than the WRN-28-10, it is still slower at both test and train time. However, our results help demonstrate that SSNs can be used to implement very large networks with lower memory
requirements by effectively sharing parameters. This enables us to train larger, better-performing networks than is possible with traditional neural networks on comparable computational resources.
F IMAGE CLASSIFICATION NUMBERS
We provide raw numbers for the results in Figure 3 in Table 15 (CIFAR-100) and Table 16 (ImageNet).
G PERFORMANCE IMPLICATIONS OF NPAS AND SSNS
Our SSNs can offer several performance benefits by reducing parameter counts; notably, they can reduce memory requirements storing a model and can reduce communication costs for distributed training. We emphasize that LB-NPAS does not reduce FLOPs, as the same layer operations are implemented using fewer parameters. Should fewer FLOPs also be desired, SSNs can be combined
with other techniques, such as pruning. Additionally, we note that our implementation has not been extensively optimized, and further performance improvements could likely be achieved with additional engineering.
G.1 COMMUNICATION COSTS FOR DISTRIBUTED TRAINING
Communication for distributed data-parallel training is typically bandwidth-bound, and thus employs bandwidth-optimal allreduces, which are linear in message length (Chan et al., 2007). Thus, we expect communication time to be reduced by a factor proportional to the parameter savings achieved by NPAS, all else being equal. However, frameworks will typically execute allreduces layer-wise as soon as gradient buffers are ready to promote communication/computation overlap in backpropagation; reducing communication that is already fully overlapped is of little benefit. Performance benefits are thus sensitive to the model, implementation details, and the system being used for training.
For CNNs, we indeed observe minor performance improvements, as the number of parameters is typically small. When using 64 V100 GPUs for training WRN-50-2 on ImageNet, we see a 1.04× performance improvement in runtime per epoch when using SSNs with 10.5M parameters (15% of the original model). This is limited because most communication is overlapped. We also observe small performance improvements in some cases because we launch fewer allreduces, resulting in less demand for SMs and memory bandwidth on the GPU. These performance results are in line with prior work on communication compression for CNNs (e.g., Renggli et al., 2019).
For large transformers, however, we observe more significant performance improvements. The SSN-ALBERT-Large is about 1.4× faster using 128 GPUs than the corresponding BERT-Large model. This is in line with the original ALBERT work (Lan et al., 2020), which reported that training ALBERT-Large was 1.7× faster than BERT-Large when using 128 TPUs. Note that due to the differences in the systems for these results, they are not directly comparable.
We would also reiterate that for some applications where communication is more costly, say, for federated learning applications (e.g. McMahan et al. (2017); Konečný et al. (2016)), our approach would be even more beneficial due to the decreased message length.
G.2 MEMORY SAVINGS
LB-NPAS and SSNs reduce the number of parameters, which consequentially reduces the size of the gradients and optimizer state (e.g., momentum) by the same amount. It does not reduce the storage requirements for activations, but note there is much work on recomputation to address this (e.g., Chen et al., 2016; Jain et al., 2020). Thus, the memory savings from SSNs is independent of batch size. For SSN-ALBERT-Large, we use 18M parameters (5% of BERT-Large, which contains 334M parameters). Assuming FP32 is used to store data, we save about 5 GB of memory in this case (about 1/3 of the memory used) | 1. What is the focus of the paper regarding parameter selection and sharing in neural networks?
2. What are the strengths and weaknesses of the proposed approach compared to other works like Slimmable networks and NPAS?
3. Do you have any concerns about the explanation of the method in Section 3, and how could it be improved?
4. Are there any typos or errors in the paper that should be addressed? | Summary Of The Paper
Review | Summary Of The Paper
The paper presents a method to automatically select parameters to share between layers. It proposes to use a shapeshifter network to either increase or decrease the number of parameters in the model. The parameters are mapped into parameter groups through a preliminary training step and k-means clustering of the layers. Layers in the same group share parameters. It generates weights by downsampling or upsampling depending on the layer's needs. The method is tested in Low Budget and High Budget regimes and on different tasks. It also shows that the method can be used together with distillation and pruning.
Review
Figure 1 shows a comparison with distillation and pruning, which seem to be a different class of methods; they are mentioned to be complementary to NPAS. It would be better to show a competitive analysis against prior work or similar approaches.
Slimmable networks is also a work that presents a network that changes its parameter size automatically.
NPAS adds training cost per epoch, but the parameter sharing enables faster convergence. Demonstrating the effective training time needed to reach the same accuracy would improve the paper.
The explanation flow in Section 3 seems non-intuitive, because the method uses Section 3.2 first and then Section 3.1. Adding pseudo-code or an algorithm would also improve the clarity of the method.
page 8, typo: covolutional |
ICLR | Title
Test-Time Adaptation to Distribution Shifts by Confidence Maximization and Input Transformation
Abstract
Deep neural networks often exhibit poor performance on data that is unlikely under the train-time data distribution, for instance data affected by corruptions. Previous works demonstrate that test-time adaptation to data shift, for instance using entropy minimization, effectively improves performance on such shifted distributions. This paper focuses on the fully test-time adaptation setting, where only unlabeled data from the target distribution is required. This allows adapting arbitrary pretrained networks. Specifically, we propose a novel loss that improves test-time adaptation by addressing both premature convergence and instability of entropy minimization. This is achieved by replacing the entropy by a non-saturating surrogate and adding a diversity regularizer based on batch-wise entropy maximization that prevents convergence to trivial collapsed solutions. Moreover, we propose to prepend an input transformation module to the network that can partially undo test-time distribution shifts. Surprisingly, this preprocessing can be learned solely using the fully test-time adaptation loss in an end-to-end fashion without any target domain labels or source domain data. We show that our approach outperforms previous work in improving the robustness of publicly available pretrained image classifiers to common corruptions on such challenging benchmarks as ImageNet-C.
1 INTRODUCTION
Deep neural networks achieve impressive performance on test data which has the same distribution as the training data. Nevertheless, they often exhibit a large performance drop on test (target) data that differs from the training (source) data; this effect is known as data shift (Quionero-Candela et al., 2009) and can be caused, for instance, by image corruptions. There exist different methods to improve the robustness of the model during training (Geirhos et al., 2019; Hendrycks et al., 2019; Tzeng et al., 2017). However, generalization to different data shifts is limited since it is infeasible to include sufficiently many augmentations during training to cover the excessively wide range of potential data shifts (Mintun et al., 2021a). Alternatively, in order to generalize to the data shift at hand, the model can be adapted during test-time. Unsupervised domain adaptation methods such as Vu et al. (2019) use both source and target data to improve the model performance during test-time. In general, source data might not be available during inference time, e.g., due to legal constraints (privacy or profit). Therefore we focus on the fully test-time adaptation setting (Wang et al., 2020): the model is adapted to the target data during test time given only the arbitrarily pretrained model parameters and unlabeled target data that share the same label space as the source data. We extend the work of Wang et al. (2020) by introducing a novel loss function, using a diversity regularizer, and prepending a parametrized input transformation module to the network. We show that our approach outperforms previous works and makes pretrained models robust against common corruptions on image classification benchmarks such as ImageNet-C (Hendrycks & Dietterich, 2019) and ImageNet-R (Hendrycks et al., 2020).
Sun et al. (2020) investigate test-time adaptation using a self-supervision task. Wang et al. (2020) and Liang et al. (2020) use the entropy minimization loss, which uses maximization of prediction confidence as a self-supervision signal during test-time adaptation. Wang et al. (2020) have shown that such a loss yields better adaptation than a proxy task (Sun et al., 2020). When using entropy minimization, however, high confidence predictions do not contribute to the loss significantly anymore and thus provide little self-supervision. This is a drawback since high-confidence samples provide the most
trustworthy self-supervision. We mitigate this by introducing two novel loss functions that ensure that gradients of samples with high confidence predictions do not vanish and that learning based on self-supervision from these samples continues. Our losses do not focus on minimizing entropy but on minimizing the negative log likelihood ratio between classes; the two variants differ in using either soft or hard pseudo-labels. In contrast to entropy minimization, the proposed loss functions provide non-saturating gradients, even for highly confident predictions. Figure 1 provides an illustration of the losses and the resulting gradients. Using these new loss functions, we are able to improve the network performance under data shifts in both online and offline adaptation settings.
In general, self-supervision by confidence maximization can lead to collapsed trivial solutions, which make the network predict only a single class or a small set of classes independent of the input. To overcome this issue, a diversity regularizer (Liang et al., 2020; Wu et al., 2020) that acts on a batch of samples can be used. It encourages the network to make diverse class predictions on different samples. We extend the regularizer by including a moving average, in order to include the history of the previous batches, and show that this stabilizes the adaptation of the network to unlabeled test samples. Furthermore, we also introduce a parametrized input transformation module, which we prepend to the network. The module is trained in a fully test-time adaptation manner using the proposed loss function, and without using source data or target labels. It aims to partially undo the data shift at hand and helps to further improve the performance on image classification benchmarks with corruptions.
Since our method does not change the training process, it allows using any pretrained model. This is beneficial because any well-performing pretrained network can be readily reused, e.g., a network trained on some proprietary data not available to the public. We show that our method significantly improves the performance of different pretrained models that are trained on clean ImageNet data.
In summary, our main contributions are as follows: we propose non-saturating losses based on the negative log likelihood ratio, such that gradients from high confidence predictions still contribute to test-time adaptation. We extend the diversity regularizer to a moving-average version that includes the history of previous batch samples to prevent the model from collapsing to trivial solutions. We also introduce an input transformation module, which partially undoes the data shift at hand. We show that the performance of different pretrained models can be significantly improved on ImageNet-C and ImageNet-R.
2 RELATED WORK
Common image corruptions are potentially stochastic image transformations motivated by real-world effects that can be used for evaluating a model's robustness. One such benchmark, ImageNet-C (Hendrycks & Dietterich, 2019), contains simulated corruptions such as noise, blur, weather effects, and digital image transformations. Additionally, Hendrycks et al. (2020) proposed three data sets containing real-world distribution shifts, including ImageNet-R. Most proposals for improving robustness involve special training protocols, requiring time and additional resources. This includes data augmentation like Gaussian noise (Ford et al., 2019; Lopes et al., 2019; Hendrycks et al., 2020), CutMix (Yun et al., 2019), AugMix (Hendrycks et al., 2019), training on stylized images (Geirhos et al., 2019; Kamann et al., 2020) or against adversarial noise distributions (Rusak et al., 2020a). Mintun et al. (2021b) pointed out that many improvements on ImageNet-C are due to data augmentations which are too similar to the test corruptions, that is: overfitting to ImageNet-C occurs. Thus, the model might be less robust to corruptions not included in the test set of ImageNet-C.
Unsupervised domain adaptation methods train a joint model of source and target domain via cross-domain losses to find more general and robust features, e.g., optimizing feature alignment (Quiñonero-Candela et al., 2008; Sun et al., 2017) between domains, adversarial invariance (Ganin & Lempitsky, 2015; Tzeng et al., 2017; Ganin et al., 2016; Hoffman et al., 2018), shared proxy tasks (Sun et al., 2019), or adapting entropy minimization via an adversarial loss (Vu et al., 2019). While these approaches are effective, they require explicit access to source and target data at the same time, which may not always be feasible. Our approach works with any pretrained model and only needs target data.
Test-time adaptation is a setting in which training (source) data is unavailable at test-time. It is related to source-free adaptation, where several works use generative models, alter training (Kundu et al., 2020; Li et al., 2020b; Kurmi et al., 2021; Yeh et al., 2021) and require several thousand epochs to adapt to the target data (Li et al., 2020b; Yeh et al., 2021). Besides, there is another line of work (Sun et al., 2020; Schneider et al., 2020; Nado et al., 2021; Benz et al., 2021; Wang et al., 2020) that
interprets the common corruptions as data shift and aims to improve the model robustness against these corruptions with an efficient test-time adaptation strategy that facilitates online adaptation. Such settings spare the cost of additional computational overhead. Our work also falls in this line of research and aims to adapt the model to common corruptions efficiently with both online and offline adaptation.
Sun et al. (2020) update feature extractor parameters at test-time via a self-supervised proxy task (predicting image rotations). However, Sun et al. (2020) alter the training procedure by including the proxy loss into the optimization objective as well, hence arbitrary pretrained models cannot be used directly for test-time adaptation. Inspired by the domain adaptation strategies (Maria Carlucci et al., 2017; Li et al., 2016), several works (Schneider et al., 2020; Nado et al., 2021; Benz et al., 2021) replace the estimates of Batch Normalization (BN) activation statistics with the statistics of the corrupted test images. Fully test time adaptation, studied by Wang et al. (2020) (TENT) uses entropy minimization to update the channel-wise affine parameters of BN layers on corrupted data along with the batch statistics estimates. SHOT (Liang et al., 2020) also uses entropy minimization and a diversity regularizer to avoid collapsed solutions. SHOT modifies the model from the standard setting by adopting weight normalization at the fully connected classifier layer during training to facilitate their pseudo labeling technique. Hence, SHOT is not readily applicable to arbitrary pretrained models.
We show that pure entropy minimization (Wang et al., 2020; Liang et al., 2020) as well as alternatives such as max square loss (Chen et al., 2019) and Charbonnier penalty (Yang & Soatto, 2020) results in vanishing gradients for high confidence predictions, thus inhibiting learning. Our work addresses this issue by proposing a novel non-saturating loss, that provides non-vanishing gradients for high confidence predictions. We show that our proposed loss function improves the network performance through test-time adaptation. In particular, performance on corruptions of higher severity improves significantly. Furthermore, we add and extend the diversity regularizer (Liang et al., 2020; Wu et al., 2020) to avoid collapse to trivial, high confidence solutions. Existing diversity regularizers (Liang et al., 2020; Wu et al., 2020) act on a batch of samples, hence the number of classes has to be smaller than the batch size. We mitigate this problem by extending the regularizer to a moving average version. Li et al. (2020a) also use a moving average to estimate the entropy of the unconditional class distribution but source data is used to estimate the gradient of the entropy. In contrast, our work does not need access to the source data since the gradient is estimated using only target data. Prior work Tzeng et al. (2017); Rusak et al. (2020b); Talebi & Milanfar (2021) transformed inputs by an additional module to overcome domain shift, obtain robust models, and also to learn to resize. In our work, we prepend an input transformation module to the model, but in contrast to former works, this module is trained purely at test-time to partially undo the data shift at hand to aid the adaptation.
3 METHOD
We propose a novel method for fully test-time adaptation. We assume that a neural network fθ with parameters θ is available that was trained on data from some distribution D, as well as a set of (unlabeled) samples X ∼ D′ from a target distribution D′ ≠ D (importantly, no samples from D are required). We frame fully test-time adaptation as a two-step process: (i) Generate a novel network gφ based on fθ, where φ denotes the parameters that are adapted. A simple variant for this is g = f and φ ⊆ θ (Wang et al., 2020). However, we propose a more expressive and flexible variant in Section 3.1. (ii) Adapt the parameters φ of g on X using an unsupervised loss function L. We propose two novel losses Lslr and Lhlr in Section 3.2 that have non-vanishing gradients for high-confidence self-supervision.
3.1 INPUT TRANSFORMATION
We propose to define the adaptable model as g = f ◦ d. That is, we prepend a trainable network d to f. The motivation for the additional component d is to increase the expressivity of g such that it can learn to (partially) undo the domain shift D → D′. Specifically, we choose d(x) = γ · [τx + (1 − τ)rψ(x)] + β, where τ ∈ R, (β, γ) ∈ R^{n_in} with n_in being the number of input channels, rψ being a network with identical input and output shape, and · denoting elementwise multiplication. Specifically, β and γ implement a channel-wise affine transformation and τ implements a convex combination of the unchanged input and the transformed input rψ(x). By choosing τ = 1, γ = 1, β = 0, we ensure d(x) = x and thus g = f at initialization. In principle, rψ can be chosen arbitrarily. Here, we choose rψ as a simple stack of 3×3 convolutions, group normalization, and ReLUs (refer to Sec. A.2 for details). However, exploring other choices would be an interesting avenue for future work.
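The following PyTorch sketch illustrates one possible instantiation of d; the depth and width of rψ are placeholder choices, since the text only specifies 3×3 convolutions, group normalization, and ReLUs.

```python
import torch
import torch.nn as nn


class InputTransformation(nn.Module):
    """d(x) = gamma * [tau*x + (1-tau)*r_psi(x)] + beta, initialized to the identity."""

    def __init__(self, in_channels=3, hidden=16, groups=8):
        super().__init__()
        self.r_psi = nn.Sequential(          # shape-preserving transformation network
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.GroupNorm(groups, hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, in_channels, kernel_size=3, padding=1),
        )
        self.tau = nn.Parameter(torch.ones(1))                       # convex-combination weight
        self.gamma = nn.Parameter(torch.ones(1, in_channels, 1, 1))  # channel-wise scale
        self.beta = nn.Parameter(torch.zeros(1, in_channels, 1, 1))  # channel-wise shift

    def forward(self, x):
        mixed = self.tau * x + (1.0 - self.tau) * self.r_psi(x)
        return self.gamma * mixed + self.beta
```

At initialization (τ = 1, γ = 1, β = 0) the module is the identity, so the pretrained network behaves exactly as before adaptation starts.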
Importantly, while the motivation for d is to learn to partially undo a domain shift D → D′, we train d end-to-end in the fully test-time adaptation setting on data X ∼ D′, without any access to samples from the source domain D, based on the losses proposed in Section 3.2. The modulation parameters of gφ are φ = (β, γ, τ, ψ, θ′), where θ′ ⊆ θ. That is, we adapt only a subset of the parameters θ of the pretrained network f . We largely follow Wang et al. (2020) in adapting only the affine parameters of normalization layers in f while keeping parameters of convolutional kernels unchanged. Additionally, batch normalization statistics (if any) are adapted to the target distribution.
Note that the proposed method is applicable to any pretrained network that contains normalization layers with a channel-wise affine transformation. For networks with no affine transformation layers, one can add such layers into f that are initialized to identity as part of model augmentation.
3.2 ADAPTATION OBJECTIVE
We propose a loss function L = Ldiv + δLconf for fully test-time network adaptation that consists of two components: (i) a term Ldiv that encourages predictions of the network over the adaptation dataset X that match a target distribution pD′(y). This can help avoiding test-time adaptation collapsing to too narrow distributions such as always predicting the same or very few classes. If pD′(y) is (close to) uniform, it acts as a diversity regularizer. (ii) A term Lconf that encourages high confidence prediction on individual datapoints. We note that test-time entropy minimization (TENT) (Wang et al., 2020) fits into this framework by choosing Ldiv = 0 and Lconf as the entropy.
3.2.1 CLASS DISTRIBUTION MATCHING Ldiv
Assuming knowledge of the class distribution pD′(y) on the target domain D′, we propose to add a term to the loss that encourages the empirical distribution of (soft) predictions of gφ on X to match this distribution. Specifically, let p̂gφ(y) be an estimate of the distribution of (soft) predictions of gφ. We use the Kullback-Leibler divergence Ldiv = D_KL(p̂gφ(y) || pD′(y)) as the loss term. In some applications, information about the target class distribution is available, e.g., in medical data it might be known that there is a large class imbalance. In general, this information is not available, and here we assume a uniform distribution for pD′(y), which corresponds to maximizing the entropy H(p̂gφ(y)). A similar assumption is made in SHOT to circumvent collapsed solutions.
Since the estimate p̂gφ(y) depends on φ, which is continuously adapted, it needs to be re-estimated on a per-batch level. Since re-estimating p̂gφ(y) from scratch would be computationally expensive, we propose to use a running estimate that tracks the changes of φ as follows: let p_{t−1}(y) be the estimate at iteration t−1 and p_t^{emp} = (1/n) ∑_{k=1}^{n} ŷ^{(k)}, where ŷ^{(k)} are the predictions (confidences) of gφ on a mini-batch of n inputs x^{(k)} ∼ X. We update the running estimate via p_t(y) = κ · sg(p_{t−1}(y)) + (1 − κ) · p_t^{emp}, where sg refers to the stop-gradient operation. The loss becomes Ldiv = D_KL(p_t(y) || pD′(y)) accordingly. Unlike Li et al. (2020a), our approach only requires target but no source data to estimate the gradient.
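As a sketch (not the reference implementation), the running estimate and the resulting Ldiv with a uniform target could be computed per batch as follows; the epsilon for numerical stability is our own choice.

```python
import torch


def diversity_loss(probs, p_running, kappa=0.9, eps=1e-8):
    """probs: (batch, num_classes) softmax outputs; p_running: detached running estimate."""
    num_classes = probs.shape[1]
    p_emp = probs.mean(dim=0)                                  # empirical batch distribution
    p_t = kappa * p_running.detach() + (1.0 - kappa) * p_emp   # stop-gradient on the history
    target = torch.full_like(p_t, 1.0 / num_classes)           # assumed uniform p_D'(y)
    l_div = torch.sum(p_t * (torch.log(p_t + eps) - torch.log(target)))  # KL(p_t || target)
    return l_div, p_t.detach()                                 # new running estimate for the next batch
```

Before adaptation starts, p_running can simply be initialized to the uniform distribution.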
3.2.2 CONFIDENCE MAXIMIZATION Lconf
We motivate our choice of Lconf step-by-step from the (unavailable) supervised cross-entropy loss: for this, let ŷ = gφ(x) be the predictions (confidences) of model gφ and H(ŷ, y^r) = −∑_c y^r_c log ŷ_c be the cross-entropy between prediction ŷ and some reference y^r. Let the last layer of g be a softmax activation layer softmax. That is, ŷ = softmax(o), where o are the network's logits. We can rewrite the cross-entropy in terms of the logits o and a one-hot reference y^r as follows: H(softmax(o), y^r) = −o_{c_r} + log ∑_{i=1}^{n_cl} e^{o_i}, where c_r is the index of the 1 in y^r and n_cl is the number of classes.
If labels were available for the target domain (which we do not assume) in the form of a one-hot encoded reference y^t for data x^t, one could use the supervised cross-entropy loss by setting y^r = y^t and using L_sup(ŷ, y^r) = H(ŷ, y^r) = H(ŷ, y^t). Since fully test-time adaptation assumes no label information, the supervised cross-entropy loss is not applicable and other options for y^r need to be used.
One option is (hard) pseudo-labels. That is, one defines the reference y^r based on the network predictions ŷ via y^r = onehot(ŷ), where onehot creates a one-hot reference with the 1 corresponding to the class with maximal confidence in ŷ. This results in Lpl(ŷ) = H(ŷ, onehot(ŷ)) = −log ŷ_{c∗}, with c∗ = argmax ŷ. One disadvantage of this loss is that the (hard) pseudo-labels ignore uncertainty in the network predictions during self-supervision. This results in large gradient magnitudes with respect to the logits, |∂Lpl/∂o_{c∗}|, being generated on data where the network has low confidence (see Figure 1). This is undesirable since it corresponds to the network being affected most by data points where the network's self-supervision is least reliable1.
An alternative is to use soft pseudo-labels, that is, y^r = ŷ. This takes uncertainty in network predictions into account during self-labelling and results in the entropy minimization loss of TENT (Wang et al., 2020): Lent(ŷ) = H(ŷ, ŷ) = H(ŷ) = −∑_c ŷ_c log ŷ_c. However, also for the entropy the logits' gradient magnitude |∂Lent/∂o| goes to 0 when one of the entries in ŷ goes to 1 (see Figure 1). For a binary classification task, for instance, the maximal logits' gradient amplitude is obtained for ŷ ≈ (0.82, 0.18). This implies that during later stages of test-time adaptation, where many predictions typically already have high confidence (significantly above 0.82), gradients are dominated by datapoints with relatively low confidence in self-supervision.
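This behavior can be checked numerically; the short script below (our own illustration, not part of the method) sweeps over binary-classification logits and shows that the entropy's logit gradient peaks around a confidence of roughly 0.82 and shrinks for more confident predictions.

```python
import torch

logits = torch.linspace(0.0, 6.0, 601, requires_grad=True)   # logit of class 1; class 0 fixed at 0
p = torch.sigmoid(logits)                                     # confidence of class 1
entropy = -(p * torch.log(p) + (1 - p) * torch.log(1 - p))
grads, = torch.autograd.grad(entropy.sum(), logits)

peak = grads.abs().argmax()
print(f"max |dH/do| reached at confidence {p[peak].item():.2f}")        # approx. 0.82
high = (p - 0.99).abs().argmin()
print(f"|dH/do| at confidence 0.99: {grads.abs()[high].item():.3f}")    # much smaller than the peak
```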
While both hard and soft pseudo-labels are clearly motivated, they are not optimal in conjunction with a gradient-based optimizer since the self-supervision from low confidence predictions dominates (at least during later stages of training). We address this issue by proposing two losses that increase the gradient amplitude from high confidence predictions. We argue that this leads to stronger self-supervision (a better gradient direction when averaged over the batch) than the entropy loss (see also Sec. A.1 for an illustrative example supporting this claim). The two losses are analogous to Lpl and Lent, but are based not on the cross-entropy H but on the negative log likelihood ratios:
R(ŷ, y^r) = −∑_c y^r_c log(ŷ_c / ∑_{i≠c} ŷ_i) = −∑_c y^r_c (log ŷ_c − log ∑_{i≠c} ŷ_i) = H(ŷ, y^r) + ∑_c y^r_c log ∑_{i≠c} ŷ_i
Note that while the entropy H is lower bounded by 0, R can get arbitrarily small if y^r_c → 1 and the sum ∑_{i≠c} ŷ_i → 0, and thus log ∑_{i≠c} ŷ_i → −∞. This property induces non-vanishing gradients for high confidence predictions.
The first loss we consider is the hard likelihood ratio loss that is defined similarly to the hard pseudo-labels loss Lpl:
Lhlr(ŷ) = R(ŷ, onehot(ŷ)) = −log(ŷ_{c∗} / ∑_{i≠c∗} ŷ_i) = −log(e^{o_{c∗}} / ∑_{i≠c∗} e^{o_i}) = −o_{c∗} + log ∑_{i≠c∗} e^{o_i},
1The prediction confidence for a datapoint can be interpreted as a proxy for its distance to the decision boundary. A low confidence prediction indicates that a datapoint appears to be close to the decision boundary and the model is less certain on which side of the decision boundary the datapoint should lie. We call this "low confidence self-supervision" since the direction of the gradient becomes ambiguous.
where c∗ = argmax ŷ. We note that ∂Lhlr/∂o_{c∗} = −1; thus also high-confidence self-supervision contributes equally to the maximum logit's gradient. This loss was also independently proposed as a negative log likelihood ratio loss by Yao et al. (2020) as a replacement for the fully-supervised cross-entropy loss in classification tasks. However, to the best of our knowledge, we are the first to motivate and identify the advantages of this loss for self-supervised learning and test-time adaptation due to its non-saturating gradient property.
In addition to Lhlr, we also account for uncertainty in network predictions during self-labelling in a similar way as for the entropy loss Lent, and propose the soft likelihood ratio loss:
Lslr(ŷ) = R(ŷ, ŷ) = −∑_c ŷ_c · log(ŷ_c / ∑_{i≠c} ŷ_i) = ∑_c ŷ_c (−o_c + log ∑_{i≠c} e^{o_i})
We note that as ŷ_{c∗} → 1, Lslr(ŷ) → Lhlr(ŷ). Thus the asymptotic behavior of the two likelihood ratio losses for high confidence predictions is the same. However, the soft likelihood ratio loss creates lower amplitude gradients for low confidence self-supervision. We provide illustrations of the discussed losses and the resulting logits' gradients in Figure 1. Furthermore, an illustration of other losses like the max square loss and the Charbonnier penalty can be found in Sec. A.7.
We note that both likelihood ratio losses would typically encourage the network to simply scale its logits larger and larger, since this would reduce the loss even if the ratios between the logits remain constant. However, when finetuning an existing network and restricting the layers that are adapted such that the logits remain approximately scale-normalized, these losses can provide a useful and non-vanishing gradient signal for network adaptation. We achieve this approximate scale normalization by freezing the top layers of the respective networks. In this case, normalization layers such as batch normalization prohibit “logit explosion”. However, predicted confidences can presumably become overconfident; calibrating confidences in a self-supervised test-time adaptation setting is an open and important direction for future work.
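A minimal PyTorch sketch of both losses, computed directly from logits, is given below; the masking-based log-sum-exp is our own implementation choice for excluding the reference class.

```python
import torch


def _neg_log_ratio(logits):
    """Matrix with entry [b, c] = -o_c + log sum_{i != c} exp(o_i)."""
    n_cls = logits.shape[-1]
    mask = torch.eye(n_cls, dtype=torch.bool, device=logits.device)
    expanded = logits.unsqueeze(1).expand(-1, n_cls, -1)   # (batch, n_cls, n_cls)
    others = expanded.masked_fill(mask, float("-inf"))     # drop class c in row c
    return -logits + torch.logsumexp(others, dim=-1)


def hlr_loss(logits):
    """Hard likelihood ratio loss: uses the predicted class as a hard pseudo-label."""
    ratios = _neg_log_ratio(logits)
    c_star = logits.argmax(dim=-1, keepdim=True)
    return ratios.gather(-1, c_star).squeeze(-1).mean()


def slr_loss(logits):
    """Soft likelihood ratio loss: weights the ratios by the softmax confidences."""
    probs = torch.softmax(logits, dim=-1)
    return (probs * _neg_log_ratio(logits)).sum(dim=-1).mean()
```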
4 EXPERIMENTAL SETTINGS
Datasets We evaluate our method on image classification datasets for corruption robustness and domain adaptation. We evaluate on the challenging benchmark ImageNet-C (Hendrycks & Dietterich, 2019), which includes a wide variety of 15 different synthetic corruptions with 5 severity levels that give rise to data shift. This benchmark also includes 4 additional corruptions as validation data. For domain adaptation, we choose ImageNet-trained models to adapt to ImageNet-R, proposed by Hendrycks et al. (2020). ImageNet-R comprises 30,000 image renditions for 200 ImageNet classes. Domain adaptation results on VisDA-C (Peng et al., 2017) and digit classification can be found in Sec. A.6.
Models Our method operates in a fully test-time adaptation setting that allows us to use any arbitrary pretrained model. We use the publicly available ImageNet pretrained models ResNet50, DenseNet121, ResNeXt50, and MobileNetV2 from torchvision (Torch-Contributors, 2020). We also test on a robust ResNet50 model trained using DeepAugment+AugMix2 (Hendrycks et al., 2020).
Baseline for fully test-time adaptation Since TENT from Wang et al. (2020) outperformed competing methods and fits the fully test-time adaptation setting, we consider it as a baseline and compare our results to this approach. Similar to TENT, we also adapt model features by estimating the normalization statistics and optimize only the channel-wise affine parameters on the target distribution.
Settings We conduct test-time adaptation on a target distribution with both online and offline updates using the Adam optimizer with learning rate 0.0006 and batch size 64. We set the weight of Lconf in our loss function to δ = 0.025 and κ = 0.9 in the running estimate pt(y) of Ldiv (we investigate the effect of κ in Sec. A.4). Similar to SHOT (Liang et al., 2020), we also choose the target distribution pD′(y) in Ldiv as a uniform distribution over the available classes. For TENT, we use SGD with momentum 0.9, learning rate 0.00025, and batch size 64. These values correspond to the ones of Wang et al. (2020); alternative settings for TENT did not improve performance. For offline updates, we adapt the models for 5 epochs using a cosine decay schedule of the learning rate. We found that the models converge within 3 to 5 epochs and do not improve further. Similar to Wang et al. (2020), we also control for ordering by shuffling the data and sharing the order across methods.
2From https://github.com/hendrycks/imagenet-r. Owner permitted to use it for research/commercial purposes.
Note that all the hyperparameters are tuned solely on the validation corruptions of ImageNet-C that are disjoint from the test corruptions. As discussed in Section 3.2.2, we freeze all trainable parameters in the top layers of the networks to prohibit “logit explosion”. Normalization statistics are still updated in these layers. Sec. A.3 provides more details regarding frozen layers in different networks.
Furthermore, we prepend a trainable input transformation module d (cf. Sec. 3.1) to the network to partially counteract the data-shift. Note that the parameters of this module discussed in Sec. 3.1 are trainable and subject to optimization. This module is initialized to operate as an identity function prior to adaptation on a target distribution by choosing τ = 1, γ = 1, and β = 0. We adapt the parameters of this module along with the channel-wise affine transformations and normalization statistics in an end-to-end fashion, solely using our proposed loss function along with the optimization details mentioned above. The architecture of this module is discussed in Sec. A.2.
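A condensed sketch of this adaptation procedure is shown below; the loss callables stand for the terms defined in Section 3.2, and details such as freezing the top layers are omitted for brevity.

```python
import torch
import torch.nn as nn


def collect_affine_params(model):
    """Channel-wise affine parameters of all normalization layers."""
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm2d, nn.GroupNorm, nn.LayerNorm)):
            params += [p for p in (m.weight, m.bias) if p is not None]
    return params


def adapt_online(model, input_transform, target_loader, conf_loss, div_loss, delta=0.025):
    model.train()  # BatchNorm re-estimates its statistics on the target batches
    for p in model.parameters():
        p.requires_grad_(False)
    params = collect_affine_params(model) + list(input_transform.parameters())
    for p in params:
        p.requires_grad_(True)
    optimizer = torch.optim.Adam(params, lr=6e-4)

    for x in target_loader:            # unlabeled target-domain batches
        logits = model(input_transform(x))
        loss = div_loss(logits) + delta * conf_loss(logits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```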
Since Ldiv is independent of Lconf, we also propose to combine Ldiv with TENT, i. e. L = Ldiv +Lent. We denote this as TENT+ and also set κ = 0.9 here. Note that TENT optimizes all channel-wise affine parameters in the network (since entropy is saturating and does not cause logit explosion). For a fair comparison to our method, we also freeze the top layers of the networks in TENT+. We show that adding Ldiv and freezing top layers significantly improves the networks performance over TENT. Note that SHOT (Liang et al., 2020) is the combination of TENT, batch-level diversity regularizer, and their pseudo labeling strategy. TENT+ can be seen as a variant of SHOT but without the pseudo labeling. Please refer to Sec. A.5 for the test-time adaptation of pretrained models with SHOT.
Note that each corruption and severity in ImageNet-C is treated as a different target distribution and we reset model parameters to their pretrained values before every adaptation. We run our experiments for three times with random seeds (2020, 2021, 2022) in PyTorch and report the average accuracies.
5 RESULTS
Evaluation on ImageNet-C We adapt different models on the ImageNet-C benchmark using TENT, TENT+, and both the hard likelihood ratio (HLR) and soft likelihood ratio (SLR) losses in an online adaptation setting. Figure 2 (top row) depicts the mean corruption accuracy (mCA%) of each model computed across all the corruptions and severity levels. It can be observed that TENT+ improves over TENT, showcasing the importance of a diversity regularizer Ldiv. Importantly, our methods HLR and SLR outperform TENT and TENT+ across DenseNet121, MobileNetV2, ResNet50, and ResNeXt50 and perform comparably with TENT+ on the robust ResNet50 DeepAugment+Augmix model. This shows that the mCA% of the robust DeepAugment+Augmix model can be further increased from 58% (before adaptation) to 67.5% using test-time adaptation techniques. Here, the averages of mCA obtained from three different random seeds are depicted along with error bars. The small error bars indicate that the test-time adaptation results are not sensitive to the choice of random seed.
We also illustrate the performance of ResNet50 on the highest severity level across all 15 test corruptions of ImageNet-C in Table 1. Here, online adaptation results are reported along with offline adaptation results after epochs 1 and 5. It can be seen that online adaptation and a single epoch of test-time adaptation improve the performance significantly, with only minor further improvements until epoch 5. TENT adaptation for more than one epoch results in reduced performance, and TENT with Ldiv (TENT+) prevents this behavior. Both HLR and SLR clearly and consistently outperform TENT / TENT+ on the ResNet50; note also that SLR outperforms HLR. We also compare our results with the hard pseudo-labels (PL) objective and with an oracle setting where the ground-truth labels of the target data are used for adapting the model in a supervised manner (GT). Note that this oracle setting is not of practical importance but illustrates the empirical upper bound on fully test-time adaptation performance under the chosen modulation parametrization.
ImageNet-R We online adapt different models on ImageNet-R and depict the results in Figure 2 (middle row). Results show that HLR and SLR clearly outperform TENT and TENT+ and significantly improve performance of all the models, including the model pretrained with DeepAugment+Augmix.
Evaluation with data subsets Above we evaluate the model on the same data that is also used for the test-time adaptation. Here, we test model generalization by adapting on a subset of target data
and evaluate the performance on the whole dataset (in offline setting), which also includes unseen data that is not used for adaptation. We conduct two case studies: (i) adapt on the data from a subset of ImageNet classes and evaluate the performance on the data from all the classes. (ii) Adapt only on a subset of data from each class and test on all seen and unseen samples from the whole dataset.
Figure 3 illustrates the generalization of a ResNet50 adapted on different proportions of the data across different corruptions, both in terms of classes and samples. We observe that adapting a model on a small subset of samples and classes is sufficient to achieve reasonable accuracy on the whole target data. This suggests that the adaptation actually learns to compensate for the data shift rather than overfitting to the adapted samples or classes. The performance of TENT decreases as the number of classes/samples increases, because Lent can converge to trivial collapsed solutions and more data corresponds to more update steps during adaptation. Adding Ldiv, as in TENT+, stabilizes the adaptation process and reduces these issues. Reported are the averages over random seeds with error bars.
Input transformation We investigate whether the input transformation (IT) module, trained end-toend with a ResNet50 and SLR loss on data of the respective distortion without seeing any source (undistorted) data, can partially undo certain domain shifts of ImageNet-C and also increase accuracy on corrupted data. We measure domain shift via the structural similarity index measure (SSIM) (Wang et al., 2004) between the clean image (unseen by the model) and its distorted version/the output of IT on the distorted version. Following offline adaptation setting, Table 2 shows that IT increases the SSIM considerably on certain distortions such as Impulse, Contrast, Snow, and Frost. IT increases SSIM also for other types of noise distortions, while it slightly reduces SSIM for the blur distortions, Elastic, Pixelate, and JPEG. When combined with SLR, IT considerably increases accuracy on distortions for which also SSIM increased significantly (for instance +20 percent points on Impulse, +4 percent points on Contrast) and never reduces accuracy by more than 0.11 percent points. More results on online and offline adaptation with TENT / TENT+ can be found in Table A3.
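As an illustration of the SSIM measurement (assuming scikit-image ≥ 0.19; older versions use multichannel=True instead of channel_axis), the comparison can be sketched as follows.

```python
from skimage.metrics import structural_similarity


def ssim_to_clean(clean, other):
    """clean/other: HxWx3 uint8 arrays; higher SSIM means a smaller measured domain shift."""
    return structural_similarity(clean, other, channel_axis=2, data_range=255)

# SSIM(clean, corrupted) vs. SSIM(clean, IT(corrupted)) indicates whether the learned
# input transformation partially undoes the corruption.
```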
Clean images As a sanity check, we investigate the effect of test-time adaptation when the target data comes from the same distribution as the training data. For this, we online adapt pretrained models on clean validation data of ImageNet. The results in Figure 2 (bottom row) show that the performance of SLR/HLR adapted models drops by 0.8 to 1.8 percent points compared to the pretrained model. We attribute this drop to self-supervision being less reliable than the original full supervision on in-distribution training data. The drop is smaller for TENT and TENT+, presumably because predictions on in-distribution target data are typically highly confident such that there is little gradient and thus little change to the pretrained networks by TENT. In summary, while self-supervision by confidence maximization is a powerful method for adaptation to domain shift, the observed drop when adapting to data from the source domain indicates that there is “no free lunch” in test-time adaptation.
6 CONCLUSION
We propose a method to improve corruption robustness and domain adaptation of models in a fully test-time adaptation setting. Unlike entropy minimization, our proposed loss functions provide non-vanishing gradients for highly confident predictions and thus contribute to improved adaptation in a self-supervised manner. We also show that additional diversity regularization on the model predictions is crucial to prevent trivial solutions and to stabilize the adaptation process. Lastly, we introduce a trainable input transformation module that partially refines the corrupted samples to support the adaptation. We show that our method improves corruption robustness on ImageNet-C and domain adaptation to ImageNet-R for different ImageNet models. We also show that adaptation on a small fraction of data and classes is sufficient to generalize to unseen target data and classes.
7 ETHICS STATEMENT
We abide by the general ethical principles listed in the ICLR code of ethics. Our work does not include the study of human subjects or dataset releases, and it does not raise potential conflicts of interest, discrimination/bias/fairness concerns, or privacy and security issues. Our non-saturating loss increases accuracy but might result in overconfident predictions, which can cause harm in safety-critical downstream applications when not properly calibrated. At the same time, self-supervised confidence maximization might amplify bias in pretrained models. We hope that the diversity regularizer in the loss partially compensates for this issue.
8 REPRODUCIBILITY STATEMENT
We provide complete details of our experimental setup for reproducibility. Sec. 4 provides details of the network architectures, optimizer, learning rate, batch size, choice of hyperparameters of our method and the random seeds used for generating the results. Sec. A.3 provides more details regarding frozen layers in different networks. Sec. A.2 shows the structure of input transformation module used in this work. We will also provide a link to an anonymous downloadable source code as a comment directed to the reviewers and area chairs in the discussion forum.
A APPENDIX
A.1 ILLUSTRATIVE EXAMPLE OF LOG LIKELIHOOD RATIO ADAPTATION OBJECTIVE
A simple 1D example is devised to illustrate the benefits of the proposed log likelihood ratio as a test-time adaptation objective. Consider (unlabeled) data points that are sampled from the following bimodal distribution: 0.5 · N(−1, 3) + 0.5 · N(+1, 3), that is, half of the samples come from a normal distribution with mean −1 and the other half from a normal distribution with mean +1 (both having standard deviation 3). We can interpret these two components of the mixture distribution as corresponding to data of two different classes, but class labels are of course unavailable during unsupervised test-time adaptation.
We assume a simple logistic model of the form pθ(y = 1|x) = 1/(1 + e^{−(x+θ)}), where x is the value of the data sample and θ is a scalar offset that determines the decision boundary. By construction, we know that the minimum density of the mixture distribution on [−1, 1] is at 0. Since confidence maximization aims at moving the decision boundary to regions in input space with minimum data density (in this case to 0), we can compare different self-supervised confidence maximization losses in the finite data regime as follows: for every finite data sample with N data points {x_i}, i = 1, . . . , N, and loss function L, we solve θ∗(L) = argmin_{θ∈[−1,1]} L(θ, {x_i}), where the loss (such as entropy or SLR) is averaged over all data points. The absolute value |θ∗(L)| then gives us an estimate of the error of the decision boundary parameter for the given data set and loss function. Table A1 provides this error for different loss functions and different numbers of data samples. It can be seen that SLR and HLR clearly outperform the entropy loss (TENT) in all data regimes. The difference between SLR and HLR is generally very small. While SLR seems to be consistently slightly better than HLR, this difference is not statistically significant. We attribute the superiority of SLR/HLR compared to entropy to the fact that all data points have a non-saturating loss, regardless of their distance to the decision boundary. Thus, all data contributes to localizing the decision boundary, while for saturating losses such as the entropy, effectively only "nearby" points determine the decision boundary. This example illustrates that our proposed non-saturating losses are beneficial over the entropy loss for self-supervised confidence maximization.
Table A1: Illustrates the error of the decision boundary parameter for different loss functions and different number of samples averaged over 100 runs (shown are mean and standard error of mean).
#samples 100 200 500 1000 2000 10000 20000
Entropy 0.487±0.031 0.364±0.029 0.230±0.018 0.152±0.013 0.117±0.009 0.052±0.004 0.033±0.003
HLR 0.357±0.023 0.234±0.018 0.145±0.012 0.094±0.008 0.071±0.006 0.032±0.002 0.022±0.002
SLR 0.332±0.022 0.214±0.017 0.140±0.011 0.088±0.008 0.067±0.006 0.032±0.002 0.021±0.002
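A self-contained sketch of this toy experiment (a single run with a grid search over θ, rather than the 100-run average of Table A1) is given below.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = np.concatenate([rng.normal(-1, 3, n // 2), rng.normal(+1, 3, n // 2)])  # unlabeled sample
eps = 1e-12


def mean_losses(theta, x):
    p = 1.0 / (1.0 + np.exp(-(x + theta)))       # p_theta(y=1|x)
    ent = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    slr = -(p * np.log(p / (1 - p + eps) + eps) + (1 - p) * np.log((1 - p) / (p + eps) + eps))
    conf = np.maximum(p, 1 - p)                  # confidence of the hard pseudo-label
    hlr = -np.log(conf / (1 - conf + eps) + eps)
    return {"Entropy": ent.mean(), "HLR": hlr.mean(), "SLR": slr.mean()}


thetas = np.linspace(-1.0, 1.0, 401)
results = [mean_losses(t, x) for t in thetas]
for name in ("Entropy", "HLR", "SLR"):
    vals = np.array([r[name] for r in results])
    print(f"{name}: |theta*| = {abs(thetas[int(vals.argmin())]):.3f}")
```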
A.2 INPUT TRANSFORMATION MODULE
Note that we define our adaptable model as g = f ◦ d, where d is a trainable network prepended to a pretrained neural network f (e.g., a pretrained ResNet50). We choose d(x) = γ · [τx + (1 − τ)rψ(x)] + β, where τ ∈ R, (β, γ) ∈ R^{n_in} with n_in being the number of input channels, rψ being a network with identical input and output shape, and · denoting elementwise multiplication. Here, β and γ implement a channel-wise affine transformation and τ implements a convex combination of the unchanged input and the transformed input rψ(x). We set τ = 1, γ = 1, and β = 0 to ensure that d(x) = x and thus g = f at initialization. In principle, rψ can be chosen arbitrarily. Here, we choose rψ as a simple stack of 3×3 convolutions with stride 1 and padding 1, group normalization, and ReLUs, without any upsampling/downsampling layers. Specifically, the structure of g is illustrated in Figure A1.
In addition to the results reported in Table 2, we also compare TENT and TENT+ with and without Input Transformation (IT) module on ResNet50 for all corruptions at severity level 5 in both online adaptation setting and offline adaptation with 5 epochs in Table A3. Furthermore, we also present the qualitative results of the image transformations from the input transformation module adapted with SLR (offline setting) in Figure A2.
Table A2: Ablation study on the components of input transformation module on ResNet50 for all corruptions at severity level 5.
Corruption Gauss Shot Impulse Defocus Glass Motion Zoom Snow Frost Fog Bright Contrast Elastic Pixel JPEG mean
x 41.52 42.90 44.07 41.69 40.78 54.76 56.59 57.35 51.01 63.53 68.72 50.65 61.49 63.46 58.32 53.12
rψ(x) 13.17 26.57 28.81 5.09 3.61 30.61 49.79 53.73 45.96 58.82 65.79 53.73 56.77 60.14 53.38 40.40
τx + (1− τ)rψ(x) 43.13 46.43 56.25 41.80 40.90 55.75 56.65 58.55 51.72 63.59 68.83 53.89 61.50 63.73 58.51 54.74
γ · [τx + (1− τ)rψ(x)] + β 43.18 46.24 56.21 41.91 40.89 55.79 56.66 58.50 51.72 63.56 68.83 54.26 61.49 63.76 58.52 54.76
Table A3: Test-time adaptation of ResNet50 on ImageNet-C at highest severity level 5 with and without Input Transformation (IT) module. Reported are the mean accuracy(%) across three random seeds (2020/2021/2022). While IT also improves performance when combined with TENT+, it is still clearly outperformed by SLR+IT.
Method Gauss Shot Impulse Defocus Glass Motion Zoom Snow Frost Fog Bright Contrast Elastic Pixel JPEG
Online adaptation (evaluation on a batch directly after adaptation on the batch)
TENT 28.60 31.06 30.54 29.09 28.07 42.32 50.39 48.01 42.05 58.40 68.20 27.25 55.68 59.46 53.64 TENT + IT 28.99 31.73 31.15 28.87 27.85 42.43 50.36 48.02 41.95 58.37 68.19 24.35 55.68 59.49 53.57
TENT+ 29.09 31.65 30.68 29.33 28.65 42.32 50.32 48.09 42.54 58.39 68.23 31.43 55.90 59.46 53.68 TENT+ + IT 29.48 32.34 31.38 29.06 28.42 42.43 50.33 48.11 42.47 58.40 68.20 32.11 55.87 59.49 53.64 SLR (ours) 35.11 37.93 36.83 35.13 35.13 48.29 53.45 52.68 46.52 60.74 68.40 44.78 58.74 61.13 55.97 SLR + IT (ours) 36.19 39.17 40.46 35.17 34.87 48.67 53.62 52.71 46.93 60.66 68.30 46.55 58.79 61.27 55.93 Evaluation after epoch 5
TENT 30.64 33.80 34.72 30.13 29.05 49.08 53.63 52.86 38.47 61.13 68.81 10.72 59.25 62.15 56.44 TENT + IT 31.92 36.02 38.14 30.44 28.68 49.04 53.59 52.99 38.76 61.14 68.84 13.52 59.23 62.15 56.56
TENT+ 35.19 38.12 37.43 34.82 34.95 50.33 54.24 53.88 46.28 61.50 69.07 29.87 60.01 62.61 57.09 TENT+ + IT 36.13 39.84 41.03 34.62 34.72 50.33 54.10 53.91 46.46 61.54 69.07 30.22 59.95 62.72 57.11 SLR (ours) 41.52 42.90 44.07 41.69 40.78 54.76 56.59 57.35 51.01 63.53 68.72 50.65 61.49 63.46 58.32
SLR+IT (ours) 43.09 44.39 64.05 41.98 40.99 55.73 56.75 58.56 51.68 63.64 68.85 55.01 61.32 63.59 58.24
A.2.1 CONTRIBUTION OF EACH COMPONENT IN INPUT TRANSFORMATION MODULE
Table A2 shows the results of an ablation study on the components of the input transformation module on ResNet50 for all corruptions at severity level 5, adapted with SLR for 5 epochs. The ablation study includes: (1) no input transformation module, d(x) = x; (2) only the network, d(x) = rψ(x); (3) including τ; (4) including the channel-wise affine transformation γ and β. We observe that inputs transformed with the network rψ alone, without the convex combination via τ, degrade performance. The additional channel-wise affine transformations did not bring further consistent improvements and can be omitted from the transformation module. Exploring other architectural choices and training (or pretraining) strategies for the input transformation module would be an interesting avenue for future work.
A.3 FROZEN LAYERS IN DIFFERENT NETWORKS
As discussed in Section 3.2.2, we freeze all trainable parameters in the top layers of the networks to prohibit “logit explosion”. That is, we do not optimize the channel-wise affine transformations of the top layers, but their normalization statistics are still estimated. As for the hyperparameters of the test-time adaptation settings, the choice of these layers is made using ImageNet-C validation data. We list the frozen layers of each architecture below. Note that the naming convention of these layers is based on the model definitions in torchvision:
• DenseNet121 - features.denseblock4, features.norm5.
• MobileNetV2 - features.16, features.17, features.18.
• ResNeXt50, ResNet50 and ResNet50 (DeepAugment+Augmix) - layer4.
A.3.1 RESULTS WITHOUT FREEZING THE TOP LAYERS
We mentioned that the proposed losses could alternatively encourage the network to let the logits grow larger and larger and still reduce the loss. However, we did not find any considerable differences empirically in the explored settings when adapting the model with or without freezing the top layers. Adapting the model with and without freezing the top layers yields comparable performance in both online and offline adaptation settings, as shown in Table A4. However, we would still recommend freezing the top-most layers as the default choice to be on the safe side. These results indicate that the early layers capture the distribution shift sufficiently to improve the model adaptation.
Table A4: Comparing the online and offline adaptation results with and without freezing the affine parameters of top normalization layers of ResNet50 at severity 5. Here, "Freeze" and "NoFreeze" refer to the setting with and without freezing the top affine layers respectively.
Corruption Gauss Shot Impulse Defocus Glass Motion Zoom Snow Frost Fog Bright Contrast Elastic Pixel JPEG mean
Online evaluation
TENT+ NoFreeze 29.05 31.32 30.32 28.95 28.29 42.37 50.45 48.12 42.21 58.51 68.29 28.17 55.57 59.47 53.46 43.63 TENT+ Freeze 29.21 31.54 30.55 29.17 28.60 42.54 50.47 48.18 42.51 58.50 68.30 31.25 55.76 59.54 53.62 43.98
HLR NoFreeze 33.73 36.50 35.63 33.99 33.88 46.55 52.76 51.44 45.82 59.74 67.37 43.19 57.69 59.77 54.95 47.53 HLR Freeze 33.10 36.08 34.74 33.21 33.31 46.36 52.77 51.42 45.47 60.01 68.07 42.75 58.02 60.42 55.34 47.40
SLR NoFreeze 35.61 38.37 37.50 35.83 35.81 48.29 53.61 52.62 46.85 60.42 67.71 44.93 58.43 60.56 55.65 48.81 SLR Freeze 35.11 37.93 36.83 35.13 35.13 48.29 53.45 52.68 46.52 60.74 68.40 44.78 58.74 61.13 55.97 48.72
Offline evaluation
TENT+ NoFreeze 32.03 35.33 35.28 31.92 31.27 49.20 53.79 53.01 40.37 61.22 68.79 19.38 59.25 62.20 56.51 45.97 TENT+ Freeze 35.19 38.12 37.43 34.82 34.95 50.33 54.24 53.88 46.28 61.50 69.07 29.87 60.01 62.61 57.09 48.35
HLR NoFreeze 41.60 43.80 43.89 42.21 41.50 53.82 56.21 56.71 50.83 62.74 67.87 51.34 60.65 62.58 57.70 52.89 HLR Freeze 41.37 44.04 43.68 41.74 41.09 54.26 56.43 57.03 50.81 63.05 68.29 50.98 61.15 63.08 58.13 53.0
SLR NoFreeze 41.45 43.95 44.26 42.56 41.60 54.25 56.13 56.72 50.92 62.97 68.02 50.99 60.90 62.83 57.86 53.02 SLR Freeze 41.52 42.90 44.07 41.69 40.78 54.76 56.59 57.35 51.01 63.53 68.72 50.65 61.49 63.46 58.32 53.12
A.4 EFFECT OF κ
Note that the running estimate in Ldiv prevents the model from collapsing to trivial solutions, i.e., predicting only a single class or a small set of classes regardless of the input samples. Ldiv encourages the model to match its empirical distribution of predictions to the class distribution of the target data (a uniform distribution in our experiments). Such diversity regularization is crucial as there is no direct supervision attributed to different classes, and it thus helps to avoid collapsed trivial solutions. In Figure A3, we investigate different values of κ on the validation corruptions of ImageNet-C to study their effect on our approach. It can be observed that both HLR and SLR without Ldiv lead to collapsed solutions (e.g., accuracy drops to 0%) on some of the corruptions, and the performance gains are not consistent across all the corruptions. On the other hand, Ldiv with κ = 0.9 remains consistent and improves the performance across all the corruptions.
A.5 TEST-TIME ADAPTATION OF PRETRAINED MODELS WITH SHOT
Following SHOT (Liang et al., 2020), we use their pseudo labeling strategy on the ImageNet pretrained ResNet50 in combination with TENT+, HLR and SLR. Note that TENT+ together with the pseudo labeling strategy forms the method SHOT. The pseudo labeling strategy starts after the 1st epoch and the pseudo labels are thereafter recomputed at every epoch. The weight for the loss computed on the pseudo labels is set to 0.3, as in (Liang et al., 2020); different values for this weight were explored and 0.3 performed best. Table A6 compares the results of the methods with and without the pseudo labeling strategy. It can be observed that the results with the pseudo labeling strategy are worse than those without it.
We further modified the pretrained ResNet50 following the network modifications suggested in (Liang et al., 2020), which include adding a bottleneck layer with BatchNorm and applying weight normalization to the linear classifier along with label smoothing during training to facilitate the pseudo labeling strategy. Table A7 shows that the pseudo labeling strategy on such a network improves the results of TENT+ from epoch 1 to epoch 5. However, no such improvements are observed for SLR. Moreover, Table A8 shows that omitting the pseudo labeling strategy on the same network performs better than applying it. Finally, the no-pseudo-labeling results from Tables A6 and A8 show that these additional modifications to ResNet50 do not improve performance compared to the standard ResNet50.
A.6 DOMAIN ADAPTATION ON VISDA-C AND DIGIT CLASSIFICATION
VisDA-C: We extended our experiments to VisDA-C. We followed a similar network architecture to SHOT (Liang et al., 2020) and evaluated TENT+ and our SLR loss function with the diversity regularizer. As for ImageNet-C, we adapted only the channel-wise affine parameters of the batch norm layers for 5 epochs with the Adam optimizer, using a cosine decay schedule of the learning rate with initial value 2e−5. Here, the batch size is set to 64, the weight of Lconf in our loss function to δ = 0.25, and κ = 0 in the running estimate pt(y) of Ldiv, since the number of classes in this dataset (12) is smaller than the batch size. Setting κ = 0 corresponds to the purely batch-wise diversity regularizer. Table A9 shows
Table A5: Test-time adaptation of ResNet50 on ImageNet-C at highest severity level 5. Same as Table 1 with error bars.
name Epoch 1 Epoch 5
corruption No adaptation PL TENT TENT+ HLR SLR TENT TENT+ HLR SLR
Gauss 2.44 2.44 32.44±0.10 33.75±0.09 38.39±0.25 39.51±0.23 30.64±0.51 35.19±0.17 41.37±0.09 41.52±0.08 Shot 2.99 2.99 35.01±0.17 36.38±0.19 41.11±0.13 42.09±0.26 33.80±0.74 38.12±0.10 44.04±0.09 42.90±0.08
Impulse 1.96 1.96 34.77±0.09 35.67±0.15 40.28±0.20 41.58±0.04 34.72±1.01 37.43±0.09 43.68±0.06 44.07±0.06 Defocus 17.92 17.92 32.40±0.10 33.43±0.14 38.25±0.32 39.35±0.13 30.13±0.61 34.82±0.25 41.74±0.12 41.69±0.07
Glass 9.82 9.82 31.62±0.15 33.25±0.01 38.18±0.08 39.02±0.09 29.05±0.21 34.95±0.13 41.09±0.17 40.78±0.08 Motion 14.78 14.78 47.23±0.11 47.66±0.12 51.63±0.08 52.67±0.25 49.08±0.08 50.33±0.07 54.26±0.02 54.76±0.04 Zoom 22.50 22.50 53.09±0.06 53.20±0.07 55.55±0.06 55.80±0.07 53.63±0.16 54.24±0.06 56.43±0.07 56.59±0.05 Snow 16.89 16.89 51.61±0.05 52.06±0.09 55.45±0.11 55.92±0.06 52.86±0.13 53.88±0.07 57.03±0.12 57.35±0.03 Frost 23.31 23.31 43.26±0.30 44.85±0.20 48.96±0.07 49.64±0.14 38.47±0.50 46.28±0.27 50.81±0.08 51.01±0.02 Fog 24.43 24.43 60.42±0.08 60.60±0.05 62.19±0.03 62.62±0.04 61.13±0.08 61.50±0.05 63.05±0.04 63.53±0.08 Bright 58.93 58.93 68.85±0.02 68.93±0.03 68.17±0.01 68.47±0.05 68.81±0.06 69.07±0.06 68.29±0.09 68.72±0.10 Contrast 5.43 5.43 24.39±0.98 33.43±0.77 49.47±0.20 50.27±0.08 10.72±0.32 29.87±1.36 50.98±2.54 50.65±0.55 Elastic 16.95 16.95 58.53±0.05 58.94±0.05 60.34±0.18 60.80±0.08 59.25±0.06 60.01±0.02 61.15±0.04 61.49±0.07 Pixel 20.61 20.61 61.62±0.06 61.75±0.07 62.51±0.10 63.01±0.08 62.15±0.04 62.61±0.08 63.08±0.06 63.46±0.08 JPEG 31.65 31.65 56.00±0.09 56.21±0.05 57.42±0.13 57.80±0.04 56.44±0.07 57.09±0.02 58.13±0.09 58.32±0.05
Table A6: Test-time adaptation of ResNet50 on ImageNet-C at highest severity level 5 with and without the pseudo labeling strategy (Liang et al., 2020).
name No pseudo labeling: Epoch 5 Pseudo labeling: Epoch 5
corruption No adaptation TENT+ HLR SLR TENT+ HLR SLR
Gauss 2.44 33.97±0.17 41.37±0.09 41.52±0.08 34.08±0.11 34.88±0.35 35.58±0.06 Shot 2.99 37.95±0.10 44.04±0.09 42.90±0.08 36.74±0.26 37.61±0.49 37.98±0.19 Impulse 1.96 36.93±0.09 43.68±0.06 44.07±0.06 36.69±0.04 37.24±0.22 37.77±0.05 Defocus 17.92 32.69±0.25 41.74±0.12 41.69±0.07 33.99±0.28 34.76±0.11 35.11±0.10
Glass 9.82 33.36±0.13 41.09±0.17 40.78±0.08 34.06±0.12 34.51±0.30 34.81±0.27 Motion 14.78 51.42±0.07 54.26±0.02 54.76±0.04 50.91±0.09 48.96±0.39 49.46±0.20 Zoom 22.50 54.33±0.06 56.43±0.07 56.59±0.05 54.10±0.10 52.49±0.02 52.50±0.23 Snow 16.89 54.55±0.07 57.03±0.12 57.35±0.03 54.06±0.08 52.49±0.19 52.95±0.07 Frost 23.31 45.80±0.27 50.81±0.08 51.01±0.02 44.44±0.07 45.47±0.26 46.06±0.20 Fog 24.43 62.09±0.05 63.05±0.04 63.53±0.08 61.91±0.08 59.66±0.14 59.98±0.12 Bright 58.93 69.03±0.06 68.29±0.09 68.72±0.10 68.98±0.02 65.59±0.06 66.00±0.03 Contrast 5.43 24.08±1.36 50.98±2.54 50.65±0.55 29.37±0.95 44.58±0.38 45.64±0.47 Elastic 16.95 60.36±0.02 61.15±0.04 61.49±0.07 60.23±0.05 57.48±0.14 57.87±0.04 Pixel 20.61 63.10±0.08 63.08±0.06 63.46±0.08 62.98±0.04 59.72±0.02 60.05±0.14 JPEG 31.65 57.21±0.02 58.13±0.09 58.32±0.05 57.09±0.04 54.72±0.09 54.88±0.07
the average results from three different random seeds and also shows that SLR outperforms TENT+ on this dataset.
Domain adaptation from SVHN to MNIST / MNIST-M / USPS: A ResNet26 is trained on the SVHN dataset for 50 epochs with batch size 128 using SGD with momentum 0.9 and an initial learning rate of 0.01, which drops to 0.001 and 0.0001 at the 25th and 40th epoch respectively. ResNet26 obtains 96.49% test accuracy on SVHN. Domain adaptation of the SVHN-trained ResNet26 to MNIST/MNIST-M/USPS
Table A7: Test-time adaptation of modified ResNet50 (following (Liang et al., 2020)) on ImageNet-C at highest severity level 5 with pseudo labeling strategy at epoch 1 and epoch 5.
name Pseudo labeling: Epoch 1 Pseudo labeling: Epoch 5
corruption No adaptation TENT+ HLR SLR TENT+ HLR SLR
Gauss 2.95 31.03±0.18 34.65±0.28 37.21±0.23 35.26±0.16 35.93±0.23 37.61±0.30 Shot 3.65 33.55±0.07 38.09±0.30 40.30±0.09 37.39±0.05 38.95±0.16 40.42±0.06 Impulse 2.54 32.70±0.07 36.95±0.05 39.73±0.07 38.16±0.08 38.13±0.04 40.12±0.11 Defocus 19.36 31.66±0.15 35.08±0.05 37.18±0.15 35.95±0.17 36.72±0.13 37.96±0.25
Glass 9.72 31.06±0.06 35.46±0.12 37.62±0.10 35.98±0.04 36.84±0.11 37.90±0.02 Motion 15.66 46.96±0.12 49.95±0.12 51.87±0.14 52.24±0.02 51.90±0.12 52.76±0.09 Zoom 22.20 52.45±0.02 54.15±0.22 54.84±0.18 54.80±0.07 54.84±0.09 54.95±0.14 Snow 17.56 51.79±0.05 53.98±0.06 55.44±0.04 55.15±0.02 55.27±0.20 55.75±0.02 Frost 24.11 45.59±0.06 47.87±0.03 48.96±0.11 48.10±0.20 48.52±0.11 49.13±0.20 Fog 25.59 60.33±0.03 61.55±0.10 62.21±0.16 62.39±0.03 62.38±0.12 62.38±0.11 Bright 58.30 68.84±0.04 68.44±0.04 68.60±0.10 69.13±0.04 68.50±0.02 68.47±0.09 Contrast 6.49 42.34±0.19 47.98±0.13 50.32±0.28 42.11±0.15 49.22±0.42 50.80±0.19 Elastic 17.72 58.47±0.02 59.70±0.06 60.30±0.09 60.40±0.04 60.27±0.22 60.45±0.21 Pixel 21.29 61.39±0.06 62.10±0.07 62.71±0.10 63.04±0.02 62.71±0.07 62.81±0.07 JPEG 32.13 55.22±0.03 56.49±0.07 57.04±0.07 57.21±0.06 57.25±0.07 57.37±0.05
Table A8: Test-time adaptation of modified ResNet50 (following (Liang et al., 2020)) on ImageNet-C at highest severity level 5 with and without pseudo labeling strategy.
name No Pseudo labeling: Epoch 5 Pseudo labeling: Epoch 5
corruption No adaptation TENT+ HLR SLR TENT+ HLR SLR
Gauss 2.95 34.96±0.08 38.58±0.12 39.72±0.13 35.26±0.16 35.93±0.23 37.61±0.30 Shot 3.65 37.22±0.17 41.59±0.09 42.45±0.05 37.39±0.05 38.95±0.16 40.42±0.06 Impulse 2.54 37.82±0.04 40.88±0.07 42.39±0.03 38.16±0.08 38.13±0.04 40.12±0.11 Defocus 19.36 34.46±0.12 39.22±0.15 39.78±0.09 35.95±0.17 36.72±0.13 37.96±0.25
Glass 9.72 35.12±0.05 38.83±0.13 39.37±0.07 35.98±0.04 36.84±0.11 37.90±0.02 Motion 15.66 51.91±0.09 53.23±0.05 54.00 52.24±0.02 51.90±0.12 52.76±0.09 Zoom 22.20 54.57±0.05 55.76±0.04 55.79±0.02 54.80±0.07 54.84±0.09 54.95±0.14 Snow 17.56 55.02±0.05 56.35±0.12 56.80±0.04 55.15±0.02 55.27±0.20 55.75±0.02 Frost 24.11 48.18±0.09 49.86±0.22 50.43±0.08 48.10±0.20 48.52±0.11 49.13±0.20 Fog 25.59 62.24±0.04 62.90±0.06 63.29±0.06 62.39±0.03 62 | 1. What is the focus of the paper regarding test-time adaptation?
2. What are the strengths of the proposed approach, particularly its effectiveness and intuitive nature?
3. What are the weaknesses of the paper, such as the lack of proper ablation studies and potential limitations of the proposed loss?
4. Do you have any questions or concerns regarding the proposed approach, specifically regarding its combination of existing methods and the behavior of the loss function? | Summary Of The Paper
Review | Summary Of The Paper
Studies the problem of test-time adaptation across distribution shift, and proposes i) a new self-training loss with better stability than entropy minimization ii) using a diversity regularizer and iii) an additional “input transformation” module. The approach is found to lead to improved performance on standard test-time adaptation settings.
Review
Strengths
– The paper is mostly well written and easy to follow
– The proposed approach is intuitive, appears effective, and consistently outperforms competing methods for test-time adaptation
– The paper includes a comprehensive set of experiments
– The paper does well to compare with existing entropy minimization alternatives like MaxSquares and Charbonnier penalty. I would recommend including those results and a more detailed discussion in the main paper rather than appendix.
Weaknesses
– The proposed approach is largely a combination of existing methods – TENT (Wang et al., ICLR 2021), negative log-likelihood ratio loss (Yao et al., 2020), and batch-level diversity regularization (Li et al., arXiv 2020, Prabhu et al., ICCV 2021 [A]), for the test-time adaptation setting.
[A] Prabhu, Viraj, et al. "Sentry: Selective entropy optimization via committee consistency for unsupervised domain adaptation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
– The paper lacks a proper ablation study: while the input transformation module is ablated, what is the individual contribution of each proposed piece?
– The proposed loss does appear to have certain limitations, as it is unbounded and relies on proper scaling via model design (eg. batch norm layers) to prevent logit explosion.
– “the soft likelihood ratio loss creates lower amplitude gradients for low-confidence self-supervision”: this does not appear to match Figure 1 (right), where SLR is slightly larger than HLR for low confidence (<0.2). Further, both SLR and HLR actually appear to have large gradients in this confidence regime as compared to hard pseudolabels – is this not problematic, since that would effectively upweight very low confidence predictions?
----post-rebuttal----
The author response had addressed my concerns about the behavior of the proposed loss. In light of the paper's empirical contributions but limited technical novelty, I recommend a marginal accept. |
ICLR | Title
Test-Time Adaptation to Distribution Shifts by Confidence Maximization and Input Transformation
Abstract
Deep neural networks often exhibit poor performance on data that is unlikely under the train-time data distribution, for instance data affected by corruptions. Previous works demonstrate that test-time adaptation to data shift, for instance using entropy minimization, effectively improves performance on such shifted distributions. This paper focuses on the fully test-time adaptation setting, where only unlabeled data from the target distribution is required. This allows adapting arbitrary pretrained networks. Specifically, we propose a novel loss that improves test-time adaptation by addressing both premature convergence and instability of entropy minimization. This is achieved by replacing the entropy by a non-saturating surrogate and adding a diversity regularizer based on batch-wise entropy maximization that prevents convergence to trivial collapsed solutions. Moreover, we propose to prepend an input transformation module to the network that can partially undo test-time distribution shifts. Surprisingly, this preprocessing can be learned solely using the fully test-time adaptation loss in an end-to-end fashion without any target domain labels or source domain data. We show that our approach outperforms previous work in improving the robustness of publicly available pretrained image classifiers to common corruptions on such challenging benchmarks as ImageNet-C.
1 INTRODUCTION
Deep neural networks achieve impressive performance on test data that has the same distribution as the training data. Nevertheless, they often exhibit a large performance drop on test (target) data that differs from the training (source) data; this effect is known as data shift (Quiñonero-Candela et al., 2009) and can be caused, for instance, by image corruptions. There exist different methods to improve the robustness of the model during training (Geirhos et al., 2019; Hendrycks et al., 2019; Tzeng et al., 2017). However, generalization to different data shifts is limited since it is infeasible to include sufficiently many augmentations during training to cover the excessively wide range of potential data shifts (Mintun et al., 2021a). Alternatively, in order to generalize to the data shift at hand, the model can be adapted at test time. Unsupervised domain adaptation methods such as Vu et al. (2019) use both source and target data to improve the model performance at test time. In general, source data might not be available at inference time, e.g., due to legal constraints (privacy or profit). Therefore, we focus on the fully test-time adaptation setting (Wang et al., 2020): the model is adapted to the target data at test time given only arbitrary pretrained model parameters and unlabeled target data that share the same label space as the source data. We extend the work of Wang et al. (2020) by introducing a novel loss function, using a diversity regularizer, and prepending a parametrized input transformation module to the network. We show that our approach outperforms previous works and makes pretrained models robust against common corruptions on image classification benchmarks such as ImageNet-C (Hendrycks & Dietterich, 2019) and ImageNet-R (Hendrycks et al., 2020).
Sun et al. (2020) investigate test-time adaptation using a self-supervision task. Wang et al. (2020) and Liang et al. (2020) use the entropy minimization loss, which uses maximization of prediction confidence as the self-supervision signal during test-time adaptation. Wang et al. (2020) have shown that such a loss yields better adaptation than a proxy task (Sun et al., 2020). When using entropy minimization, however, high confidence predictions no longer contribute significantly to the loss and thus provide little self-supervision. This is a drawback since high-confidence samples provide the most
trustworthy self-supervision. We mitigate this by introducing two novel loss functions that ensure that gradients of samples with high confidence predictions do not vanish and learning based on self-supervision from these samples continues. Our losses do not focus on minimizing entropy but on minimizing the negative log likelihood ratio between classes; the two variants differ in using either soft or hard pseudo-labels. In contrast to entropy minimization, the proposed loss functions provide non-saturating gradients, even for highly confident predictions. Figure 1 provides an illustration of the losses and the resulting gradients. Using these new loss functions, we are able to improve the network performance under data shifts in both online and offline adaptation settings.
In general, self-supervision by confidence maximization can lead to collapsed trivial solutions, which make the network predict only a single class or a small set of classes independent of the input. To overcome this issue, a diversity regularizer (Liang et al., 2020; Wu et al., 2020) that acts on a batch of samples can be used. It encourages the network to make diverse class predictions on different samples. We extend the regularizer by including a moving average in order to incorporate the history of the previous batches, and show that this stabilizes the adaptation of the network to unlabeled test samples. Furthermore, we introduce a parametrized input transformation module, which we prepend to the network. The module is trained in a fully test-time adaptation manner using the proposed loss function, without using source data or target labels. It aims to partially undo the data shift at hand and helps to further improve the performance on image classification benchmarks with corruptions.
Since our method does not change the training process, it allows using any pretrained model. This is beneficial because any well-performing pretrained network can be readily reused, e.g., a network trained on some proprietary data not available to the public. We show that our method significantly improves the performance of different pretrained models that are trained on clean ImageNet data.
In summary, our main contributions are as follows: we propose non-saturating losses based on the negative log likelihood ratio, such that gradients from high confidence predictions still contribute to test-time adaptation. We extend the diversity regularizer with a moving average that includes the history of previous batch samples to prevent the model from collapsing to trivial solutions. We also introduce an input transformation module, which partially undoes the data shift at hand. We show that the performance of different pretrained models can be significantly improved on ImageNet-C and ImageNet-R.
2 RELATED WORK
Common image corruptions are potentially stochastic image transformations motivated by real-world effects that can be used for evaluating a model’s robustness. One such benchmark, ImageNet-C (Hendrycks & Dietterich, 2019), contains simulated corruptions such as noise, blur, weather effects, and digital image transformations. Additionally, Hendrycks et al. (2020) proposed three data sets containing real-world distribution shifts, including ImageNet-R. Most proposals for improving robustness involve special training protocols, requiring time and additional resources. This includes data augmentation like Gaussian noise (Ford et al., 2019; Lopes et al., 2019; Hendrycks et al., 2020), CutMix (Yun et al., 2019), AugMix (Hendrycks et al., 2019), training on stylized images (Geirhos et al., 2019; Kamann et al., 2020), or training against adversarial noise distributions (Rusak et al., 2020a). Mintun et al. (2021b) pointed out that many improvements on ImageNet-C are due to data augmentations which are too similar to the test corruptions, that is, overfitting to ImageNet-C occurs. Thus, the model might be less robust to corruptions not included in the test set of ImageNet-C.
Unsupervised domain adaptation methods train a joint model of the source and target domain with cross-domain losses to find more general and robust features, e.g., they optimize feature alignment between domains (Quiñonero-Candela et al., 2008; Sun et al., 2017), adversarial invariance (Ganin & Lempitsky, 2015; Tzeng et al., 2017; Ganin et al., 2016; Hoffman et al., 2018), shared proxy tasks (Sun et al., 2019), or adapt entropy minimization via an adversarial loss (Vu et al., 2019). While these approaches are effective, they require explicit access to source and target data at the same time, which may not always be feasible. Our approach works with any pretrained model and only needs target data.
Test-time adaptation is the setting where training (source) data is unavailable at test time. It is related to source-free adaptation, where several works use generative models, alter training (Kundu et al., 2020; Li et al., 2020b; Kurmi et al., 2021; Yeh et al., 2021), and require several thousand epochs to adapt to the target data (Li et al., 2020b; Yeh et al., 2021). Besides, there is another line of work (Sun et al., 2020; Schneider et al., 2020; Nado et al., 2021; Benz et al., 2021; Wang et al., 2020) that
interprets the common corruptions as data shift and aims to improve the model's robustness against these corruptions with an efficient test-time adaptation strategy that facilitates online adaptation. Such a setting spares the cost of additional computational overhead. Our work also falls into this line of research and aims to adapt the model to common corruptions efficiently with both online and offline adaptation.
Sun et al. (2020) update feature extractor parameters at test time via a self-supervised proxy task (predicting image rotations). However, Sun et al. (2020) alter the training procedure by including the proxy loss in the optimization objective as well; hence, arbitrary pretrained models cannot be used directly for test-time adaptation. Inspired by domain adaptation strategies (Maria Carlucci et al., 2017; Li et al., 2016), several works (Schneider et al., 2020; Nado et al., 2021; Benz et al., 2021) replace the estimates of Batch Normalization (BN) activation statistics with the statistics of the corrupted test images. Fully test-time adaptation, studied by Wang et al. (2020) (TENT), uses entropy minimization to update the channel-wise affine parameters of BN layers on corrupted data along with the batch statistics estimates. SHOT (Liang et al., 2020) also uses entropy minimization and a diversity regularizer to avoid collapsed solutions. SHOT modifies the model from the standard setting by adopting weight normalization at the fully connected classifier layer during training to facilitate their pseudo labeling technique. Hence, SHOT is not readily applicable to arbitrary pretrained models.
We show that pure entropy minimization (Wang et al., 2020; Liang et al., 2020) as well as alternatives such as the max square loss (Chen et al., 2019) and the Charbonnier penalty (Yang & Soatto, 2020) result in vanishing gradients for high confidence predictions, thus inhibiting learning. Our work addresses this issue by proposing a novel non-saturating loss that provides non-vanishing gradients for high confidence predictions. We show that our proposed loss function improves the network performance through test-time adaptation. In particular, performance on corruptions of higher severity improves significantly. Furthermore, we add and extend the diversity regularizer (Liang et al., 2020; Wu et al., 2020) to avoid collapse to trivial, high confidence solutions. Existing diversity regularizers (Liang et al., 2020; Wu et al., 2020) act on a batch of samples, hence the number of classes has to be smaller than the batch size. We mitigate this problem by extending the regularizer to a moving average version. Li et al. (2020a) also use a moving average to estimate the entropy of the unconditional class distribution, but source data is used to estimate the gradient of the entropy. In contrast, our work does not need access to the source data since the gradient is estimated using only target data. Prior work (Tzeng et al., 2017; Rusak et al., 2020b; Talebi & Milanfar, 2021) transformed inputs by an additional module to overcome domain shift, obtain robust models, and also to learn to resize. In our work, we prepend an input transformation module to the model, but in contrast to former works, this module is trained purely at test time to partially undo the data shift at hand to aid the adaptation.
3 METHOD
We propose a novel method for fully test-time adaptation. We assume that a neural network fθ with parameters θ is available that was trained on data from some distribution D, as well as a set of (unlabeled) samples X ∼ D′ from a target distribution D′ ≠ D (importantly, no samples from D are required). We frame fully test-time adaptation as a two-step process: (i) Generate a novel network gφ based on fθ, where φ denotes the parameters that are adapted. A simple variant for this is g = f and φ ⊆ θ (Wang et al., 2020). However, we propose a more expressive and flexible variant in Section 3.1. (ii) Adapt the parameters φ of g on X using an unsupervised loss function L. We propose two novel losses Lslr and Lhlr in Section 3.2 that have non-vanishing gradients for high-confidence self-supervision.
3.1 INPUT TRANSFORMATION
We propose to define the adaptable model as g = f ◦ d. That is, we prepend a trainable network d to f. The motivation for the additional component d is to increase the expressivity of g such that it can learn to (partially) undo the domain shift D → D′. Specifically, we choose d(x) = γ · [τx + (1 − τ)rψ(x)] + β, where τ ∈ R, (β, γ) ∈ R^nin with nin being the number of input channels, rψ being a network with identical input and output shape, and · denoting elementwise multiplication. Here, β and γ implement a channel-wise affine transformation and τ implements a convex combination of the unchanged input and the transformed input rψ(x). By choosing τ = 1, γ = 1, β = 0, we ensure d(x) = x and thus g = f at initialization. In principle, rψ can be chosen arbitrarily. Here, we choose rψ as a simple stack of 3×3 convolutions, group normalization, and ReLUs (see Sec. A.2 for details). However, exploring other choices would be an interesting avenue for future work.
Importantly, while the motivation for d is to learn to partially undo a domain shift D → D′, we train d end-to-end in the fully test-time adaptation setting on data X ∼ D′, without any access to samples from the source domain D, based on the losses proposed in Section 3.2. The modulation parameters of gφ are φ = (β, γ, τ, ψ, θ′), where θ′ ⊆ θ. That is, we adapt only a subset of the parameters θ of the pretrained network f . We largely follow Wang et al. (2020) in adapting only the affine parameters of normalization layers in f while keeping parameters of convolutional kernels unchanged. Additionally, batch normalization statistics (if any) are adapted to the target distribution.
Note that the proposed method is applicable to any pretrained network that contains normalization layers with a channel-wise affine transformation. For networks with no affine transformation layers, one can add such layers into f that are initialized to identity as part of model augmentation.
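For concreteness, the following is a minimal PyTorch sketch (ours; the helper name is not from the paper) of how the adaptable parameter set φ = (β, γ, τ, ψ, θ′) could be collected for a pretrained f and a prepended input transformation d; the additional freezing of top layers discussed in Sec. 3.2.2 and A.3 is omitted here.

```python
import torch.nn as nn

def adaptable_parameters(f: nn.Module, d: nn.Module):
    """phi = (beta, gamma, tau, psi, theta'): all parameters of the input
    transformation d plus the affine parameters of f's normalization layers."""
    for p in f.parameters():
        p.requires_grad_(False)            # convolutional kernels etc. stay fixed
    params = list(d.parameters())          # beta, gamma, tau, psi
    for m in f.modules():
        if isinstance(m, (nn.BatchNorm2d, nn.GroupNorm, nn.LayerNorm)):
            m.train()                      # BN statistics are re-estimated on target data
            for p in (m.weight, m.bias):
                if p is not None:
                    p.requires_grad_(True)
                    params.append(p)
    return params
```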
3.2 ADAPTATION OBJECTIVE
We propose a loss function L = Ldiv + δLconf for fully test-time network adaptation that consists of two components: (i) a term Ldiv that encourages the predictions of the network over the adaptation dataset X to match a target distribution pD′(y). This can help avoid test-time adaptation collapsing to overly narrow distributions such as always predicting the same or very few classes. (ii) A term Lconf that encourages high confidence predictions on individual datapoints. We note that test-time entropy minimization (TENT) (Wang et al., 2020) fits into this framework by choosing Ldiv = 0 and Lconf as the entropy.
3.2.1 CLASS DISTRIBUTION MATCHING Ldiv
Assuming knowledge of the class distribution pD′(y) on the target domain D′, we propose to add a term to the loss that encourages the empirical distribution of (soft) predictions of gφ on X to match this distribution. Specifically, let p̂gφ(y) be an estimate of the distribution of (soft) predictions of gφ. We use the Kullback-Leibler divergence Ldiv = DKL(p̂gφ(y) || pD′(y)) as the loss term. In some applications, information about the target class distribution is available, e.g., for medical data it might be known that there is a large class imbalance. In general, this information is not available, and here we assume a uniform distribution for pD′(y), which corresponds to maximizing the entropy H(p̂gφ(y)). A similar assumption is made in SHOT to circumvent collapsed solutions.
Since the estimate p̂gφ(y) depends on φ, which is continuously adapted, it needs to be re-estimated on a per-batch level. Since re-estimating p̂gφ(y) from scratch would be computationally expensive, we propose to use a running estimate that tracks the changes of φ as follows: let pt−1(y) be the estimate at iteration t − 1 and p^emp_t = (1/n) ∑_{k=1}^{n} ŷ(k), where ŷ(k) are the predictions (confidences) of gφ on a mini-batch of n inputs x(k) ∼ X. We update the running estimate via pt(y) = κ · sg(pt−1(y)) + (1 − κ) · p^emp_t, where sg denotes the stop-gradient operation. The loss becomes Ldiv = DKL(pt(y) || pD′(y)) accordingly. Unlike Li et al. (2020a), our approach only requires target but no source data to estimate the gradient.
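A minimal sketch (ours) of this class-distribution-matching term with the running estimate; the class name and the uniform default for pD′(y) follow the description above, and the small ε added inside the logarithms is an implementation detail for numerical stability.

```python
import torch

class RunningDiversityLoss:
    """L_div = KL(p_t(y) || p_D'(y)), where p_t(y) is a running estimate of the
    model's average prediction: p_t = kappa * sg(p_{t-1}) + (1 - kappa) * p_t^emp."""

    def __init__(self, num_classes: int, kappa: float = 0.9, target_dist=None):
        self.kappa = kappa
        self.p_running = None  # p_{t-1}(y), initialized at the first batch
        if target_dist is None:
            # uniform target class distribution unless prior knowledge is available
            target_dist = torch.full((num_classes,), 1.0 / num_classes)
        self.target = target_dist

    def __call__(self, probs: torch.Tensor) -> torch.Tensor:
        # probs: (batch, num_classes) softmax outputs of the adapted model g_phi
        p_emp = probs.mean(dim=0)
        if self.p_running is None:
            p_t = p_emp
        else:
            # stop-gradient on the previous estimate; gradients only flow through p_emp
            p_t = self.kappa * self.p_running.detach() + (1.0 - self.kappa) * p_emp
        self.p_running = p_t.detach()
        target = self.target.to(probs.device)
        eps = 1e-8
        # KL(p_t || p_target) = sum_y p_t(y) * (log p_t(y) - log p_target(y))
        return torch.sum(p_t * (torch.log(p_t + eps) - torch.log(target + eps)))
```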
3.2.2 CONFIDENCE MAXIMIZATION Lconf
We motivate our choice of Lconf step-by-step from the (unavailable) supervised cross-entropy loss: for this, let ŷ = gφ(x) be the predictions (confidences) of model gφ and H(ŷ, yr) = −∑c yrc log ŷc be the cross-entropy between the prediction ŷ and some reference yr. Let the last layer of g be a softmax activation layer. That is, ŷ = softmax(o), where o are the network’s logits. We can rewrite the cross-entropy in terms of the logits o and a one-hot reference yr as follows: H(softmax(o), yr) = −ocr + log ∑_{i=1}^{ncl} e^oi, where cr is the index of the 1 in yr and ncl is the number of classes.
If labels were available for the target domain (which we do not assume) in the form of a one-hot encoded reference yt for data xt, one could use the supervised cross-entropy loss by setting yr = yt and using Lsup(ŷ, yr) = H(ŷ, yr) = H(ŷ, yt). Since fully test-time adaptation assumes no label information, the supervised cross-entropy loss is not applicable and other options for yr need to be used.
One option is (hard) pseudo-labels. That is, one defines the reference yr based on the network predictions ŷ via yr = onehot(ŷ), where onehot creates a one-hot reference with the 1 corresponding to the class with maximal confidence in ŷ. This results in Lpl(ŷ) = H(ŷ, onehot(ŷ)) = − log ŷc∗ , with c∗ = argmax ŷ. One disadvantage with this loss is that the (hard) pseudo-labels ignore uncertainty in the network predictions during self-supervision. This results in large gradient magnitudes with
respect to the logits, |∂Lpl/∂oc∗|, being generated on data where the network has low confidence (see Figure 1). This is undesirable since it corresponds to the network being affected most by data points where the network’s self-supervision is least reliable.¹
An alternative is to use soft pseudo-labels, that is, yr = ŷ. This takes uncertainty in network predictions into account during self-labelling and results in the entropy minimization loss of TENT (Wang et al., 2020): Lent(ŷ) = H(ŷ, ŷ) = H(ŷ) = −∑c ŷc log ŷc. However, for the entropy as well, the logits’ gradient magnitude |∂Lent/∂o| goes to 0 when one of the entries in ŷ goes to 1 (see Figure 1). For a binary classification task, for instance, the maximal logits’ gradient amplitude is obtained for ŷ ≈ (0.82, 0.18). This implies that during later stages of test-time adaptation, where many predictions typically already have high confidence (significantly above 0.82), gradients are dominated by datapoints with relatively low confidence in self-supervision.
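The saturation behavior described above can be checked numerically; the following small autograd snippet (ours) confirms that for a binary classifier with logits (o, 0) the entropy gradient peaks near ŷ ≈ (0.82, 0.18) and vanishes for confident predictions, whereas the HLR gradient with respect to the maximum logit stays at −1 (cf. Sec. 3.2.2).

```python
import torch

def entropy_from_logit(o):
    # binary classifier with logits (o, 0): predicted confidence p = sigmoid(o)
    p = torch.sigmoid(o)
    return -(p * torch.log(p) + (1 - p) * torch.log(1 - p))

# |dH/do| over a range of confidences: it peaks near p ~ 0.82 and then vanishes
o_grid = torch.linspace(0.01, 8.0, 2000, requires_grad=True)
entropy_from_logit(o_grid).sum().backward()
grad_mag = o_grid.grad.abs()
probs = torch.sigmoid(o_grid.detach())
print("confidence at maximal |dH/do|:", probs[grad_mag.argmax()].item())               # ~0.82
print("|dH/do| at confidence ~0.99:", grad_mag[(probs - 0.99).abs().argmin()].item())  # far below the peak

# HLR in the binary case: L_hlr = -log(p / (1 - p)) = -o, so dL_hlr/do = -1 at any confidence
o = torch.tensor(5.0, requires_grad=True)
p = torch.sigmoid(o)
(-(torch.log(p) - torch.log(1 - p))).backward()
print("dL_hlr/do:", o.grad.item())  # -1.0
```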
While both hard and soft pseudo-labels are clearly motivated, they are not optimal in conjunction with a gradient-based optimizer since the self-supervision from low confidence predictions dominates (at least during later stages of training). We address this issue by proposing two losses that increase the gradient amplitude from high confidence predictions. We argue that this leads to stronger self-supervision (a better gradient direction when averaged over the batch) than the entropy loss (see also Sec. A.1 for an illustrative example supporting this claim). The two losses are analogous to Lpl and Lent, but they are not based on the cross-entropy H but on the negative log likelihood ratios:
R(ŷ, yr) = −∑c yrc log( ŷc / ∑i≠c ŷi ) = −∑c yrc (log ŷc − log ∑i≠c ŷi) = H(ŷ, yr) + ∑c yrc log ∑i≠c ŷi
Note that while the entropy H is lower bounded by 0, R can get arbitrarily small if yrc → 1 while the sum ∑i≠c ŷi → 0 and thus log ∑i≠c ŷi → −∞. This property will induce non-vanishing gradients for high confidence predictions.
The first loss we consider is the hard likelihood ratio loss that is defined similarly to the hard pseudo-labels loss Lpl:
Lhlr(ŷ) = R(ŷ, onehot(ŷ)) = −log( ŷc∗ / ∑i≠c∗ ŷi ) = −log( e^oc∗ / ∑i≠c∗ e^oi ) = −oc∗ + log ∑i≠c∗ e^oi,
1The prediction confidence for a datapoint can be interpreted as a proxy for its distance to the decision boundary. A low confidence prediction indicates that a datapoint appears to be close to the decision boundary and the model is less certain on which side of the decision boundary the datapoint should lie. We call this "low confidence self-supervision" since the direction of the gradient becomes ambiguous.
where c∗ = argmax ŷ. We note that ∂Lhlr/∂oc∗ = −1; thus, high-confidence self-supervision also contributes equally to the maximum logit’s gradient. This loss was independently proposed as the negative log likelihood ratio loss by Yao et al. (2020) as a replacement for the fully supervised cross-entropy loss in classification tasks. However, to the best of our knowledge, we are the first to motivate and identify the advantages of this loss for self-supervised learning and test-time adaptation due to its non-saturating gradient property.
In addition to Lhlr, we also account for uncertainty in network predictions during self-labelling in a similar way as for the entropy loss Lent, and propose the soft likelihood ratio loss:
Lslr(ŷ) = R(ŷ, ŷ) = −∑c ŷc · log( ŷc / ∑i≠c ŷi ) = ∑c ŷc (−oc + log ∑i≠c e^oi)
We note that as ŷc∗ → 1, Lslr(ŷ) → Lhlr(ŷ). Thus the asymptotic behavior of the two likelihood ratio losses for high confidence predictions is the same. However, the soft likelihood ratio loss creates lower amplitude gradients for low confidence self-supervision. We provide illustrations of the discussed losses and the resulting logits’ gradients in Figure 1. Furthermore, an illustration of other losses like the max square loss and Charbonnier penalty can be found in Sec. A.7.
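A minimal PyTorch sketch (ours) of the two proposed losses, computed directly from the logits via a log-sum-exp over all classes except the one under consideration; TENT's entropy objective is included for comparison.

```python
import torch
import torch.nn.functional as F

def logsumexp_excluding_own_class(logits: torch.Tensor) -> torch.Tensor:
    """Return a (batch, C) tensor whose entry [b, c] equals log sum_{i != c} exp(o_i)."""
    batch, num_classes = logits.shape
    mask = torch.eye(num_classes, dtype=torch.bool, device=logits.device)
    expanded = logits.unsqueeze(1).expand(batch, num_classes, num_classes)
    expanded = expanded.masked_fill(mask.unsqueeze(0), float("-inf"))
    return torch.logsumexp(expanded, dim=2)

def hlr_loss(logits: torch.Tensor) -> torch.Tensor:
    """Hard likelihood ratio: L_hlr = -o_{c*} + log sum_{i != c*} exp(o_i)."""
    lse_excl = logsumexp_excluding_own_class(logits)
    c_star = logits.argmax(dim=1, keepdim=True)
    loss = -logits.gather(1, c_star) + lse_excl.gather(1, c_star)
    return loss.mean()

def slr_loss(logits: torch.Tensor) -> torch.Tensor:
    """Soft likelihood ratio: L_slr = sum_c yhat_c * (-o_c + log sum_{i != c} exp(o_i))."""
    probs = F.softmax(logits, dim=1)
    lse_excl = logsumexp_excluding_own_class(logits)
    return (probs * (-logits + lse_excl)).sum(dim=1).mean()

def entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    """TENT's entropy objective, shown for comparison."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1).mean()
```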
We note that both likelihood ratio losses would typically encourage the network to simply scale its logits larger and larger, since this would reduce the loss even if the ratios between the logits remain constant. However, when finetuning an existing network and restricting the layers that are adapted such that the logits remain approximately scale-normalized, these losses can provide a useful and non-vanishing gradient signal for network adaptation. We achieve this approximate scale normalization by freezing the top layers of the respective networks. In this case, normalization layers such as batch normalization prohibit “logit explosion”. However, predicted confidences can presumably become overconfident; calibrating confidences in a self-supervised test-time adaptation setting is an open and important direction for future work.
4 EXPERIMENTAL SETTINGS
Datasets We evaluate our method on image classification datasets for corruption robustness and domain adaptation. We evaluate on the challenging benchmark ImageNet-C (Hendrycks & Dietterich, 2019), which includes a wide variety of 15 different synthetic corruptions with 5 severity levels, each inducing a data shift. This benchmark also includes 4 additional corruptions as validation data. For domain adaptation, we adapt ImageNet-trained models to ImageNet-R, proposed by Hendrycks et al. (2020). ImageNet-R comprises 30,000 image renditions for 200 ImageNet classes. Domain adaptation on VisDA-C (Peng et al., 2017) and digit classification can be found in Sec. A.6.
Models Our method operates in a fully test-time adaptation setting that allows us to use any arbitrary pretrained model. We use publicly available ImageNet pretrained models ResNet50, DenseNet121, ResNeXt50, MobileNetV2 from torchvision Torch-Contributors (2020). We also test on a robust ResNet50 model trained using DeepAugment+AugMix 2 Hendrycks et al. (2020).
Baseline for fully test-time adaptation Since TENT from Wang et al. (2020) outperformed competing methods and fits the fully test-time adaptation setting, we consider it as a baseline and compare our results to this approach. Similar to TENT, we also adapt model features by estimating the normalization statistics and optimize only the channel-wise affine parameters on the target distribution.
Settings We conduct test-time adaptation on a target distribution with both online and offline updates using the Adam optimizer with learning rate 0.0006 and batch size 64. We set the weight of Lconf in our loss function to δ = 0.025 and κ = 0.9 in the running estimate pt(y) of Ldiv (we investigate the effect of κ in Sec. A.4). Similar to SHOT (Liang et al., 2020), we also choose the target distribution pD′(y) in Ldiv as a uniform distribution over the available classes. For TENT, we use SGD with momentum 0.9, learning rate 0.00025, and batch size 64. These values correspond to the ones of Wang et al. (2020); alternative settings for TENT did not improve performance. For offline updates, we adapt the models for 5 epochs using a cosine decay schedule of the learning rate. We found that the models converge within 3 to 5 epochs and do not improve further. Similar to Wang et al. (2020), we also control for ordering by shuffling the data and sharing the order across the methods.
2From https://github.com/hendrycks/imagenet-r. Owner permitted to use it for research/commercial purposes.
Note that all the hyperparameters are tuned solely on the validation corruptions of ImageNet-C that are disjoint from the test corruptions. As discussed in Section 3.2.2, we freeze all trainable parameters in the top layers of the networks to prohibit “logit explosion”. Normalization statistics are still updated in these layers. Sec. A.3 provides more details regarding frozen layers in different networks.
Furthermore, we prepend a trainable input transformation module d (cf. Sec. 3.1) to the network to partially counteract the data-shift. Note that the parameters of this module discussed in Sec. 3.1 are trainable and subject to optimization. This module is initialized to operate as an identity function prior to adaptation on a target distribution by choosing τ = 1, γ = 1, and β = 0. We adapt the parameters of this module along with the channel-wise affine transformations and normalization statistics in an end-to-end fashion, solely using our proposed loss function along with the optimization details mentioned above. The architecture of this module is discussed in Sec. A.2.
Since Ldiv is independent of Lconf, we also propose to combine Ldiv with TENT, i.e., L = Ldiv + Lent. We denote this as TENT+ and also set κ = 0.9 here. Note that TENT optimizes all channel-wise affine parameters in the network (since entropy is saturating and does not cause logit explosion). For a fair comparison to our method, we also freeze the top layers of the networks in TENT+. We show that adding Ldiv and freezing top layers significantly improves the network's performance over TENT. Note that SHOT (Liang et al., 2020) is the combination of TENT, a batch-level diversity regularizer, and their pseudo labeling strategy. TENT+ can be seen as a variant of SHOT but without the pseudo labeling. Please refer to Sec. A.5 for the test-time adaptation of pretrained models with SHOT.
Note that each corruption and severity in ImageNet-C is treated as a different target distribution and we reset model parameters to their pretrained values before every adaptation. We run our experiments three times with random seeds (2020, 2021, 2022) in PyTorch and report the average accuracies.
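Putting these pieces together, a minimal sketch (ours) of a single online adaptation step with the settings above; slr_loss and div_loss stand for implementations of the objectives in Sec. 3.2, and params for the adaptable affine and input-transformation parameters.

```python
import torch

def make_adaptation_step(model, params, slr_loss, div_loss, lr=6e-4, delta=0.025):
    """Build one online adaptation step with the settings used above
    (Adam, lr 0.0006, delta = 0.025); slr_loss and div_loss follow Sec. 3.2."""
    optimizer = torch.optim.Adam(params, lr=lr)

    def step(batch):
        logits = model(batch)                         # target batch, e.g. of size 64
        probs = torch.softmax(logits, dim=1)
        loss = div_loss(probs) + delta * slr_loss(logits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # in the online setting, evaluation uses the predictions from this same batch
        return logits.detach()

    return step
```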
5 RESULTS
Evaluation on ImageNet-C We adapt different models on the ImageNet-C benchmark using TENT, TENT+, and both the hard likelihood ratio (HLR) and soft likelihood ratio (SLR) losses in an online adaptation setting. Figure 2 (top row) depicts the mean corruption accuracy (mCA%) of each model computed across all the corruptions and severity levels. It can be observed that TENT+ improves over TENT, showcasing the importance of the diversity regularizer Ldiv. Importantly, our methods HLR and SLR outperform TENT and TENT+ across DenseNet121, MobileNetV2, ResNet50, and ResNeXt50, and perform comparably with TENT+ on the robust ResNet50-DeepAugment+Augmix model. This shows that the mCA% of the robust DeepAugment+Augmix model can be further increased from 58% (before adaptation) to 67.5% using test-time adaptation techniques. Here, the average mCA obtained from three different random seeds is depicted along with error bars. The small error bars indicate that the test-time adaptation results are not sensitive to the choice of random seed.
We also illustrate the performance of ResNet50 on the highest severity level across all 15 test corruptions of ImageNet-C in Table 1. Here, online adaptation results are reported along with offline adaptation results at epochs 1 and 5. It can be seen that online adaptation and a single epoch of test-time adaptation improve the performance significantly, with minor further improvements until epoch 5. TENT adaptation for more than one epoch results in reduced performance, and TENT with Ldiv (TENT+) prevents this behavior. Both HLR and SLR clearly and consistently outperform TENT / TENT+ on ResNet50, and SLR also outperforms HLR. We also compare our results with the hard pseudo-labels (PL) objective and with an oracle setting where the ground-truth labels of the target data are used for adapting the model in a supervised manner (GT). Note that this oracle setting is not of practical importance but illustrates the empirical upper bound on fully test-time adaptation performance under the chosen modulation parametrization.
ImageNet-R We online adapt different models on ImageNet-R and depict the results in Figure 2 (middle row). Results show that HLR and SLR clearly outperform TENT and TENT+ and significantly improve performance of all the models, including the model pretrained with DeepAugment+Augmix.
Evaluation with data subsets Above we evaluate the model on the same data that is also used for the test-time adaptation. Here, we test model generalization by adapting on a subset of target data
and evaluate the performance on the whole dataset (in offline setting), which also includes unseen data that is not used for adaptation. We conduct two case studies: (i) adapt on the data from a subset of ImageNet classes and evaluate the performance on the data from all the classes. (ii) Adapt only on a subset of data from each class and test on all seen and unseen samples from the whole dataset.
Figure 3 illustrates the generalization of a ResNet50 adapted on different proportions of the data across different corruptions, both in terms of classes and samples. We observe that adapting a model on a small subset of samples and classes is sufficient to achieve reasonable accuracy on the whole target data. This suggests that the adaptation actually learns to compensate for the data shift rather than overfitting to the adapted samples or classes. The performance of TENT decreases as the number of classes/samples increases, because Lent can converge to trivial collapsed solutions and more data corresponds to more update steps during adaptation. Adding Ldiv, as in TENT+, stabilizes the adaptation process and reduces this issue. Reported are averages over random seeds with error bars.
Input transformation We investigate whether the input transformation (IT) module, trained end-to-end with a ResNet50 and the SLR loss on data of the respective distortion without seeing any source (undistorted) data, can partially undo certain domain shifts of ImageNet-C and also increase accuracy on corrupted data. We measure domain shift via the structural similarity index measure (SSIM) (Wang et al., 2004) between the clean image (unseen by the model) and its distorted version / the output of IT on the distorted version. Following the offline adaptation setting, Table 2 shows that IT increases the SSIM considerably on certain distortions such as Impulse, Contrast, Snow, and Frost. IT also increases SSIM for other types of noise distortions, while it slightly reduces SSIM for the blur distortions, Elastic, Pixelate, and JPEG. When combined with SLR, IT considerably increases accuracy on distortions for which SSIM also increased significantly (for instance +20 percentage points on Impulse, +4 percentage points on Contrast) and never reduces accuracy by more than 0.11 percentage points. More results on online and offline adaptation with TENT / TENT+ can be found in Table A3.
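As a rough sketch (ours) of this SSIM-based measurement, assuming images as float arrays in [0, 1]; the exact keyword arguments of scikit-image's structural_similarity may differ slightly across library versions.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_change(clean: np.ndarray, distorted: np.ndarray, transformed: np.ndarray) -> float:
    """Difference in SSIM to the (unseen) clean image before and after the input
    transformation module; positive values mean IT partially undid the shift."""
    ssim_before = structural_similarity(clean, distorted, channel_axis=-1, data_range=1.0)
    ssim_after = structural_similarity(clean, transformed, channel_axis=-1, data_range=1.0)
    return ssim_after - ssim_before
```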
Clean images As a sanity check, we investigate the effect of test-time adaptation when the target data comes from the same distribution as the training data. For this, we online adapt pretrained models on the clean validation data of ImageNet. The results in Figure 2 (bottom row) show that the performance of SLR/HLR adapted models drops by 0.8 to 1.8 percentage points compared to the pretrained model. We attribute this drop to self-supervision being less reliable than the original full supervision on in-distribution training data. The drop is smaller for TENT and TENT+, presumably because predictions on in-distribution target data are typically highly confident such that there is little gradient and thus little change to the pretrained networks by TENT. In summary, while self-supervision by confidence maximization is a powerful method for adaptation to domain shift, the observed drop when adapting to data from the source domain indicates that there is “no free lunch” in test-time adaptation.
6 CONCLUSION
We propose a method to improve corruption robustness and domain adaptation of models in a fully test-time adaptation setting. Unlike entropy minimization, our proposed loss functions provide non-vanishing gradients for high confident predictions and thus attribute to improved adaptation in a self-supervised manner. We also show that additional diversity regularization on the model predictions is crucial to prevent trivial solutions and stabilize the adaptation process. Lastly, we introduce a trainable input transformation module that partially refines the corrupted samples to support the adaptation. We show that our method improves corruption robustness on ImageNet-C and domain adaptation to ImageNet-R on different ImageNet models. We also show that adaptation on a small fraction of data and classes is sufficient to generalize to unseen target data and classes.
7 ETHICS STATEMENT
We abide by the general ethical principles listed in the ICLR code of ethics. Our work does not include the study of human subjects or dataset releases, and does not raise potential conflicts of interest, discrimination/bias/fairness concerns, or privacy and security issues. Our non-saturating loss increases accuracy but might result in overconfident predictions, which can cause harm in safety-critical downstream applications when not properly calibrated. At the same time, self-supervised confidence maximization might amplify bias in pretrained models. We hope that the diversity regularizer in the loss partially compensates for this issue.
8 REPRODUCIBILITY STATEMENT
We provide complete details of our experimental setup for reproducibility. Sec. 4 provides details of the network architectures, optimizer, learning rate, batch size, choice of hyperparameters of our method and the random seeds used for generating the results. Sec. A.3 provides more details regarding frozen layers in different networks. Sec. A.2 shows the structure of input transformation module used in this work. We will also provide a link to an anonymous downloadable source code as a comment directed to the reviewers and area chairs in the discussion forum.
A APPENDIX
A.1 ILLUSTRATIVE EXAMPLE OF LOG LIKELIHOOD RATIO ADAPTATION OBJECTIVE
A simple 1D example is devised to illustrate the benefits of the proposed log likelihood ratio losses as test-time adaptation objectives. Consider data points (unlabeled) that are sampled from the following bimodal distribution: 0.5 · N(−1, 3) + 0.5 · N(+1, 3), that is, half of the samples come from a normal distribution with mean -1 and the other half from a normal distribution with mean +1 (both having standard deviation 3). We can interpret these two components of the mixture distribution as corresponding to data of two different classes, but class labels are of course unavailable during unsupervised test-time adaptation.
We assume a simple logistic model of the form pθ(y = 1|x) = 1 / (1 + e^−(x+θ)), where x is the value of the data sample and θ is a scalar offset that determines the decision boundary. By construction, we know that the minimum density of the mixture distribution on [−1, 1] is at 0. Since confidence maximization aims at moving the decision boundary to regions in input space with minimum data density (in this case to 0), we can compare different self-supervised confidence maximization losses in the finite data regime as follows: for every finite data sample with N data points {xi} for i = 1, . . . , N and loss function L, we solve θ∗(L) = argmin_{θ∈[−1,1]} L(θ, {xi}), where the loss (such as entropy or SLR) is averaged over all data points. The absolute value |θ∗(L)| then gives us an estimate of the error of the decision boundary parameter for the given data set and loss function. Table A1 provides this error for different loss functions and different numbers of data samples. It can be seen that SLR and HLR clearly outperform the entropy loss (TENT) in all data regimes. The difference between SLR and HLR is generally very small. While SLR seems to be consistently slightly better than HLR, this difference is not statistically significant. We attribute the superiority of SLR/HLR compared to entropy to the fact that all data points have a non-saturating loss, regardless of their distance to the decision boundary. Thus, all data contributes to localizing the decision boundary, while for saturating losses such as the entropy, effectively only "nearby" points determine the decision boundary. This example illustrates that our proposed non-saturating losses are beneficial over the entropy loss for self-supervised confidence maximization.
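A small NumPy sketch (ours) of this toy experiment; the exact numbers will differ from Table A1, which averages 100 runs per sample size.

```python
import numpy as np

def entropy(p):
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def slr(p):
    # binary soft likelihood ratio: -sum_c p_c * log(p_c / (1 - p_c))
    return -(p * np.log(p / (1 - p)) + (1 - p) * np.log((1 - p) / p))

def boundary_error(loss_fn, n_samples, n_runs=100, seed=0):
    rng = np.random.default_rng(seed)
    thetas = np.linspace(-1.0, 1.0, 401)
    errors = []
    for _ in range(n_runs):
        # half the samples from N(-1, 3), half from N(+1, 3)
        x = np.concatenate([rng.normal(-1, 3, n_samples // 2),
                            rng.normal(+1, 3, n_samples // 2)])
        # p_theta(y=1|x) = sigmoid(x + theta), clipped for numerical stability
        p = 1.0 / (1.0 + np.exp(-(x[None, :] + thetas[:, None])))
        p = np.clip(p, 1e-6, 1 - 1e-6)
        mean_loss = loss_fn(p).mean(axis=1)          # average loss per candidate theta
        errors.append(abs(thetas[mean_loss.argmin()]))
    return float(np.mean(errors))

for n in (100, 1000, 10000):
    print(n, "entropy:", boundary_error(entropy, n), "SLR:", boundary_error(slr, n))
```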
Table A1: Illustrates the error of the decision boundary parameter for different loss functions and different number of samples averaged over 100 runs (shown are mean and standard error of mean).
#samples 100 200 500 1000 2000 10000 20000
Entropy 0.487±0.031 0.364±0.029 0.230±0.018 0.152±0.013 0.117±0.009 0.052±0.004 0.033±0.003 HLR 0.357±0.023 0.234±0.018 0.145±0.012 0.094±0.008 0.071±0.006 0.032±0.002 0.022±0.002 SLR 0.332±0.022 0.214±0.017 0.140±0.011 0.088±0.008 0.067±0.006 0.032±0.002 0.021±0.002
A.2 INPUT TRANSFORMATION MODULE
Note that we define our adaptable model as g = f ◦ d, where d is a trainable network prepended to a pretrained neural network f (e.g., a pretrained ResNet50). We choose d(x) = γ · [τx + (1 − τ)rψ(x)] + β, where τ ∈ R, (β, γ) ∈ R^nin with nin being the number of input channels, rψ being a network with identical input and output shape, and · denoting elementwise multiplication. Here, β and γ implement a channel-wise affine transformation and τ implements a convex combination of the unchanged input and the transformed input rψ(x). We set τ = 1, γ = 1, and β = 0 to ensure that d(x) = x and thus g = f at initialization. In principle, rψ can be chosen arbitrarily. Here, we choose rψ as a simple stack of 3×3 convolutions with stride 1 and padding 1, group normalization, and ReLUs without any upsampling/downsampling layers. Specifically, the structure of g is illustrated in Figure A1.
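A minimal PyTorch sketch of the module just described; the depth and hidden width of rψ are our own assumptions, since only the layer types are specified here.

```python
import torch
import torch.nn as nn

class InputTransformation(nn.Module):
    """d(x) = gamma * [tau * x + (1 - tau) * r_psi(x)] + beta, initialized to identity."""

    def __init__(self, in_channels: int = 3, hidden_channels: int = 16, num_blocks: int = 2):
        super().__init__()
        layers, c = [], in_channels
        for _ in range(num_blocks):
            # 3x3 convolutions with stride 1 / padding 1, group norm, ReLU
            layers += [nn.Conv2d(c, hidden_channels, 3, stride=1, padding=1),
                       nn.GroupNorm(8, hidden_channels),
                       nn.ReLU(inplace=True)]
            c = hidden_channels
        layers.append(nn.Conv2d(c, in_channels, 3, stride=1, padding=1))  # back to input shape
        self.r_psi = nn.Sequential(*layers)
        # tau = 1, gamma = 1, beta = 0  =>  d(x) = x at initialization
        self.tau = nn.Parameter(torch.tensor(1.0))
        self.gamma = nn.Parameter(torch.ones(1, in_channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, in_channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mixed = self.tau * x + (1.0 - self.tau) * self.r_psi(x)
        return self.gamma * mixed + self.beta

# the adaptable model g = f o d could then be assembled as, e.g.,
# g = nn.Sequential(InputTransformation(), pretrained_resnet50)
```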
In addition to the results reported in Table 2, we also compare TENT and TENT+ with and without Input Transformation (IT) module on ResNet50 for all corruptions at severity level 5 in both online adaptation setting and offline adaptation with 5 epochs in Table A3. Furthermore, we also present the qualitative results of the image transformations from the input transformation module adapted with SLR (offline setting) in Figure A2.
Table A2: Ablation study on the components of input transformation module on ResNet50 for all corruptions at severity level 5.
Corruption Gauss Shot Impulse Defocus Glass Motion Zoom Snow Frost Fog Bright Contrast Elastic Pixel JPEG mean
x 41.52 42.90 44.07 41.69 40.78 54.76 56.59 57.35 51.01 63.53 68.72 50.65 61.49 63.46 58.32 53.12 rψ(x) 13.17 26.57 28.81 5.09 3.61 30.61 49.79 53.73 45.96 58.82 65.79 53.73 56.77 60.14 53.38 40.40 τx+ (1− τ)rψ(x) 43.13 46.43 56.25 41.80 40.90 55.75 56.65 58.55 51.72 63.59 68.83 53.89 61.50 63.73 58.51 54.74 γ · [τx+ (1− τ)rψ(x)] + β 43.18 46.24 56.21 41.91 40.89 55.79 56.66 58.50 51.72 63.56 68.83 54.26 61.49 63.76 58.52 54.76
Table A3: Test-time adaptation of ResNet50 on ImageNet-C at highest severity level 5 with and without Input Transformation (IT) module. Reported are the mean accuracy(%) across three random seeds (2020/2021/2022). While IT also improves performance when combined with TENT+, it is still clearly outperformed by SLR+IT.
Method Gauss Shot Impulse Defocus Glass Motion Zoom Snow Frost Fog Bright Contrast Elastic Pixel JPEG
Online adaptation (evaluation on a batch directly after adaptation on the batch)
TENT 28.60 31.06 30.54 29.09 28.07 42.32 50.39 48.01 42.05 58.40 68.20 27.25 55.68 59.46 53.64 TENT + IT 28.99 31.73 31.15 28.87 27.85 42.43 50.36 48.02 41.95 58.37 68.19 24.35 55.68 59.49 53.57
TENT+ 29.09 31.65 30.68 29.33 28.65 42.32 50.32 48.09 42.54 58.39 68.23 31.43 55.90 59.46 53.68 TENT+ + IT 29.48 32.34 31.38 29.06 28.42 42.43 50.33 48.11 42.47 58.40 68.20 32.11 55.87 59.49 53.64 SLR (ours) 35.11 37.93 36.83 35.13 35.13 48.29 53.45 52.68 46.52 60.74 68.40 44.78 58.74 61.13 55.97 SLR + IT (ours) 36.19 39.17 40.46 35.17 34.87 48.67 53.62 52.71 46.93 60.66 68.30 46.55 58.79 61.27 55.93 Evaluation after epoch 5
TENT 30.64 33.80 34.72 30.13 29.05 49.08 53.63 52.86 38.47 61.13 68.81 10.72 59.25 62.15 56.44 TENT + IT 31.92 36.02 38.14 30.44 28.68 49.04 53.59 52.99 38.76 61.14 68.84 13.52 59.23 62.15 56.56
TENT+ 35.19 38.12 37.43 34.82 34.95 50.33 54.24 53.88 46.28 61.50 69.07 29.87 60.01 62.61 57.09 TENT+ + IT 36.13 39.84 41.03 34.62 34.72 50.33 54.10 53.91 46.46 61.54 69.07 30.22 59.95 62.72 57.11 SLR (ours) 41.52 42.90 44.07 41.69 40.78 54.76 56.59 57.35 51.01 63.53 68.72 50.65 61.49 63.46 58.32
SLR+IT (ours) 43.09 44.39 64.05 41.98 40.99 55.73 56.75 58.56 51.68 63.64 68.85 55.01 61.32 63.59 58.24
A.2.1 CONTRIBUTION OF EACH COMPONENT IN INPUT TRANSFORMATION MODULE
Table A2 shows the results of an ablation study on the components of the input transformation module on ResNet50 for all corruptions at severity level 5, adapted with SLR for 5 epochs. The ablation study includes: (1) no input transformation module, d(x) = x, (2) only the network, d(x) = rψ(x), (3) including τ, (4) including the channel-wise affine transformation γ and β. We can observe that inputs transformed with the network rψ alone degrade performance without the convex combination via τ. The additional channel-wise affine transformations did not bring further consistent improvements and can be omitted from the transformation module. Exploring other architectural choices and training (or pretraining) strategies for the input transformation module would be an interesting avenue for future work.
A.3 FROZEN LAYERS IN DIFFERENT NETWORKS
As discussed in Section 3.2.2, we freeze all trainable parameters in the top layers of the networks to prohibit “logit explosion”. That is, we do not optimize the channel-wise affine transformations of the top layers, but normalization statistics are still estimated. Similar to the hyperparameters of the test-time adaptation settings, the choice of these layers is made using ImageNet-C validation data. We list the frozen layers of each architecture below; a short sketch of how such freezing can be applied follows the list. Note that the naming convention of these layers is based on the model definition in torchvision:
• DenseNet121 - features.denseblock4, features.norm5.
• MobileNetV2 - features.16, features.17, features.18.
• ResNeXt50, ResNet50 and ResNet50 (DeepAugment+Augmix) - layer4.
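A small sketch (ours) of how this freezing can be applied to a torchvision ResNet50 by excluding the affine parameters of layer4 while still re-estimating its normalization statistics.

```python
import torch.nn as nn
from torchvision.models import resnet50

def adaptable_affine_params(model: nn.Module, frozen_prefixes=("layer4",)):
    """Affine parameters of normalization layers, excluding the listed top block(s);
    normalization statistics are still re-estimated everywhere."""
    params = []
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            module.train()  # statistics get updated on target batches, also in layer4
            if any(name.startswith(prefix) for prefix in frozen_prefixes):
                continue    # affine parameters of the frozen top block are skipped
            params += [p for p in (module.weight, module.bias) if p is not None]
    return params

model = resnet50(pretrained=True)  # or the newer weights= API, depending on the torchvision version
params = adaptable_affine_params(model)  # excludes layer4.* affine parameters
```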
A.3.1 RESULTS WITHOUT FREEZING THE TOP LAYERS
We mentioned that the proposed losses could alternatively encourage the network to let the logits grow larger and larger and still reduce the loss. However, we did not find any considerable differences empirically in the explored settings when adapting the model with or without freezing the top layers: as shown in Table A4, the two settings achieve comparable performance in both online and offline adaptation. Nevertheless, we would still recommend freezing the top-most layers as the default choice to be on the safe side. These results indicate that the early layers capture the distribution shift sufficiently well to improve the model adaptation.
Table A4: Comparing the online and offline adaptation results with and without freezing the affine parameters of top normalization layers of ResNet50 at severity 5. Here, "Freeze" and "NoFreeze" refer to the setting with and without freezing the top affine layers respectively.
Corruption Gauss Shot Impulse Defocus Glass Motion Zoom Snow Frost Fog Bright Contrast Elastic Pixel JPEG mean
Online evaluation
TENT+ NoFreeze 29.05 31.32 30.32 28.95 28.29 42.37 50.45 48.12 42.21 58.51 68.29 28.17 55.57 59.47 53.46 43.63 TENT+ Freeze 29.21 31.54 30.55 29.17 28.60 42.54 50.47 48.18 42.51 58.50 68.30 31.25 55.76 59.54 53.62 43.98
HLR NoFreeze 33.73 36.50 35.63 33.99 33.88 46.55 52.76 51.44 45.82 59.74 67.37 43.19 57.69 59.77 54.95 47.53 HLR Freeze 33.10 36.08 34.74 33.21 33.31 46.36 52.77 51.42 45.47 60.01 68.07 42.75 58.02 60.42 55.34 47.40
SLR NoFreeze 35.61 38.37 37.50 35.83 35.81 48.29 53.61 52.62 46.85 60.42 67.71 44.93 58.43 60.56 55.65 48.81 SLR Freeze 35.11 37.93 36.83 35.13 35.13 48.29 53.45 52.68 46.52 60.74 68.40 44.78 58.74 61.13 55.97 48.72
Offline evaluation
TENT+ NoFreeze 32.03 35.33 35.28 31.92 31.27 49.20 53.79 53.01 40.37 61.22 68.79 19.38 59.25 62.20 56.51 45.97 TENT+ Freeze 35.19 38.12 37.43 34.82 34.95 50.33 54.24 53.88 46.28 61.50 69.07 29.87 60.01 62.61 57.09 48.35
HLR NoFreeze 41.60 43.80 43.89 42.21 41.50 53.82 56.21 56.71 50.83 62.74 67.87 51.34 60.65 62.58 57.70 52.89 HLR Freeze 41.37 44.04 43.68 41.74 41.09 54.26 56.43 57.03 50.81 63.05 68.29 50.98 61.15 63.08 58.13 53.0
SLR NoFreeze 41.45 43.95 44.26 42.56 41.60 54.25 56.13 56.72 50.92 62.97 68.02 50.99 60.90 62.83 57.86 53.02 SLR Freeze 41.52 42.90 44.07 41.69 40.78 54.76 56.59 57.35 51.01 63.53 68.72 50.65 61.49 63.46 58.32 53.12
A.4 EFFECT OF κ
Note that the running estimate in Ldiv prevents the model from collapsing to trivial solutions, i.e., predicting only a single class or a small set of classes regardless of the input samples. Ldiv encourages the model to match its empirical distribution of predictions to the class distribution of the target data (a uniform distribution in our experiments). Such diversity regularization is crucial because there is no direct supervision attributing samples to different classes, and it thus helps to avoid collapsed trivial solutions. In Figure A3, we investigate different values of κ on the validation corruptions of ImageNet-C to study its effect on our approach. It can be observed that both HLR and SLR without Ldiv lead to collapsed solutions (e.g., accuracy drops to 0%) on some of the corruptions, and the performance gains are not consistent across all corruptions. On the other hand, Ldiv with κ = 0.9 remains consistent and improves performance across all corruptions.
A.5 TEST-TIME ADAPTATION OF PRETRAINED MODELS WITH SHOT
Following SHOT (Liang et al., 2020), we use their pseudo labeling strategy on the ImageNet pretrained ResNet50 in combination with TENT+, HLR, and SLR. Note that TENT+ and the pseudo labeling strategy jointly form the method SHOT. The pseudo labeling strategy starts after the 1st epoch and the pseudo labels are thereafter recomputed at every epoch. The weight for the loss computed on the pseudo labels is set to 0.3, as in (Liang et al., 2020); different values for this weight were explored and 0.3 was found to perform best. Table A6 compares the results of the methods with and without the pseudo labeling strategy. It can be observed that the results with the pseudo labeling strategy are worse than without it.
We further modified the pretrained ResNet50 by following the network modifications suggested in (Liang et al., 2020), which include adding a bottleneck layer with BatchNorm and applying weight normalization on the linear classifier, along with label-smoothing during training, to facilitate the pseudo labeling strategy. Table A7 shows that the pseudo labeling strategy on this network improves the results of TENT+ from epoch 1 to epoch 5. However, no improvements are observed for SLR. Moreover, Table A8 shows that omitting the pseudo labeling strategy on the same network performs better than applying it. Finally, the results without pseudo labeling from Tables A6 and A8 show that the additional modifications to ResNet50 do not improve performance compared to the standard ResNet50.
A.6 DOMAIN ADAPTATION ON VISDA-C AND DIGIT CLASSIFICATION
VisDA-C: We extended our experiments to VisDA-C. We followed the network architecture of SHOT (Liang et al., 2020) and evaluated TENT+ and our SLR loss function with the diversity regularizer. As for ImageNet-C, we adapted only the channel-wise affine parameters of the batch normalization layers for 5 epochs using the Adam optimizer with a cosine decay schedule of the learning rate with initial value 2e−5. Here, the batch size is set to 64, the weight of Lconf in our loss function to δ = 0.25, and κ = 0 in the running estimate pt(y) of Ldiv, since the number of classes in this dataset (12 classes) is smaller than the batch size. Setting κ = 0 corresponds to the batch-wise diversity regularizer. Table A9 reports
Table A5: Test-time adaptation of ResNet50 on ImageNet-C at highest severity level 5. Same as Table 1 with error bars.
name Epoch 1 Epoch 5
corruption No adaptation PL TENT TENT+ HLR SLR TENT TENT+ HLR SLR
Gauss 2.44 2.44 32.44±0.10 33.75±0.09 38.39±0.25 39.51±0.23 30.64±0.51 35.19±0.17 41.37±0.09 41.52±0.08 Shot 2.99 2.99 35.01±0.17 36.38±0.19 41.11±0.13 42.09±0.26 33.80±0.74 38.12±0.10 44.04±0.09 42.90±0.08
Impulse 1.96 1.96 34.77±0.09 35.67±0.15 40.28±0.20 41.58±0.04 34.72±1.01 37.43±0.09 43.68±0.06 44.07±0.06 Defocus 17.92 17.92 32.40±0.10 33.43±0.14 38.25±0.32 39.35±0.13 30.13±0.61 34.82±0.25 41.74±0.12 41.69±0.07
Glass 9.82 9.82 31.62±0.15 33.25±0.01 38.18±0.08 39.02±0.09 29.05±0.21 34.95±0.13 41.09±0.17 40.78±0.08 Motion 14.78 14.78 47.23±0.11 47.66±0.12 51.63±0.08 52.67±0.25 49.08±0.08 50.33±0.07 54.26±0.02 54.76±0.04 Zoom 22.50 22.50 53.09±0.06 53.20±0.07 55.55±0.06 55.80±0.07 53.63±0.16 54.24±0.06 56.43±0.07 56.59±0.05 Snow 16.89 16.89 51.61±0.05 52.06±0.09 55.45±0.11 55.92±0.06 52.86±0.13 53.88±0.07 57.03±0.12 57.35±0.03 Frost 23.31 23.31 43.26±0.30 44.85±0.20 48.96±0.07 49.64±0.14 38.47±0.50 46.28±0.27 50.81±0.08 51.01±0.02 Fog 24.43 24.43 60.42±0.08 60.60±0.05 62.19±0.03 62.62±0.04 61.13±0.08 61.50±0.05 63.05±0.04 63.53±0.08 Bright 58.93 58.93 68.85±0.02 68.93±0.03 68.17±0.01 68.47±0.05 68.81±0.06 69.07±0.06 68.29±0.09 68.72±0.10 Contrast 5.43 5.43 24.39±0.98 33.43±0.77 49.47±0.20 50.27±0.08 10.72±0.32 29.87±1.36 50.98±2.54 50.65±0.55 Elastic 16.95 16.95 58.53±0.05 58.94±0.05 60.34±0.18 60.80±0.08 59.25±0.06 60.01±0.02 61.15±0.04 61.49±0.07 Pixel 20.61 20.61 61.62±0.06 61.75±0.07 62.51±0.10 63.01±0.08 62.15±0.04 62.61±0.08 63.08±0.06 63.46±0.08 JPEG 31.65 31.65 56.00±0.09 56.21±0.05 57.42±0.13 57.80±0.04 56.44±0.07 57.09±0.02 58.13±0.09 58.32±0.05
Table A6: Test-time adaptation of ResNet50 on ImageNet-C at highest severity level 5 with and without the pseudo labeling strategy (Liang et al., 2020).
name No pseudo labeling: Epoch 5 Pseudo labeling: Epoch 5
corruption No adaptation TENT+ HLR SLR TENT+ HLR SLR
Gauss 2.44 33.97±0.17 41.37±0.09 41.52±0.08 34.08±0.11 34.88±0.35 35.58±0.06 Shot 2.99 37.95±0.10 44.04±0.09 42.90±0.08 36.74±0.26 37.61±0.49 37.98±0.19 Impulse 1.96 36.93±0.09 43.68±0.06 44.07±0.06 36.69±0.04 37.24±0.22 37.77±0.05 Defocus 17.92 32.69±0.25 41.74±0.12 41.69±0.07 33.99±0.28 34.76±0.11 35.11±0.10
Glass 9.82 33.36±0.13 41.09±0.17 40.78±0.08 34.06±0.12 34.51±0.30 34.81±0.27 Motion 14.78 51.42±0.07 54.26±0.02 54.76±0.04 50.91±0.09 48.96±0.39 49.46±0.20 Zoom 22.50 54.33±0.06 56.43±0.07 56.59±0.05 54.10±0.10 52.49±0.02 52.50±0.23 Snow 16.89 54.55±0.07 57.03±0.12 57.35±0.03 54.06±0.08 52.49±0.19 52.95±0.07 Frost 23.31 45.80±0.27 50.81±0.08 51.01±0.02 44.44±0.07 45.47±0.26 46.06±0.20 Fog 24.43 62.09±0.05 63.05±0.04 63.53±0.08 61.91±0.08 59.66±0.14 59.98±0.12 Bright 58.93 69.03±0.06 68.29±0.09 68.72±0.10 68.98±0.02 65.59±0.06 66.00±0.03 Contrast 5.43 24.08±1.36 50.98±2.54 50.65±0.55 29.37±0.95 44.58±0.38 45.64±0.47 Elastic 16.95 60.36±0.02 61.15±0.04 61.49±0.07 60.23±0.05 57.48±0.14 57.87±0.04 Pixel 20.61 63.10±0.08 63.08±0.06 63.46±0.08 62.98±0.04 59.72±0.02 60.05±0.14 JPEG 31.65 57.21±0.02 58.13±0.09 58.32±0.05 57.09±0.04 54.72±0.09 54.88±0.07
the average results over three different random seeds and shows that SLR outperforms TENT+ on this dataset.
Domain adaptation from SVHN to MNIST / MNIST-M / USPS: ResNet26 is trained on the SVHN dataset for 50 epochs with batch size 128 using the SGD optimizer with momentum 0.9 and initial learning rate 0.01, which drops to 0.001 and 0.0001 at the 25th and 40th epochs, respectively. ResNet26 obtains 96.49% test accuracy on SVHN. Domain adaptation of the SVHN-trained ResNet26 to MNIST/MNIST-M/USPS
Table A7: Test-time adaptation of modified ResNet50 (following (Liang et al., 2020)) on ImageNet-C at highest severity level 5 with pseudo labeling strategy at epoch 1 and epoch 5.
name Pseudo labeling: Epoch 1 Pseudo labeling: Epoch 5
corruption No adaptation TENT+ HLR SLR TENT+ HLR SLR
Gauss 2.95 31.03±0.18 34.65±0.28 37.21±0.23 35.26±0.16 35.93±0.23 37.61±0.30 Shot 3.65 33.55±0.07 38.09±0.30 40.30±0.09 37.39±0.05 38.95±0.16 40.42±0.06 Impulse 2.54 32.70±0.07 36.95±0.05 39.73±0.07 38.16±0.08 38.13±0.04 40.12±0.11 Defocus 19.36 31.66±0.15 35.08±0.05 37.18±0.15 35.95±0.17 36.72±0.13 37.96±0.25
Glass 9.72 31.06±0.06 35.46±0.12 37.62±0.10 35.98±0.04 36.84±0.11 37.90±0.02 Motion 15.66 46.96±0.12 49.95±0.12 51.87±0.14 52.24±0.02 51.90±0.12 52.76±0.09 Zoom 22.20 52.45±0.02 54.15±0.22 54.84±0.18 54.80±0.07 54.84±0.09 54.95±0.14 Snow 17.56 51.79±0.05 53.98±0.06 55.44±0.04 55.15±0.02 55.27±0.20 55.75±0.02 Frost 24.11 45.59±0.06 47.87±0.03 48.96±0.11 48.10±0.20 48.52±0.11 49.13±0.20 Fog 25.59 60.33±0.03 61.55±0.10 62.21±0.16 62.39±0.03 62.38±0.12 62.38±0.11 Bright 58.30 68.84±0.04 68.44±0.04 68.60±0.10 69.13±0.04 68.50±0.02 68.47±0.09 Contrast 6.49 42.34±0.19 47.98±0.13 50.32±0.28 42.11±0.15 49.22±0.42 50.80±0.19 Elastic 17.72 58.47±0.02 59.70±0.06 60.30±0.09 60.40±0.04 60.27±0.22 60.45±0.21 Pixel 21.29 61.39±0.06 62.10±0.07 62.71±0.10 63.04±0.02 62.71±0.07 62.81±0.07 JPEG 32.13 55.22±0.03 56.49±0.07 57.04±0.07 57.21±0.06 57.25±0.07 57.37±0.05
Table A8: Test-time adaptation of modified ResNet50 (following (Liang et al., 2020)) on ImageNet-C at highest severity level 5 with and without pseudo labeling strategy.
name No Pseudo labeling: Epoch 5 Pseudo labeling: Epoch 5
corruption No adaptation TENT+ HLR SLR TENT+ HLR SLR
Gauss 2.95 34.96±0.08 38.58±0.12 39.72±0.13 35.26±0.16 35.93±0.23 37.61±0.30 Shot 3.65 37.22±0.17 41.59±0.09 42.45±0.05 37.39±0.05 38.95±0.16 40.42±0.06 Impulse 2.54 37.82±0.04 40.88±0.07 42.39±0.03 38.16±0.08 38.13±0.04 40.12±0.11 Defocus 19.36 34.46±0.12 39.22±0.15 39.78±0.09 35.95±0.17 36.72±0.13 37.96±0.25
Glass 9.72 35.12±0.05 38.83±0.13 39.37±0.07 35.98±0.04 36.84±0.11 37.90±0.02 Motion 15.66 51.91±0.09 53.23±0.05 54.00 52.24±0.02 51.90±0.12 52.76±0.09 Zoom 22.20 54.57±0.05 55.76±0.04 55.79±0.02 54.80±0.07 54.84±0.09 54.95±0.14 Snow 17.56 55.02±0.05 56.35±0.12 56.80±0.04 55.15±0.02 55.27±0.20 55.75±0.02 Frost 24.11 48.18±0.09 49.86±0.22 50.43±0.08 48.10±0.20 48.52±0.11 49.13±0.20 Fog 25.59 62.24±0.04 62.90±0.06 63.29±0.06 62.39±0.03 62 | 1. What is the focus of the paper regarding test time adaptation?
2. What are the strengths of the proposed method, particularly in its novelty and empirical results?
3. What are the weaknesses of the paper, especially regarding its motivation and potential limitations?
4. How does the reviewer suggest improving the paper's explanation of its approach?
5. Are there any additional experiments or results that the reviewer suggests including in the paper? | Summary Of The Paper
Review | Summary Of The Paper
In the spirit of full disclosure: I have recently reviewed this paper, and several parts of my previous review are still applicable, thus I am copying in these parts when appropriate.
This paper presents a method for test time adaptation based on several techniques. These include a self-supervised adaptation objective based on log likelihood ratios, an additional regularizing objective to encourage diverse predictions, and an input transformation module that is also trained with the aforementioned objectives. Together, these techniques lead to better performance on ImageNet-C and ImageNet-R compared to Tent, a recent and similar test time adaptation method based on entropy minimization.
Review
Strengths
The self-supervised log likelihood ratio objective appears novel, as far as I am aware. And, for this problem setting, the combination of the aforementioned techniques is novel and leads to stronger empirical results than what has been previously reported.
The experiments are generally comprehensive and cover, as far as I can see, the important aspects of evaluation. I appreciate the results presented for both the "offline" and "online" adaptation settings, as well as the results adding all of the various techniques to Tent to evaluate whether any technique is of paramount importance.
The paper is generally well written and structured.
Weaknesses
I think that the paper has improved on this point, but the motivation behind the general approach is still somewhat shaky. The idea that the model should extract a self-supervised learning signal from data points it is already very confident for still seems strange to me. Imagine if the model were already very confident for the entire batch of data points, but there is a (predicted) class imbalance in the batch. Would it not be the case that the model would still adapt when using the proposed approach, even though it would make more sense to not adapt at all, which for example entropy minimization would (roughly) do? And it is in general just unclear to me how incorporating a stronger gradient signal from confident points would help when it comes to ambiguous points. Perhaps what could be useful here is to actually "show this in action", e.g., take a real batch of data during adaptation and demonstrate how the model adapts with the proposed approach vs with Tent. This may provide greater intuition as to why the proposed approach is a good idea.
Negative results are also of interest to the community, and to this end, including results on challenging distribution shift benchmarks such as ImageNet-A and ImageNet-v2, which prior work [1] has shown adaptation to be unhelpful for, would be great. Even just in the appendix, it would still be appreciated.
A final minor nit from my previous review: I would still like to know whether or not a confidence of 0.82 is "low" for the corrupted image datasets or other instances of test distribution shift.
[1] Schneider et al, "Improving robustness against common corruptions by covariate shift adaptation". NeurIPS 2020. |
ICLR | Title
Test-Time Adaptation to Distribution Shifts by Confidence Maximization and Input Transformation
Abstract
Deep neural networks often exhibit poor performance on data that is unlikely under the train-time data distribution, for instance data affected by corruptions. Previous works demonstrate that test-time adaptation to data shift, for instance using entropy minimization, effectively improves performance on such shifted distributions. This paper focuses on the fully test-time adaptation setting, where only unlabeled data from the target distribution is required. This allows adapting arbitrary pretrained networks. Specifically, we propose a novel loss that improves test-time adaptation by addressing both premature convergence and instability of entropy minimization. This is achieved by replacing the entropy by a non-saturating surrogate and adding a diversity regularizer based on batch-wise entropy maximization that prevents convergence to trivial collapsed solutions. Moreover, we propose to prepend an input transformation module to the network that can partially undo test-time distribution shifts. Surprisingly, this preprocessing can be learned solely using the fully test-time adaptation loss in an end-to-end fashion without any target domain labels or source domain data. We show that our approach outperforms previous work in improving the robustness of publicly available pretrained image classifiers to common corruptions on such challenging benchmarks as ImageNet-C.
1 INTRODUCTION
Deep neural networks achieve impressive performance on test data that has the same distribution as the training data. Nevertheless, they often exhibit a large performance drop on test (target) data that differs from the training (source) data; this effect is known as data shift (Quionero-Candela et al., 2009) and can be caused, for instance, by image corruptions. There exist different methods to improve the robustness of a model during training (Geirhos et al., 2019; Hendrycks et al., 2019; Tzeng et al., 2017). However, generalization to different data shifts is limited since it is infeasible to include sufficiently many augmentations during training to cover the excessively wide range of potential data shifts (Mintun et al., 2021a). Alternatively, in order to generalize to the data shift at hand, the model can be adapted at test time. Unsupervised domain adaptation methods such as Vu et al. (2019) use both source and target data to improve model performance at test time. In general, source data might not be available at inference time, e.g., due to legal constraints (privacy or profit). Therefore, we focus on the fully test-time adaptation setting (Wang et al., 2020): the model is adapted to the target data during test time given only the parameters of an arbitrary pretrained model and unlabeled target data that shares the same label space as the source data. We extend the work of Wang et al. (2020) by introducing a novel loss function, using a diversity regularizer, and prepending a parametrized input transformation module to the network. We show that our approach outperforms previous work and makes pretrained models robust against common corruptions on image classification benchmarks such as ImageNet-C (Hendrycks & Dietterich, 2019) and ImageNet-R (Hendrycks et al., 2020).
Sun et al. (2020) investigate test-time adaptation using a self-supervision task. Wang et al. (2020) and Liang et al. (2020) use the entropy minimization loss, which uses maximization of prediction confidence as the self-supervision signal during test-time adaptation. Wang et al. (2020) have shown that such a loss yields better adaptation than a proxy task (Sun et al., 2020). When using entropy minimization, however, high-confidence predictions no longer contribute significantly to the loss and thus provide little self-supervision. This is a drawback since high-confidence samples provide the most trustworthy self-supervision. We mitigate this by introducing two novel loss functions that ensure that gradients of samples with high-confidence predictions do not vanish and that learning based on self-supervision from these samples continues. Our losses do not focus on minimizing entropy but on minimizing the negative log likelihood ratio between classes; the two variants differ in using either soft or hard pseudo-labels. In contrast to entropy minimization, the proposed loss functions provide non-saturating gradients, even for highly confident predictions. Figure 1 provides an illustration of the losses and the resulting gradients. Using these new loss functions, we are able to improve network performance under data shifts in both online and offline adaptation settings.
In general, self-supervision by confidence maximization can lead to collapsed trivial solutions, which make the network predict only a single class or a small set of classes independent of the input. To overcome this issue, a diversity regularizer (Liang et al., 2020; Wu et al., 2020) that acts on a batch of samples can be used. It encourages the network to make diverse class predictions on different samples. We extend the regularizer with a moving average in order to include the history of previous batches and show that this stabilizes the adaptation of the network to unlabeled test samples. Furthermore, we introduce a parametrized input transformation module, which we prepend to the network. The module is trained in a fully test-time adaptation manner using the proposed loss function, without using source data or target labels. It aims to partially undo the data shift at hand and helps to further improve performance on image classification benchmarks with corruptions.
Since our method does not change the training process, it allows the use of any pretrained model. This is beneficial because any well-performing pretrained network can be readily reused, e.g., a network trained on proprietary data not available to the public. We show that our method significantly improves the performance of different pretrained models that are trained on clean ImageNet data.
In summary, our main contributions are as follows: we propose non-saturating losses based on the negative log likelihood ratio, such that gradients from high-confidence predictions still contribute to test-time adaptation. We extend the diversity regularizer with a moving average that includes the history of previous batches to prevent the model from collapsing to trivial solutions. We also introduce an input transformation module, which partially undoes the data shift at hand. We show that the performance of different pretrained models can be significantly improved on ImageNet-C and ImageNet-R.
2 RELATED WORK
Common image corruptions are potentially stochastic image transformations motivated by real-world effects that can be used for evaluating a model’s robustness. One such benchmark, ImageNet-C (Hendrycks & Dietterich, 2019), contains simulated corruptions such as noise, blur, weather effects, and digital image transformations. Additionally, Hendrycks et al. (2020) proposed three data sets containing real-world distribution shifts, including ImageNet-R. Most proposals for improving robustness involve special training protocols, requiring time and additional resources. This includes data augmentation like Gaussian noise (Ford et al., 2019; Lopes et al., 2019; Hendrycks et al., 2020), CutMix (Yun et al., 2019), AugMix (Hendrycks et al., 2019), training on stylized images (Geirhos et al., 2019; Kamann et al., 2020) or against adversarial noise distributions (Rusak et al., 2020a). Mintun et al. (2021b) pointed out that many improvements on ImageNet-C are due to data augmentations that are too similar to the test corruptions, that is, overfitting to ImageNet-C occurs. Thus, the model might be less robust to corruptions not included in the test set of ImageNet-C.
Unsupervised domain adaptation methods train a joint model of source and target domain with cross-domain losses to find more general and robust features, e.g., by optimizing feature alignment between domains (Quiñonero-Candela et al., 2008; Sun et al., 2017), adversarial invariance (Ganin & Lempitsky, 2015; Tzeng et al., 2017; Ganin et al., 2016; Hoffman et al., 2018), shared proxy tasks (Sun et al., 2019), or by adapting entropy minimization via an adversarial loss (Vu et al., 2019). While these approaches are effective, they require explicit access to source and target data at the same time, which may not always be feasible. Our approach works with any pretrained model and only needs target data.
Test-time adaptation is a setting in which training (source) data is unavailable at test time. It is related to source-free adaptation, where several works use generative models, alter training (Kundu et al., 2020; Li et al., 2020b; Kurmi et al., 2021; Yeh et al., 2021), and require several thousand epochs to adapt to the target data (Li et al., 2020b; Yeh et al., 2021). Besides, there is another line of work (Sun et al., 2020; Schneider et al., 2020; Nado et al., 2021; Benz et al., 2021; Wang et al., 2020) that interprets common corruptions as data shift and aims to improve model robustness against these corruptions with an efficient test-time adaptation strategy that facilitates online adaptation. Such settings spare the cost of additional computational overhead. Our work also falls in this line of research and aims to adapt the model to common corruptions efficiently with both online and offline adaptation.
Sun et al. (2020) update feature extractor parameters at test time via a self-supervised proxy task (predicting image rotations). However, Sun et al. (2020) alter the training procedure by including the proxy loss in the optimization objective as well; hence, arbitrary pretrained models cannot be used directly for test-time adaptation. Inspired by domain adaptation strategies (Maria Carlucci et al., 2017; Li et al., 2016), several works (Schneider et al., 2020; Nado et al., 2021; Benz et al., 2021) replace the estimates of Batch Normalization (BN) activation statistics with the statistics of the corrupted test images. Fully test-time adaptation, studied by Wang et al. (2020) (TENT), uses entropy minimization to update the channel-wise affine parameters of BN layers on corrupted data along with the batch statistics estimates. SHOT (Liang et al., 2020) also uses entropy minimization and a diversity regularizer to avoid collapsed solutions. SHOT modifies the model from the standard setting by adopting weight normalization at the fully connected classifier layer during training to facilitate their pseudo labeling technique. Hence, SHOT is not readily applicable to arbitrary pretrained models.
We show that pure entropy minimization (Wang et al., 2020; Liang et al., 2020) as well as alternatives such as the max square loss (Chen et al., 2019) and the Charbonnier penalty (Yang & Soatto, 2020) result in vanishing gradients for high-confidence predictions, thus inhibiting learning. Our work addresses this issue by proposing a novel non-saturating loss that provides non-vanishing gradients for high-confidence predictions. We show that our proposed loss function improves network performance through test-time adaptation. In particular, performance on corruptions of higher severity improves significantly. Furthermore, we add and extend the diversity regularizer (Liang et al., 2020; Wu et al., 2020) to avoid collapse to trivial, high-confidence solutions. Existing diversity regularizers (Liang et al., 2020; Wu et al., 2020) act on a batch of samples; hence, the number of classes has to be smaller than the batch size. We mitigate this problem by extending the regularizer to a moving-average version. Li et al. (2020a) also use a moving average to estimate the entropy of the unconditional class distribution, but source data is used to estimate the gradient of the entropy. In contrast, our work does not need access to the source data since the gradient is estimated using only target data. Prior work (Tzeng et al., 2017; Rusak et al., 2020b; Talebi & Milanfar, 2021) transformed inputs with an additional module to overcome domain shift, obtain robust models, and also to learn to resize. In our work, we prepend an input transformation module to the model, but in contrast to these works, this module is trained purely at test time to partially undo the data shift at hand and aid the adaptation.
3 METHOD
We propose a novel method for fully test-time adaptation. We assume that a neural network fθ with parameters θ is available that was trained on data from some distribution D, as well as a set of (unlabeled) samples X ∼ D′ from a target distribution D′ ≠ D (importantly, no samples from D are required). We frame fully test-time adaptation as a two-step process: (i) Generate a novel network gφ based on fθ, where φ denotes the parameters that are adapted. A simple variant for this is g = f and φ ⊆ θ (Wang et al., 2020). However, we propose a more expressive and flexible variant in Section 3.1. (ii) Adapt the parameters φ of g on X using an unsupervised loss function L. We propose two novel losses Lslr and Lhlr in Section 3.2 that have non-vanishing gradients for high-confidence self-supervision.
3.1 INPUT TRANSFORMATION
We propose to define the adaptable model as g = f ◦ d. That is, we prepend a trainable network d to f. The motivation for the additional component d is to increase the expressivity of g such that it can learn to (partially) undo the domain shift D → D′. Specifically, we choose d(x) = γ · [τx + (1 − τ)rψ(x)] + β, where τ ∈ R, (β, γ) ∈ Rnin with nin being the number of input channels, rψ being a network with identical input and output shape, and · denoting elementwise multiplication. Here, β and γ implement a channel-wise affine transformation and τ implements a convex combination of the unchanged input and the transformed input rψ(x). By choosing τ = 1, γ = 1, β = 0, we ensure d(x) = x and thus g = f at initialization. In principle, rψ can be chosen arbitrarily. Here, we choose rψ as a simple stack of 3 × 3 convolutions, group normalization, and ReLUs (see Sec. A.2 for details). However, exploring other choices would be an interesting avenue for future work.
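As a concrete illustration, the following is a minimal PyTorch sketch of d (our own sketch, not the reference implementation); the submodule r_psi is assumed to be any network with identical input and output shape, and the stated initialization τ = 1, γ = 1, β = 0 makes d the identity:

```python
import torch
import torch.nn as nn

class InputTransform(nn.Module):
    """d(x) = gamma * [tau * x + (1 - tau) * r_psi(x)] + beta (illustrative sketch)."""

    def __init__(self, r_psi: nn.Module, n_in: int = 3):
        super().__init__()
        self.r_psi = r_psi                                    # network with identical input/output shape
        self.tau = nn.Parameter(torch.ones(1))                # scalar convex-combination weight (init 1)
        self.gamma = nn.Parameter(torch.ones(1, n_in, 1, 1))  # channel-wise scale (init 1)
        self.beta = nn.Parameter(torch.zeros(1, n_in, 1, 1))  # channel-wise shift (init 0)

    def forward(self, x):
        mixed = self.tau * x + (1.0 - self.tau) * self.r_psi(x)
        return self.gamma * mixed + self.beta                 # equals x at initialization
```

Prepending such a module to a pretrained classifier f then gives g = f ◦ d, e.g., via nn.Sequential(d, f).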
Importantly, while the motivation for d is to learn to partially undo a domain shift D → D′, we train d end-to-end in the fully test-time adaptation setting on data X ∼ D′, without any access to samples from the source domain D, based on the losses proposed in Section 3.2. The modulation parameters of gφ are φ = (β, γ, τ, ψ, θ′), where θ′ ⊆ θ. That is, we adapt only a subset of the parameters θ of the pretrained network f . We largely follow Wang et al. (2020) in adapting only the affine parameters of normalization layers in f while keeping parameters of convolutional kernels unchanged. Additionally, batch normalization statistics (if any) are adapted to the target distribution.
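A rough sketch of this parameter selection (again our own illustration, under the assumption that the normalization layers expose their channel-wise affine parameters as weight and bias):

```python
import torch.nn as nn

def collect_adaptable_params(model: nn.Module):
    """Return the channel-wise affine parameters of all normalization layers."""
    params = []
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm2d, nn.GroupNorm, nn.LayerNorm)):
            if module.weight is not None:
                params.append(module.weight)   # per-channel scale
            if module.bias is not None:
                params.append(module.bias)     # per-channel shift
    return params

def prepare_for_adaptation(model: nn.Module):
    """Freeze all weights, then re-enable gradients only for normalization affine parameters."""
    model.requires_grad_(False)
    params = collect_adaptable_params(model)
    for p in params:
        p.requires_grad_(True)
    model.train()  # keep BN layers in train mode so batch statistics are re-estimated on target data
    return params
```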
Note that the proposed method is applicable to any pretrained network that contains normalization layers with a channel-wise affine transformation. For networks with no affine transformation layers, one can add such layers into f that are initialized to identity as part of model augmentation.
3.2 ADAPTATION OBJECTIVE
We propose a loss function L = Ldiv + δLconf for fully test-time network adaptation that consists of two components: (i) a term Ldiv that encourages the predictions of the network over the adaptation dataset X to match a target distribution pD′(y). This can help avoid test-time adaptation collapsing to overly narrow distributions, such as always predicting the same or very few classes. If pD′(y) is (close to) uniform, it acts as a diversity regularizer. (ii) A term Lconf that encourages high-confidence predictions on individual datapoints. We note that test-time entropy minimization (TENT) (Wang et al., 2020) fits into this framework by choosing Ldiv = 0 and Lconf as the entropy.
3.2.1 CLASS DISTRIBUTION MATCHING Ldiv
Assuming knowledge of the class distribution pD′(y) on the target domain D′, we propose to add a term to the loss that encourages the empirical distribution of (soft) predictions of gφ on X to match this distribution. Specifically, let p̂gφ(y) be an estimate of the distribution of (soft) predictions of gφ. We use the Kullback-Leibler divergence Ldiv = DKL(p̂gφ(y) || pD′(y)) as the loss term. In some applications, information about the target class distribution is available, e.g., in medical data it might be known that there is a large class imbalance. In general, this information is not available, and here we assume a uniform distribution for pD′(y), which corresponds to maximizing the entropy H(p̂gφ(y)). A similar assumption is made in SHOT to circumvent collapsed solutions.
Since the estimate p̂gφ(y) depends on φ, which is continuously adapted, it needs to be re-estimated on a per-batch level. Since re-estimating p̂gφ(y) from scratch would be computationally expensive, we propose to use a running estimate that tracks the changes of φ as follows: let pt−1(y) be the estimate at iteration t − 1 and p^emp_t = (1/n) ∑_{k=1}^{n} ŷ^(k), where ŷ^(k) are the predictions (confidences) of gφ on a mini-batch of n inputs x^(k) ∼ X. We update the running estimate via pt(y) = κ · sg(pt−1(y)) + (1 − κ) · p^emp_t, where sg denotes the stop-gradient operation. The loss becomes Ldiv = DKL(pt(y) || pD′(y)) accordingly. Unlike Li et al. (2020a), our approach requires only target but no source data to estimate the gradient.
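A minimal sketch of this running-estimate diversity term, assuming a uniform target distribution pD′(y) (the function name and bookkeeping are ours, not the released code):

```python
import torch

def diversity_loss(probs, p_prev, kappa=0.9, eps=1e-8):
    """probs: (n, n_classes) softmax outputs of g_phi on the current mini-batch.
    p_prev: previous running estimate p_{t-1}(y), or None at the first step."""
    p_emp = probs.mean(dim=0)                                    # empirical distribution of predictions
    if p_prev is None:
        p_t = p_emp
    else:
        p_t = kappa * p_prev.detach() + (1.0 - kappa) * p_emp    # detach() plays the role of sg(.)
    uniform = torch.full_like(p_t, 1.0 / p_t.numel())            # target class distribution p_D'(y)
    l_div = torch.sum(p_t * (torch.log(p_t + eps) - torch.log(uniform)))  # KL(p_t || p_D')
    return l_div, p_t.detach()                                   # detached estimate for the next step
```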
3.2.2 CONFIDENCE MAXIMIZATION Lconf
We motivate our choice of Lconf step-by-step from the (unavailable) supervised cross-entropy loss: for this, let ŷ = gφ(x) be the predictions (confidences) of model gφ and H(ŷ, y^r) = − ∑_c y^r_c log ŷ_c be the cross-entropy between prediction ŷ and some reference y^r. Let the last layer of g be a softmax activation layer, that is, ŷ = softmax(o), where o are the network’s logits. We can rewrite the cross-entropy in terms of the logits o and a one-hot reference y^r as follows: H(softmax(o), y^r) = −o_{c_r} + log ∑_{i=1}^{n_cl} e^{o_i}, where c_r is the index of the 1 in y^r and n_cl is the number of classes.
When labels being available for the target domain (which we do not assume) in the form of a one-hot encoded reference yt for data xt, one could use the supervised cross-entropy loss by setting yr = yt and using Lsup(ŷ, yr) = H(ŷ, yr) = H(ŷ, yt). Since fully test-time adaptation assumes no label information, supervised cross-entropy loss is not applicable and other options for yr need to be used.
One option is (hard) pseudo-labels. That is, one defines the reference y^r based on the network predictions ŷ via y^r = onehot(ŷ), where onehot creates a one-hot reference with the 1 corresponding to the class with maximal confidence in ŷ. This results in Lpl(ŷ) = H(ŷ, onehot(ŷ)) = − log ŷ_{c∗}, with c∗ = argmax ŷ. One disadvantage of this loss is that the (hard) pseudo-labels ignore uncertainty in the network predictions during self-supervision. This results in large gradient magnitudes with respect to the logits |∂L_pl/∂o_{c∗}| being generated on data where the network has low confidence (see Figure 1). This is undesirable since it corresponds to the network being affected most by data points where the network’s self-supervision is least reliable1.
An alternative is to use soft pseudo-labels, that is, y^r = ŷ. This takes uncertainty in network predictions into account during self-labelling and results in the entropy minimization loss of TENT (Wang et al., 2020): Lent(ŷ) = H(ŷ, ŷ) = H(ŷ) = − ∑_c ŷ_c log ŷ_c. However, also for the entropy the logits’ gradient magnitude |∂L_ent/∂o| goes to 0 when one of the entries in ŷ goes to 1 (see Figure 1). For a binary classification task, for instance, the maximal logits’ gradient amplitude is obtained for ŷ ≈ (0.82, 0.18). This implies that during later stages of test-time adaptation, where many predictions typically already have high confidence (significantly above 0.82), gradients are dominated by datapoints with relatively low confidence in self-supervision.
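This maximizing confidence of roughly 0.82 can be checked numerically; the following small sketch (our own illustration) sweeps the logit of a binary classifier and reports the predicted probability at which the entropy gradient magnitude peaks:

```python
import torch

# Binary task: p = sigmoid(o); entropy H = -p*log(p) - (1-p)*log(1-p).
o = torch.linspace(-6.0, 6.0, 10001, requires_grad=True)
p = torch.sigmoid(o)
H = -(p * torch.log(p) + (1 - p) * torch.log(1 - p))
grad = torch.autograd.grad(H.sum(), o)[0]   # dH/do for every logit value
idx = grad.abs().argmax()
print(p[idx].item())                        # ~0.18 (equivalently ~0.82 by symmetry)
```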
While both hard and soft pseudo-labels are clearly motivated, they are not optimal in conjunction with a gradient-based optimizer since the self-supervision from low-confidence predictions dominates (at least during later stages of training). We address this issue by proposing two losses that increase the gradient amplitude from high-confidence predictions. We argue that this leads to stronger self-supervision (a better gradient direction when averaged over the batch) than the entropy loss (see also Sec. A.1 for an illustrative example supporting this claim). The two losses are analogous to Lpl and Lent, but are not based on the cross-entropy H but on the negative log likelihood ratios:
R(ŷ, y^r) = − ∑_c y^r_c log( ŷ_c / ∑_{i≠c} ŷ_i ) = − ∑_c y^r_c ( log ŷ_c − log ∑_{i≠c} ŷ_i ) = H(ŷ, y^r) + ∑_c y^r_c log ∑_{i≠c} ŷ_i
Note that while the entropy H is lower bounded by 0, R can get arbitrarily small if y^r_c → 1 and the sum ∑_{i≠c} ŷ_i → 0, and thus log ∑_{i≠c} ŷ_i → −∞. This property induces non-vanishing gradients for high-confidence predictions.
The first loss we consider is the hard likelihood ratio loss that is defined similarly to the hard pseudo-labels loss Lpl:
Lhlr(ŷ) = R(ŷ, onehot(ŷ)) = − log( ŷ_{c∗} / ∑_{i≠c∗} ŷ_i ) = − log( e^{o_{c∗}} / ∑_{i≠c∗} e^{o_i} ) = −o_{c∗} + log ∑_{i≠c∗} e^{o_i},
1The prediction confidence for a datapoint can be interpreted as a proxy for its distance to the decision boundary. A low confidence prediction indicates that a datapoint appears to be close to the decision boundary and the model is less certain on which side of the decision boundary the datapoint should lie. We call this "low confidence self-supervision" since the direction of the gradient becomes ambiguous.
where c∗ = argmax ŷ. We note that ∂L_hlr/∂o_{c∗} = −1, thus high-confidence self-supervision also contributes equally to the gradient of the maximum logit. This loss was independently proposed as the negative log likelihood ratio loss by Yao et al. (2020) as a replacement for the fully-supervised cross-entropy loss in classification tasks. However, to the best of our knowledge, we are the first to motivate and identify the advantages of this loss for self-supervised learning and test-time adaptation due to its non-saturating gradient property.
In addition to Lhlr, we also account for uncertainty in network predictions during self-labelling in a similar way as for the entropy loss Lent, and propose the soft likelihood ratio loss:
Lslr(ŷ) = R(ŷ, ŷ) = − ∑_c ŷ_c · log( ŷ_c / ∑_{i≠c} ŷ_i ) = ∑_c ŷ_c ( −o_c + log ∑_{i≠c} e^{o_i} )
We note that as ŷ_{c∗} → 1, Lslr(ŷ) → Lhlr(ŷ). Thus, the asymptotic behavior of the two likelihood ratio losses for high-confidence predictions is the same. However, the soft likelihood ratio loss creates lower-amplitude gradients for low-confidence self-supervision. We provide illustrations of the discussed losses and the resulting logits’ gradients in Figure 1. Furthermore, an illustration of other losses like the max square loss and the Charbonnier penalty can be found in Sec. A.7.
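For concreteness, a sketch of both losses computed directly from the logits, following the equations above (our own formulation; the explicit masking of the target logit is done for clarity rather than efficiency):

```python
import torch
import torch.nn.functional as F

def likelihood_ratio_losses(logits):
    """logits: (n, n_classes). Returns per-sample L_hlr and L_slr."""
    n, n_classes = logits.shape
    mask = torch.eye(n_classes, dtype=torch.bool, device=logits.device)
    expanded = logits.unsqueeze(1).repeat(1, n_classes, 1)    # (n, n_classes, n_classes)
    expanded[:, mask] = float("-inf")                         # remove o_c from row c
    lse_wo_c = torch.logsumexp(expanded, dim=-1)              # log sum_{i != c} exp(o_i)
    neg_log_ratio = -logits + lse_wo_c                        # -o_c + log sum_{i != c} e^{o_i}

    c_star = logits.argmax(dim=-1)                            # hard pseudo-label
    l_hlr = neg_log_ratio.gather(1, c_star.unsqueeze(1)).squeeze(1)

    probs = F.softmax(logits, dim=-1)                         # soft pseudo-labels
    l_slr = (probs * neg_log_ratio).sum(dim=-1)
    return l_hlr, l_slr
```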
We note that both likelihood ratio losses would typically encourage the network to simply scale its logits larger and larger, since this would reduce the loss even if the ratios between the logits remain constant. However, when finetuning an existing network and restricting the layers that are adapted such that the logits remain approximately scale-normalized, these losses can provide a useful and non-vanishing gradient signal for network adaptation. We achieve this approximate scale normalization by freezing the top layers of the respective networks. In this case, normalization layers such as batch normalization prohibit “logit explosion”. However, predicted confidences can presumably become overconfident; calibrating confidences in a self-supervised test-time adaptation setting is an open and important direction for future work.
4 EXPERIMENTAL SETTINGS
Datasets We evaluate our method on image classification datasets for corruption robustness and domain adaptation. We evaluate on the challenging benchmark ImageNet-C (Hendrycks & Dietterich, 2019), which includes a wide variety of 15 different synthetic corruptions with 5 severity levels that give rise to data shift. This benchmark also includes 4 additional corruptions as validation data. For domain adaptation, we choose ImageNet-trained models to adapt to ImageNet-R, proposed by Hendrycks et al. (2020). ImageNet-R comprises 30,000 image renditions for 200 ImageNet classes. Domain adaptation on VisDA-C (Peng et al., 2017) and digit classification can be found in Sec. A.6.
Models Our method operates in a fully test-time adaptation setting that allows us to use any arbitrary pretrained model. We use the publicly available ImageNet pretrained models ResNet50, DenseNet121, ResNeXt50, and MobileNetV2 from torchvision (Torch-Contributors, 2020). We also test on a robust ResNet50 model trained using DeepAugment+AugMix2 (Hendrycks et al., 2020).
Baseline for fully test-time adaptation Since TENT from Wang et al. (2020) outperformed competing methods and fits the fully test-time adaptation setting, we consider it as a baseline and compare our results to this approach. Similar to TENT, we also adapt model features by estimating the normalization statistics and optimize only the channel-wise affine parameters on the target distribution.
Settings We conduct test-time adaptation on a target distribution with both online and offline updates using the Adam optimizer with learning rate 0.0006 and batch size 64. We set the weight of Lconf in our loss function to δ = 0.025 and κ = 0.9 in the running estimate pt(y) of Ldiv (we investigate the effect of κ in Sec. A.4). Similar to SHOT (Liang et al., 2020), we also choose the target distribution pD′(y) in Ldiv as a uniform distribution over the available classes. For TENT, we use SGD with momentum 0.9, learning rate 0.00025, and batch size 64. These values correspond to the ones of Wang et al. (2020); alternative settings for TENT did not improve performance. For offline updates, we adapt the models for 5 epochs using a cosine decay schedule of the learning rate. We found that the models converge within 3 to 5 epochs and do not improve further. Similar to Wang et al. (2020), we also control for ordering by data shuffling and sharing the order across the methods.
2From https://github.com/hendrycks/imagenet-r. Owner permitted to use it for research/commercial purposes.
Note that all the hyperparameters are tuned solely on the validation corruptions of ImageNet-C that are disjoint from the test corruptions. As discussed in Section 3.2.2, we freeze all trainable parameters in the top layers of the networks to prohibit “logit explosion”. Normalization statistics are still updated in these layers. Sec. A.3 provides more details regarding frozen layers in different networks.
Furthermore, we prepend a trainable input transformation module d (cf. Sec. 3.1) to the network to partially counteract the data shift. Note that the parameters of this module discussed in Sec. 3.1 are trainable and subject to optimization. This module is initialized to operate as an identity function prior to adaptation on a target distribution by choosing τ = 1, γ = 1, and β = 0. We adapt the parameters of this module along with the channel-wise affine transformations and normalization statistics in an end-to-end fashion, solely using our proposed loss function along with the optimization details mentioned above. The architecture of this module is discussed in Sec. A.2.
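Putting these pieces together, a single adaptation step could look roughly as follows (a sketch only: diversity_loss and likelihood_ratio_losses refer to the illustrative helpers sketched in Section 3, params is assumed to collect the normalization affine parameters plus the parameters of the input module d, and the hyperparameters are the ones stated above):

```python
import torch

# params: normalization affine parameters of f plus all parameters of the input module d
optimizer = torch.optim.Adam(params, lr=6e-4)

def adaptation_step(model, x, p_prev, delta=0.025, kappa=0.9):
    """One test-time adaptation step of g = f o d on a target mini-batch x."""
    logits = model(x)
    probs = torch.softmax(logits, dim=-1)
    l_div, p_t = diversity_loss(probs, p_prev, kappa=kappa)
    _, l_slr = likelihood_ratio_losses(logits)   # soft likelihood ratio variant
    loss = l_div + delta * l_slr.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return p_t                                   # running class-distribution estimate
```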
Since Ldiv is independent of Lconf, we also propose to combine Ldiv with TENT, i.e., L = Ldiv + Lent. We denote this as TENT+ and also set κ = 0.9 here. Note that TENT optimizes all channel-wise affine parameters in the network (since entropy is saturating and does not cause logit explosion). For a fair comparison to our method, we also freeze the top layers of the networks in TENT+. We show that adding Ldiv and freezing the top layers significantly improves the network's performance over TENT. Note that SHOT (Liang et al., 2020) is the combination of TENT, a batch-level diversity regularizer, and their pseudo labeling strategy. TENT+ can be seen as a variant of SHOT but without the pseudo labeling. Please refer to Sec. A.5 for the test-time adaptation of pretrained models with SHOT.
Note that each corruption and severity in ImageNet-C is treated as a different target distribution and we reset model parameters to their pretrained values before every adaptation. We run our experiments three times with random seeds (2020, 2021, 2022) in PyTorch and report the average accuracies.
5 RESULTS
Evaluation on ImageNet-C We adapt different models on the ImageNet-C benchmark using TENT, TENT+, and both hard likelihood ratio (HLR) and soft likelihood ratio (SLR) losses in an online adaptation setting. Figure 2 (top row) depicts the mean corruption accuracy (mCA%) of each model computed across all corruptions and severity levels. It can be observed that TENT+ improves over TENT, showcasing the importance of a diversity regularizer Ldiv. Importantly, our methods HLR and SLR outperform TENT and TENT+ across DenseNet121, MobileNetV2, ResNet50, and ResNeXt50, and perform comparably to TENT+ on the robust ResNet50-DeepAugment+Augmix model. This shows that the mCA% of the robust DeepAugment+Augmix model can be further increased from 58% (before adaptation) to 67.5% using test-time adaptation techniques. Here, the averages of mCA obtained from three different random seeds are depicted along with error bars. The small error bars indicate that the test-time adaptation results are not sensitive to the choice of random seed.
We also illustrate the performance of ResNet50 at the highest severity level across all 15 test corruptions of ImageNet-C in Table 1. Here, online adaptation results are reported along with offline adaptation after epochs 1 and 5. It can be seen that online adaptation and a single epoch of test-time adaptation improve performance significantly, with only minor further improvements until epoch 5. TENT adaptation for more than one epoch results in reduced performance, and TENT with Ldiv (TENT+) prevents this behavior. Both HLR and SLR clearly and consistently outperform TENT / TENT+ on ResNet50, and SLR outperforms HLR. We also compare our results with the hard pseudo-labels (PL) objective and with an oracle setting where the ground-truth labels of the target data are used for adapting the model in a supervised manner (GT). Note that this oracle setting is not of practical importance but illustrates the empirical upper bound on fully test-time adaptation performance under the chosen modulation parametrization.
ImageNet-R We online adapt different models on ImageNet-R and depict the results in Figure 2 (middle row). Results show that HLR and SLR clearly outperform TENT and TENT+ and significantly improve performance of all the models, including the model pretrained with DeepAugment+Augmix.
Evaluation with data subsets Above we evaluate the model on the same data that is also used for the test-time adaptation. Here, we test model generalization by adapting on a subset of target data
and evaluate the performance on the whole dataset (in offline setting), which also includes unseen data that is not used for adaptation. We conduct two case studies: (i) adapt on the data from a subset of ImageNet classes and evaluate the performance on the data from all the classes. (ii) Adapt only on a subset of data from each class and test on all seen and unseen samples from the whole dataset.
Figure 3 illustrates generalization of a ResNet50 adapted on different proportions of the data across different corruptions, both in terms of classes and samples. We observe that adapting a model on a small subset of samples and classes is sufficient to achieve reasonable accuracy on the whole target data. This suggests that the adaptation actually learns to compensate for the data shift rather than overfitting to the adapted samples or classes. The performance of TENT decreases as the number of classes/samples increases, because Lent can converge to trivial collapsed solutions and more data corresponds to more update steps during adaptation. Adding Ldiv, as in TENT+, stabilizes the adaptation process and reduces this issue. Reported are averages over random seeds with error bars.
Input transformation We investigate whether the input transformation (IT) module, trained end-to-end with a ResNet50 and SLR loss on data of the respective distortion without seeing any source (undistorted) data, can partially undo certain domain shifts of ImageNet-C and also increase accuracy on corrupted data. We measure domain shift via the structural similarity index measure (SSIM) (Wang et al., 2004) between the clean image (unseen by the model) and its distorted version/the output of IT on the distorted version. Following the offline adaptation setting, Table 2 shows that IT increases the SSIM considerably on certain distortions such as Impulse, Contrast, Snow, and Frost. IT also increases SSIM for other types of noise distortions, while it slightly reduces SSIM for the blur distortions, Elastic, Pixelate, and JPEG. When combined with SLR, IT considerably increases accuracy on distortions for which SSIM also increased significantly (for instance +20 percentage points on Impulse, +4 percentage points on Contrast) and never reduces accuracy by more than 0.11 percentage points. More results on online and offline adaptation with TENT / TENT+ can be found in Table A3.
Clean images As a sanity check, we investigate the effect of test-time adaptation when target data comes from the same distribution as the training data. For this, we online adapt pretrained models on clean validation data of ImageNet. The results in Figure 2 (bottom row) show that the performance of SLR/HLR adapted models drops by 0.8 to 1.8 percentage points compared to the pretrained model. We attribute this drop to self-supervision being less reliable than the original full supervision on in-distribution training data. The drop is smaller for TENT and TENT+, presumably because predictions on in-distribution target data are typically highly confident such that there is little gradient and thus little change to the pretrained networks by TENT. In summary, while self-supervision by confidence maximization is a powerful method for adaptation to domain shift, the observed drop when adapting to data from the source domain indicates that there is “no free lunch” in test-time adaptation.
6 CONCLUSION
We propose a method to improve corruption robustness and domain adaptation of models in a fully test-time adaptation setting. Unlike entropy minimization, our proposed loss functions provide non-vanishing gradients for highly confident predictions and thus contribute to improved adaptation in a self-supervised manner. We also show that additional diversity regularization on the model predictions is crucial to prevent trivial solutions and stabilize the adaptation process. Lastly, we introduce a trainable input transformation module that partially refines the corrupted samples to support the adaptation. We show that our method improves corruption robustness on ImageNet-C and domain adaptation to ImageNet-R for different ImageNet models. We also show that adaptation on a small fraction of data and classes is sufficient to generalize to unseen target data and classes.
7 ETHICS STATEMENT
We abide by the general ethical principles listed by the ICLR code of ethics. Our work does not involve the study of human subjects or dataset releases, and does not raise potential conflicts of interest, discrimination/bias/fairness concerns, or privacy and security issues. Our non-saturating loss increases accuracy but might result in over-confident predictions, which can cause harm in safety-critical downstream applications when not properly calibrated. At the same time, self-supervised confidence maximization might amplify bias in pretrained models. We hope that the diversity regularizer in the loss partially compensates for this issue.
8 REPRODUCIBILITY STATEMENT
We provide complete details of our experimental setup for reproducibility. Sec. 4 provides details of the network architectures, optimizer, learning rate, batch size, choice of hyperparameters of our method and the random seeds used for generating the results. Sec. A.3 provides more details regarding frozen layers in different networks. Sec. A.2 shows the structure of input transformation module used in this work. We will also provide a link to an anonymous downloadable source code as a comment directed to the reviewers and area chairs in the discussion forum.
A APPENDIX
A.1 ILLUSTRATIVE EXAMPLE OF LOG LIKELIHOOD RATIO ADAPTATION OBJECTIVE
A simple 1D example is devised to illustrate the benefits of the proposed log likelihood ratio losses as test-time adaptation objectives. Consider data points (unlabeled) that are sampled from the following bimodal distribution: 0.5 · N(−1, 3) + 0.5 · N(+1, 3), that is, half of the samples come from a normal distribution with mean -1 and the other half from a normal distribution with mean +1 (both having standard deviation 3). We can interpret these two components of the mixture distribution as corresponding to data of two different classes, but class labels are of course unavailable during unsupervised test-time adaptation.
We assume a simple logistic model of the form pθ(y = 1|x) = 1/(1 + e^{−(x+θ)}), where x is the value of the data sample and θ is a scalar offset that determines the decision boundary. By construction, we know that the minimum density of the mixture distribution on [−1, 1] is at 0. Since confidence maximization aims at moving the decision boundary to regions in input space with minimum data density (in this case to 0), we can compare different self-supervised confidence maximization losses in the finite data regime as follows: for every finite data sample with N data points {xi} for i = 1, . . . , N and loss function L, we solve θ∗(L) = argmin_{θ∈[−1,1]} L(θ, {xi}), where the loss (such as entropy or SLR) is averaged over all data points. The absolute value |θ∗(L)| then gives an estimate of the error of the decision boundary parameter for the given data set and loss function. Table A1 provides this error for different loss functions and different numbers of data samples. It can be seen that SLR and HLR clearly outperform the entropy loss (TENT) in all data regimes. The difference between SLR and HLR is generally very small. While SLR seems to be consistently slightly better than HLR, this difference is not statistically significant. We attribute the superiority of SLR/HLR compared to entropy to the fact that all data points have a non-saturating loss, regardless of their distance to the decision boundary. Thus, all data contributes to localizing the decision boundary, while for saturating losses such as the entropy, effectively only "nearby" points determine the decision boundary. This example illustrates that our proposed non-saturating losses are beneficial over the entropy loss for self-supervised confidence maximization.
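A compact re-implementation of this setup (our own sketch; the grid search directly mirrors the argmin over θ ∈ [−1, 1], and the binary forms of the losses follow the definitions in Section 3.2.2):

```python
import numpy as np

def entropy_loss(theta, x):
    p = np.clip(1.0 / (1.0 + np.exp(-(x + theta))), 1e-12, 1 - 1e-12)
    return np.mean(-p * np.log(p) - (1 - p) * np.log(1 - p))

def slr_loss(theta, x):
    # Binary soft likelihood ratio: -p*log(p/(1-p)) - (1-p)*log((1-p)/p)
    p = np.clip(1.0 / (1.0 + np.exp(-(x + theta))), 1e-12, 1 - 1e-12)
    r = np.log(p) - np.log(1 - p)
    return np.mean(-p * r + (1 - p) * r)

rng = np.random.default_rng(0)
n = 1000
x = np.concatenate([rng.normal(-1.0, 3.0, n // 2), rng.normal(1.0, 3.0, n // 2)])
thetas = np.linspace(-1.0, 1.0, 2001)
for name, loss in [("entropy", entropy_loss), ("SLR", slr_loss)]:
    theta_star = thetas[int(np.argmin([loss(t, x) for t in thetas]))]
    print(name, abs(theta_star))  # error of the decision boundary parameter |theta*|
```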
Table A1: Illustrates the error of the decision boundary parameter for different loss functions and different number of samples averaged over 100 runs (shown are mean and standard error of mean).
#samples 100 200 500 1000 2000 10000 20000
Entropy 0.487±0.031 0.364±0.029 0.230±0.018 0.152±0.013 0.117±0.009 0.052±0.004 0.033±0.003 HLR 0.357±0.023 0.234±0.018 0.145±0.012 0.094±0.008 0.071±0.006 0.032±0.002 0.022±0.002 SLR 0.332±0.022 0.214±0.017 0.140±0.011 0.088±0.008 0.067±0.006 0.032±0.002 0.021±0.002
A.2 INPUT TRANSFORMATION MODULE
Note that we define our adaptable model as g = f ◦ d, where d is a trainable network prepended to a pretrained neural network f (e.g., pretrained ResNet50). We choose d(x) = γ ·[τx+ (1− τ)rψ(x)]+ β, where τ ∈ R, (β, γ) ∈ Rnin with nin being the number of input channels, rψ being a network with identical input and output shape, and · denoting elementwise multiplication. Here, β and γ implement a channel-wise affine transformation and τ implements a convex combination of unchanged input and the transformed input rψ(x). We set τ = 1, γ = 1, and β = 0, to ensure that d(x) = x and thus g = f at initialization. In principle, rψ can be chosen arbitrarily. Here, we choose rψ as a simple stack of 3× 3 convolutions with stride 1 and padding 1, group normalization, and ReLUs without any upsampling/downsampling layers. Specifically, the structure of g is illustrated in Figure A1.
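A minimal sketch of such an rψ (the number of blocks and the hidden width below are illustrative choices of ours, not necessarily those of Figure A1):

```python
import torch.nn as nn

def make_r_psi(n_channels: int = 3, hidden: int = 16, n_blocks: int = 2) -> nn.Sequential:
    """Stack of 3x3 convolutions (stride 1, padding 1), GroupNorm, and ReLU with identical
    input and output shape; no upsampling or downsampling layers."""
    layers, in_ch = [], n_channels
    for _ in range(n_blocks):
        layers += [nn.Conv2d(in_ch, hidden, kernel_size=3, stride=1, padding=1),
                   nn.GroupNorm(num_groups=4, num_channels=hidden),
                   nn.ReLU(inplace=True)]
        in_ch = hidden
    layers.append(nn.Conv2d(in_ch, n_channels, kernel_size=3, stride=1, padding=1))
    return nn.Sequential(*layers)
```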
In addition to the results reported in Table 2, we also compare TENT and TENT+ with and without Input Transformation (IT) module on ResNet50 for all corruptions at severity level 5 in both online adaptation setting and offline adaptation with 5 epochs in Table A3. Furthermore, we also present the qualitative results of the image transformations from the input transformation module adapted with SLR (offline setting) in Figure A2.
Table A2: Ablation study on the components of input transformation module on ResNet50 for all corruptions at severity level 5.
Corruption Gauss Shot Impulse Defocus Glass Motion Zoom Snow Frost Fog Bright Contrast Elastic Pixel JPEG mean
x 41.52 42.90 44.07 41.69 40.78 54.76 56.59 57.35 51.01 63.53 68.72 50.65 61.49 63.46 58.32 53.12 rψ(x) 13.17 26.57 28.81 5.09 3.61 30.61 49.79 53.73 45.96 58.82 65.79 53.73 56.77 60.14 53.38 40.40 τx+ (1− τ)rψ(x) 43.13 46.43 56.25 41.80 40.90 55.75 56.65 58.55 51.72 63.59 68.83 53.89 61.50 63.73 58.51 54.74 γ · [τx+ (1− τ)rψ(x)] + β 43.18 46.24 56.21 41.91 40.89 55.79 56.66 58.50 51.72 63.56 68.83 54.26 61.49 63.76 58.52 54.76
Table A3: Test-time adaptation of ResNet50 on ImageNet-C at highest severity level 5 with and without Input Transformation (IT) module. Reported are the mean accuracy(%) across three random seeds (2020/2021/2022). While IT also improves performance when combined with TENT+, it is still clearly outperformed by SLR+IT.
Method Gauss Shot Impulse Defocus Glass Motion Zoom Snow Frost Fog Bright Contrast Elastic Pixel JPEG
Online adaptation (evaluation on a batch directly after adaptation on the batch)
TENT 28.60 31.06 30.54 29.09 28.07 42.32 50.39 48.01 42.05 58.40 68.20 27.25 55.68 59.46 53.64
TENT + IT 28.99 31.73 31.15 28.87 27.85 42.43 50.36 48.02 41.95 58.37 68.19 24.35 55.68 59.49 53.57
TENT+ 29.09 31.65 30.68 29.33 28.65 42.32 50.32 48.09 42.54 58.39 68.23 31.43 55.90 59.46 53.68
TENT+ + IT 29.48 32.34 31.38 29.06 28.42 42.43 50.33 48.11 42.47 58.40 68.20 32.11 55.87 59.49 53.64
SLR (ours) 35.11 37.93 36.83 35.13 35.13 48.29 53.45 52.68 46.52 60.74 68.40 44.78 58.74 61.13 55.97
SLR + IT (ours) 36.19 39.17 40.46 35.17 34.87 48.67 53.62 52.71 46.93 60.66 68.30 46.55 58.79 61.27 55.93
Evaluation after epoch 5
TENT 30.64 33.80 34.72 30.13 29.05 49.08 53.63 52.86 38.47 61.13 68.81 10.72 59.25 62.15 56.44
TENT + IT 31.92 36.02 38.14 30.44 28.68 49.04 53.59 52.99 38.76 61.14 68.84 13.52 59.23 62.15 56.56
TENT+ 35.19 38.12 37.43 34.82 34.95 50.33 54.24 53.88 46.28 61.50 69.07 29.87 60.01 62.61 57.09
TENT+ + IT 36.13 39.84 41.03 34.62 34.72 50.33 54.10 53.91 46.46 61.54 69.07 30.22 59.95 62.72 57.11
SLR (ours) 41.52 42.90 44.07 41.69 40.78 54.76 56.59 57.35 51.01 63.53 68.72 50.65 61.49 63.46 58.32
SLR+IT (ours) 43.09 44.39 64.05 41.98 40.99 55.73 56.75 58.56 51.68 63.64 68.85 55.01 61.32 63.59 58.24
A.2.1 CONTRIBUTION OF EACH COMPONENT IN INPUT TRANSFORMATION MODULE
Table A2 shows the results of an ablation study on the components of the input transformation module on ResNet50 for all corruptions at severity level 5, adapted with SLR for 5 epochs. The ablation study includes: (1) no input transformation module, d(x) = x, (2) only the network, d(x) = rψ(x), (3) including τ, (4) including the channel-wise affine transformation γ and β. We observe that inputs transformed with the network rψ alone, without the convex combination via τ, degrade performance. The additional channel-wise affine transformations did not bring further consistent improvements and can be omitted from the transformation module. Exploring other architectural choices and training (or pretraining) strategies for the input transformation module would be an interesting avenue for future work.
A.3 FROZEN LAYERS IN DIFFERENT NETWORKS
As discussed in Section 3.2.2, we freeze all trainable parameters in the top layers of the networks to prohibit “logit explosion”. That is, we do not optimize the channel-wise affine transformations of the top layers, but normalization statistics are still estimated. Similar to the hyperparameters of the test-time adaptation settings, the choice of these layers is made using ImageNet-C validation data. We list the frozen layers of each architecture below (a minimal sketch of the resulting parameter selection follows the list). Note that the naming convention of these layers is based on the model definitions in torchvision:
• DenseNet121 - features.denseblock4, features.norm5.
• MobileNetV2 - features.16, features.17, features.18.
• ResNeXt50, ResNet50 and ResNet50 (DeepAugment+Augmix) - layer4.
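The sketch below illustrates how such a parameter selection could look in PyTorch; the helper name and the exact module filtering are hypothetical, but they follow the recipe above (optimize only channel-wise affine parameters of normalization layers outside the frozen top layers, while statistics are still re-estimated everywhere).

import torch.nn as nn

def adaptable_parameters(model, frozen_prefixes=("layer4",)):
    """Collect normalization affine parameters outside the frozen top layers (e.g. ResNet50's layer4).
    Keeping the model in train() mode still lets BatchNorm layers re-estimate their statistics
    everywhere, including inside the frozen layers."""
    params = []
    for name, module in model.named_modules():
        if isinstance(module, (nn.BatchNorm2d, nn.GroupNorm, nn.LayerNorm)):
            frozen = any(name.startswith(prefix) for prefix in frozen_prefixes)
            for p in (module.weight, module.bias):
                if p is None:
                    continue
                p.requires_grad_(not frozen)
                if not frozen:
                    params.append(p)
    return params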
A.3.1 RESULTS WITHOUT FREEZING THE TOP LAYERS
We mentioned that the proposed losses could alternatively encourage the network to let the logits grow larger and larger and still reduce the loss. However, we did not find any considerable differences empirically in the explored settings when adapting the model with or without freezing the top layers. We found that adapting the model with and without freezing the top layers has comparable performance in both online and offline adaptation settings, as shown in Table A4. However, we would still recommend freezing the top-most layers as the default choice to be on the safe side. These results indicate that the early layers capture the distribution shift sufficiently to improve the model adaptation.
Table A4: Comparing the online and offline adaptation results with and without freezing the affine parameters of top normalization layers of ResNet50 at severity 5. Here, "Freeze" and "NoFreeze" refer to the setting with and without freezing the top affine layers respectively.
Corruption Gauss Shot Impulse Defocus Glass Motion Zoom Snow Frost Fog Bright Contrast Elastic Pixel JPEG mean
Online evaluation
TENT+ NoFreeze 29.05 31.32 30.32 28.95 28.29 42.37 50.45 48.12 42.21 58.51 68.29 28.17 55.57 59.47 53.46 43.63
TENT+ Freeze 29.21 31.54 30.55 29.17 28.60 42.54 50.47 48.18 42.51 58.50 68.30 31.25 55.76 59.54 53.62 43.98
HLR NoFreeze 33.73 36.50 35.63 33.99 33.88 46.55 52.76 51.44 45.82 59.74 67.37 43.19 57.69 59.77 54.95 47.53
HLR Freeze 33.10 36.08 34.74 33.21 33.31 46.36 52.77 51.42 45.47 60.01 68.07 42.75 58.02 60.42 55.34 47.40
SLR NoFreeze 35.61 38.37 37.50 35.83 35.81 48.29 53.61 52.62 46.85 60.42 67.71 44.93 58.43 60.56 55.65 48.81
SLR Freeze 35.11 37.93 36.83 35.13 35.13 48.29 53.45 52.68 46.52 60.74 68.40 44.78 58.74 61.13 55.97 48.72
offline evaluation
TENT+ NoFreeze 32.03 35.33 35.28 31.92 31.27 49.20 53.79 53.01 40.37 61.22 68.79 19.38 59.25 62.20 56.51 45.97
TENT+ Freeze 35.19 38.12 37.43 34.82 34.95 50.33 54.24 53.88 46.28 61.50 69.07 29.87 60.01 62.61 57.09 48.35
HLR NoFreeze 41.60 43.80 43.89 42.21 41.50 53.82 56.21 56.71 50.83 62.74 67.87 51.34 60.65 62.58 57.70 52.89
HLR Freeze 41.37 44.04 43.68 41.74 41.09 54.26 56.43 57.03 50.81 63.05 68.29 50.98 61.15 63.08 58.13 53.0
SLR NoFreeze 41.45 43.95 44.26 42.56 41.60 54.25 56.13 56.72 50.92 62.97 68.02 50.99 60.90 62.83 57.86 53.02
SLR Freeze 41.52 42.90 44.07 41.69 40.78 54.76 56.59 57.35 51.01 63.53 68.72 50.65 61.49 63.46 58.32 53.12
A.4 EFFECT OF κ
Note that the running estimate of Ldiv prevents the model from collapsing to trivial solutions, i.e., predicting only a single class or a small set of classes regardless of the input samples. Ldiv encourages the model to match its empirical distribution of predictions to the class distribution of the target data (a uniform distribution in our experiments). Such diversity regularization is crucial because there is no direct supervision assigning samples to different classes, and it thus helps to avoid collapsed trivial solutions. In Figure A3, we investigate different values of κ on the validation corruptions of ImageNet-C to study their effect on our approach. It can be observed that both HLR and SLR without Ldiv lead to collapsed solutions (e.g., accuracy drops to 0%) on some of the corruptions, and the performance gains are not consistent across all corruptions. On the other hand, Ldiv with κ = 0.9 remains consistent and improves performance across all corruptions.
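A minimal sketch of this running-estimate diversity regularizer is shown below (a uniform target class distribution is assumed, as in our experiments; the class name is hypothetical).

import torch

class RunningDiversityLoss:
    """L_div = KL(p_t(y) || uniform) with p_t(y) = kappa * sg(p_{t-1}(y)) + (1 - kappa) * p_t^emp."""

    def __init__(self, num_classes, kappa=0.9):
        self.kappa = kappa
        self.p_running = torch.full((num_classes,), 1.0 / num_classes)

    def __call__(self, logits):
        p_emp = logits.softmax(dim=1).mean(dim=0)          # empirical prediction distribution of the batch
        p_prev = self.p_running.detach().to(p_emp.device)  # stop-gradient on the history
        p_t = self.kappa * p_prev + (1.0 - self.kappa) * p_emp
        self.p_running = p_t.detach().cpu()
        uniform = torch.full_like(p_t, 1.0 / p_t.numel())
        return torch.sum(p_t * (torch.log(p_t + 1e-12) - torch.log(uniform)))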
A.5 TEST-TIME ADAPTATION OF PRETRAINED MODELS WITH SHOT
Following SHOT (Liang et al., 2020), we use their pseudo labeling strategy on the ImageNet pretrained ResNet50 in combination with TENT+, HLR and SLR. Note that TENT+ and the pseudo labeling strategy jointly form the method SHOT. The pseudo labeling starts after the 1st epoch and is recomputed at every subsequent epoch. The weight for the loss computed on the pseudo labels is set to 0.3, similar to (Liang et al., 2020); different values for this weight were explored and 0.3 was found to perform best. Table A6 compares the results of the methods with and without the pseudo labeling strategy. It can be observed that the results with the pseudo labeling strategy are worse than those obtained without it.
We further modified the pretrained ResNet50 by following the network modifications suggested in (Liang et al., 2020), which include adding a bottleneck layer with BatchNorm and applying weight normalization on the linear classifier, along with label smoothing during training, to facilitate the pseudo labeling strategy. Table A7 shows that the pseudo labeling strategy on such a network improves the results of TENT+ from epoch 1 to epoch 5. However, no improvements are observed for SLR. Moreover, Table A8 shows that not applying the pseudo labeling strategy on the same network performs better than applying it. Finally, the no-pseudo-labeling results from Tables A6 and A8 show that these additional modifications to ResNet50 do not improve performance compared to the standard ResNet50.
A.6 DOMAIN ADAPTATION ON VISDA-C AND DIGIT CLASSIFICATION
VisDA-C: We extended our experiments to VisDA-C. We followed a similar network architecture to SHOT (Liang et al., 2020) and evaluated TENT+ and our SLR loss function with the diversity regularizer. Similar to ImageNet-C, we adapted only the channel-wise affine parameters of the batch normalization layers for 5 epochs using the Adam optimizer with a cosine learning-rate decay schedule and initial value 2e−5. Here, the batch size is set to 64, the weight of Lconf in our loss function to δ = 0.25, and κ = 0 in the running estimate pt(y) of Ldiv, since the number of classes in this dataset (12) is smaller than the batch size. Setting κ = 0 corresponds to the batch-wise diversity regularizer. Table A9 shows average results from three different random seeds and also shows that SLR outperforms TENT+ on this dataset.
Table A5: Test-time adaptation of ResNet50 on ImageNet-C at highest severity level 5. Same as Table 1 with error bars.
name Epoch 1 Epoch 5
corruption No adaptation PL TENT TENT+ HLR SLR TENT TENT+ HLR SLR
Gauss 2.44 2.44 32.44±0.10 33.75±0.09 38.39±0.25 39.51±0.23 30.64±0.51 35.19±0.17 41.37±0.09 41.52±0.08
Shot 2.99 2.99 35.01±0.17 36.38±0.19 41.11±0.13 42.09±0.26 33.80±0.74 38.12±0.10 44.04±0.09 42.90±0.08
Impulse 1.96 1.96 34.77±0.09 35.67±0.15 40.28±0.20 41.58±0.04 34.72±1.01 37.43±0.09 43.68±0.06 44.07±0.06
Defocus 17.92 17.92 32.40±0.10 33.43±0.14 38.25±0.32 39.35±0.13 30.13±0.61 34.82±0.25 41.74±0.12 41.69±0.07
Glass 9.82 9.82 31.62±0.15 33.25±0.01 38.18±0.08 39.02±0.09 29.05±0.21 34.95±0.13 41.09±0.17 40.78±0.08
Motion 14.78 14.78 47.23±0.11 47.66±0.12 51.63±0.08 52.67±0.25 49.08±0.08 50.33±0.07 54.26±0.02 54.76±0.04
Zoom 22.50 22.50 53.09±0.06 53.20±0.07 55.55±0.06 55.80±0.07 53.63±0.16 54.24±0.06 56.43±0.07 56.59±0.05
Snow 16.89 16.89 51.61±0.05 52.06±0.09 55.45±0.11 55.92±0.06 52.86±0.13 53.88±0.07 57.03±0.12 57.35±0.03
Frost 23.31 23.31 43.26±0.30 44.85±0.20 48.96±0.07 49.64±0.14 38.47±0.50 46.28±0.27 50.81±0.08 51.01±0.02
Fog 24.43 24.43 60.42±0.08 60.60±0.05 62.19±0.03 62.62±0.04 61.13±0.08 61.50±0.05 63.05±0.04 63.53±0.08
Bright 58.93 58.93 68.85±0.02 68.93±0.03 68.17±0.01 68.47±0.05 68.81±0.06 69.07±0.06 68.29±0.09 68.72±0.10
Contrast 5.43 5.43 24.39±0.98 33.43±0.77 49.47±0.20 50.27±0.08 10.72±0.32 29.87±1.36 50.98±2.54 50.65±0.55
Elastic 16.95 16.95 58.53±0.05 58.94±0.05 60.34±0.18 60.80±0.08 59.25±0.06 60.01±0.02 61.15±0.04 61.49±0.07
Pixel 20.61 20.61 61.62±0.06 61.75±0.07 62.51±0.10 63.01±0.08 62.15±0.04 62.61±0.08 63.08±0.06 63.46±0.08
JPEG 31.65 31.65 56.00±0.09 56.21±0.05 57.42±0.13 57.80±0.04 56.44±0.07 57.09±0.02 58.13±0.09 58.32±0.05
Table A6: Test-time adaptation of ResNet50 on ImageNet-C at highest severity level 5 with and without the pseudo labeling strategy (Liang et al., 2020).
name No pseudo labeling: Epoch 5 Pseudo labeling: Epoch 5
corruption No adaptation TENT+ HLR SLR TENT+ HLR SLR
Gauss 2.44 33.97±0.17 41.37±0.09 41.52±0.08 34.08±0.11 34.88±0.35 35.58±0.06
Shot 2.99 37.95±0.10 44.04±0.09 42.90±0.08 36.74±0.26 37.61±0.49 37.98±0.19
Impulse 1.96 36.93±0.09 43.68±0.06 44.07±0.06 36.69±0.04 37.24±0.22 37.77±0.05
Defocus 17.92 32.69±0.25 41.74±0.12 41.69±0.07 33.99±0.28 34.76±0.11 35.11±0.10
Glass 9.82 33.36±0.13 41.09±0.17 40.78±0.08 34.06±0.12 34.51±0.30 34.81±0.27
Motion 14.78 51.42±0.07 54.26±0.02 54.76±0.04 50.91±0.09 48.96±0.39 49.46±0.20
Zoom 22.50 54.33±0.06 56.43±0.07 56.59±0.05 54.10±0.10 52.49±0.02 52.50±0.23
Snow 16.89 54.55±0.07 57.03±0.12 57.35±0.03 54.06±0.08 52.49±0.19 52.95±0.07
Frost 23.31 45.80±0.27 50.81±0.08 51.01±0.02 44.44±0.07 45.47±0.26 46.06±0.20
Fog 24.43 62.09±0.05 63.05±0.04 63.53±0.08 61.91±0.08 59.66±0.14 59.98±0.12
Bright 58.93 69.03±0.06 68.29±0.09 68.72±0.10 68.98±0.02 65.59±0.06 66.00±0.03
Contrast 5.43 24.08±1.36 50.98±2.54 50.65±0.55 29.37±0.95 44.58±0.38 45.64±0.47
Elastic 16.95 60.36±0.02 61.15±0.04 61.49±0.07 60.23±0.05 57.48±0.14 57.87±0.04
Pixel 20.61 63.10±0.08 63.08±0.06 63.46±0.08 62.98±0.04 59.72±0.02 60.05±0.14
JPEG 31.65 57.21±0.02 58.13±0.09 58.32±0.05 57.09±0.04 54.72±0.09 54.88±0.07
Domain adaptation from SVHN to MNIST / MNIST-M / USPS: ResNet26 is trained on the SVHN dataset for 50 epochs with batch size 128, using SGD with momentum 0.9 and initial learning rate 0.01, which drops to 0.001 and 0.0001 at the 25th and 40th epochs, respectively. ResNet26 obtains 96.49% test accuracy on SVHN. Domain adaptation of SVHN trained ResNet26 to MNIST/MNIST-M/USPS
Table A7: Test-time adaptation of modified ResNet50 (following (Liang et al., 2020)) on ImageNet-C at highest severity level 5 with pseudo labeling strategy at epoch 1 and epoch 5.
name Pseudo labeling: Epoch 1 Pseudo labeling: Epoch 5
corruption No adaptation TENT+ HLR SLR TENT+ HLR SLR
Gauss 2.95 31.03±0.18 34.65±0.28 37.21±0.23 35.26±0.16 35.93±0.23 37.61±0.30
Shot 3.65 33.55±0.07 38.09±0.30 40.30±0.09 37.39±0.05 38.95±0.16 40.42±0.06
Impulse 2.54 32.70±0.07 36.95±0.05 39.73±0.07 38.16±0.08 38.13±0.04 40.12±0.11
Defocus 19.36 31.66±0.15 35.08±0.05 37.18±0.15 35.95±0.17 36.72±0.13 37.96±0.25
Glass 9.72 31.06±0.06 35.46±0.12 37.62±0.10 35.98±0.04 36.84±0.11 37.90±0.02
Motion 15.66 46.96±0.12 49.95±0.12 51.87±0.14 52.24±0.02 51.90±0.12 52.76±0.09
Zoom 22.20 52.45±0.02 54.15±0.22 54.84±0.18 54.80±0.07 54.84±0.09 54.95±0.14
Snow 17.56 51.79±0.05 53.98±0.06 55.44±0.04 55.15±0.02 55.27±0.20 55.75±0.02
Frost 24.11 45.59±0.06 47.87±0.03 48.96±0.11 48.10±0.20 48.52±0.11 49.13±0.20
Fog 25.59 60.33±0.03 61.55±0.10 62.21±0.16 62.39±0.03 62.38±0.12 62.38±0.11
Bright 58.30 68.84±0.04 68.44±0.04 68.60±0.10 69.13±0.04 68.50±0.02 68.47±0.09
Contrast 6.49 42.34±0.19 47.98±0.13 50.32±0.28 42.11±0.15 49.22±0.42 50.80±0.19
Elastic 17.72 58.47±0.02 59.70±0.06 60.30±0.09 60.40±0.04 60.27±0.22 60.45±0.21
Pixel 21.29 61.39±0.06 62.10±0.07 62.71±0.10 63.04±0.02 62.71±0.07 62.81±0.07
JPEG 32.13 55.22±0.03 56.49±0.07 57.04±0.07 57.21±0.06 57.25±0.07 57.37±0.05
Table A8: Test-time adaptation of modified ResNet50 (following (Liang et al., 2020)) on ImageNet-C at highest severity level 5 with and without pseudo labeling strategy.
name No Pseudo labeling: Epoch 5 Pseudo labeling: Epoch 5
corruption No adaptation TENT+ HLR SLR TENT+ HLR SLR
Gauss 2.95 34.96±0.08 38.58±0.12 39.72±0.13 35.26±0.16 35.93±0.23 37.61±0.30
Shot 3.65 37.22±0.17 41.59±0.09 42.45±0.05 37.39±0.05 38.95±0.16 40.42±0.06
Impulse 2.54 37.82±0.04 40.88±0.07 42.39±0.03 38.16±0.08 38.13±0.04 40.12±0.11
Defocus 19.36 34.46±0.12 39.22±0.15 39.78±0.09 35.95±0.17 36.72±0.13 37.96±0.25
Glass 9.72 35.12±0.05 38.83±0.13 39.37±0.07 35.98±0.04 36.84±0.11 37.90±0.02
Motion 15.66 51.91±0.09 53.23±0.05 54.00 52.24±0.02 51.90±0.12 52.76±0.09
Zoom 22.20 54.57±0.05 55.76±0.04 55.79±0.02 54.80±0.07 54.84±0.09 54.95±0.14
Snow 17.56 55.02±0.05 56.35±0.12 56.80±0.04 55.15±0.02 55.27±0.20 55.75±0.02
Frost 24.11 48.18±0.09 49.86±0.22 50.43±0.08 48.10±0.20 48.52±0.11 49.13±0.20
Fog 25.59 62.24±0.04 62.90±0.06 63.29±0.06 62.39±0.03 62 |
1. What is the focus and contribution of the paper on test-time BN adaptation?
2. What are the strengths of the proposed approach, particularly in terms of the diversity maximization loss and confidence maximization loss?
3. What are the weaknesses of the paper, especially regarding the effectiveness of the diversity loss and the limited novelty of the approach?
4. Do you have any concerns or suggestions for improving the proposed method?
5. What are the limitations of the paper, and how do the authors address them? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a new loss to improve test-time BN adaptation for domain adaptation. The proposed loss consists of two components: the diversity maximization loss and the confidence maximization loss. Specifically, they use a running estimate for the diversity loss based on KL divergence. They propose the hard and soft likelihood ratio for the confidence loss which has large gradients for high confidence predictions.
Review
Strength:
The paper introduces an information maximization loss for the test-time BN adaptation on unlabeled target mini-batch data. The method has been well-motivated by pointing out the limitations in SOTA methods.
The proposed formulation incorporates the diversity regularizer to avoid the trivial collapsed solutions on mini-batch data.
It presents the analysis of the gradients of different losses for confidence maximization. And, proposes to use the negative log-likelihood ratio loss [Yao 2020] for TTA which has non-vanishing gradients for high confidence predictions.
It proposes to jointly update the IT module and BN parameters for TTA
The paper provides comprehensive experimental results in the paper and appendix.
Weakness:
The proposed diversity loss is not very effective on mini-batch data. The results show the improvement is very marginal.
The paper should provide examples of failure cases, and more explanation & discussion about the issues in the last paragraph of section 3.
The novelty of the paper is relatively limited. IM loss, KL based Ldiv, IT module, and the negative log-likelihood ratio are all proposed in the previous works. |
ICLR | Title
Test-Time Adaptation to Distribution Shifts by Confidence Maximization and Input Transformation
Abstract
Deep neural networks often exhibit poor performance on data that is unlikely under the train-time data distribution, for instance data affected by corruptions. Previous works demonstrate that test-time adaptation to data shift, for instance using entropy minimization, effectively improves performance on such shifted distributions. This paper focuses on the fully test-time adaptation setting, where only unlabeled data from the target distribution is required. This allows adapting arbitrary pretrained networks. Specifically, we propose a novel loss that improves test-time adaptation by addressing both premature convergence and instability of entropy minimization. This is achieved by replacing the entropy by a non-saturating surrogate and adding a diversity regularizer based on batch-wise entropy maximization that prevents convergence to trivial collapsed solutions. Moreover, we propose to prepend an input transformation module to the network that can partially undo test-time distribution shifts. Surprisingly, this preprocessing can be learned solely using the fully test-time adaptation loss in an end-to-end fashion without any target domain labels or source domain data. We show that our approach outperforms previous work in improving the robustness of publicly available pretrained image classifiers to common corruptions on such challenging benchmarks as ImageNet-C.
1 INTRODUCTION
Deep neural networks achieve impressive performance on test data, which has the same distribution as the training data. Nevertheless, they often exhibit a large performance drop on test (target) data which differs from training (source) data; this effect is known as data shift (Quionero-Candela et al., 2009) and can be caused for instance by image corruptions. There exist different methods to improve the robustness of the model during training (Geirhos et al., 2019; Hendrycks et al., 2019; Tzeng et al., 2017). However, generalization to different data shifts is limited since it is infeasible to include sufficiently many augmentations during training to cover the excessively wide range of potential data shifts (Mintun et al., 2021a). Alternatively, in order to generalize to the data shift at hand, the model can be adapted during test-time. Unsupervised domain adaptation methods such as Vu et al. (2019) use both source and target data to improve the model performance during test-time. In general source data might not be available during inference time, e.g., due to legal constraints (privacy or profit). Therefore we focus on the fully test-time adaptation setting (Wang et al., 2020): the model is adapted to the target data during test time given only the arbitrary pretrained model parameters and unlabeled target data that share the same label space as the source data. We extend the work of Wang et al. (2020) by introducing a novel loss function, using a diversity regularizer, and prepending a parametrized input transformation module to the network. We show that our approach outperforms previous works and makes pretrained models robust against common corruptions on image classification benchmarks such as ImageNet-C (Hendrycks & Dietterich, 2019) and ImageNet-R (Hendrycks et al., 2020).
Sun et al. (2020) investigate test-time adaptation using a self-supervision task. Wang et al. (2020) and Liang et al. (2020) use the entropy minimization loss that uses maximization of prediction confidence as self-supervision signal during test-time adaptation. Wang et al. (2020) has shown that such loss performs better adaptation than a proxy task (Sun et al., 2020). When using entropy minimization, however, high confidence predictions do not contribute to the loss significantly anymore and thus provide little self-supervision. This is a drawback since high-confidence samples provide the most
trustworthy self-supervision. We mitigate this by introducing two novel loss functions that ensure that gradients of samples with high confidence predictions do not vanish and learning based on self-supervision from these samples continues. Our losses do not focus on minimizing entropy but on minimizing the negative log likelihood ratio between classes; the two variants differ in using either soft or hard pseudo-labels. In contrast to entropy minimization, the proposed loss functions provide non-saturating gradients, even when there are high confident predictions. Figure 1 provides illustration of the losses and the resulting gradients. Using these new loss functions, we are able to improve the network performance under data shifts in both online and offline adaptation settings.
In general, self-supervision by confidence maximization can lead to collapsed trivial solutions, which make the network to predict only a single or a set of classes independent of the input. To overcome this issue a diversity regularizer (Liang et al., 2020; Wu et al., 2020) can be used, that acts on a batch of samples. It encourages the network to make diverse class predictions on different samples. We extend the regularizer by including a moving average, in order to include the history of the previous batches and show that this stabilizes the adaptation of the network to unlabeled test samples. Furthermore we also introduce a parametrized input transformation module, which we prepend to the network. The module is trained in a fully test-time adaptation manner using the proposed loss function, and without using source data or target labels. It aims to partially undo the data shift at hand and helps to further improve the performance on image classification benchmark with corruptions.
Since our method does not change the training process, it allows to use any pretrained models. This is beneficial because any good performing pretrained network can be readily reused, e.g., a network trained on some proprietary data not available to the public. We show, that our method significantly improves performance of different pretrained models that are trained on clean ImageNet data.
In summary our main contributions are as follows: we propose non-saturating losses based on the negative log likelihood ratio, such that gradients from high confidence predictions still contribute to test-time adaptation. We extend diversity regularizer to its moving average to include the history of previous batch samples to prevent the model collapsing to trivial solutions. We also introduce an input transformation module, which partially undoes the data shift at hand. We show that the performance of different pretrained models can be significantly improved on ImageNet-C and ImageNet-R.
2 RELATED WORK
Common image corruptions are potentially stochastic image transformations motivated by real-world effects that can be used for evaluating a model’s robustness. One such benchmark, ImageNet-C (Hendrycks & Dietterich, 2019), contains simulated corruptions such as noise, blur, weather effects, and digital image transformations. Additionally, Hendrycks et al. (2020) proposed three data sets containing real-world distribution shifts, including ImageNet-R. Most proposals for improving robustness involve special training protocols, requiring time and additional resources. This includes data augmentation like Gaussian noise (Ford et al., 2019; Lopes et al., 2019; Hendrycks et al., 2020), CutMix (Yun et al., 2019), AugMix (Hendrycks et al., 2019), training on stylized images (Geirhos et al., 2019; Kamann et al., 2020) or against adversarial noise distributions (Rusak et al., 2020a). Mintun et al. (2021b) pointed out that many improvements on ImageNet-C are due to data augmentations which are too similar to the test corruptions, that is: overfitting to ImageNet-C occurs. Thus, the model might be less robust to corruptions not included in the test set of ImageNet-C.
Unsupervised domain adaptation methods train a joint model of source and target domain by crossdomain losses to find more general and robust features, e. g. optimize feature alignment (QuiñoneroCandela et al., 2008; Sun et al., 2017) between domains, adversarial invariance (Ganin & Lempitsky, 2015; Tzeng et al., 2017; Ganin et al., 2016; Hoffman et al., 2018), shared proxy tasks (Sun et al., 2019) or adapt entropy minimization via an adversarial loss (Vu et al., 2019). While these approaches are effective, they require explicit access to source and target data at the same time, which may not always be feasible. Our approach works with any pretrained model and only needs target data.
Test-time adaptation is a setting in which training (source) data is unavailable at test time. It is related to source-free adaptation, where several works use generative models, alter training (Kundu et al., 2020; Li et al., 2020b; Kurmi et al., 2021; Yeh et al., 2021), and require several thousand epochs to adapt to the target data (Li et al., 2020b; Yeh et al., 2021). Besides, there is another line of work (Sun et al., 2020; Schneider et al., 2020; Nado et al., 2021; Benz et al., 2021; Wang et al., 2020) that
interprets the common corruptions as data shift and aims to improve model robustness against these corruptions with an efficient test-time adaptation strategy that facilitates online adaptation. Such settings spare the cost of additional computational overhead. Our work also falls in this line of research and aims to adapt the model to common corruptions efficiently with both online and offline adaptation.
Sun et al. (2020) update feature extractor parameters at test-time via a self-supervised proxy task (predicting image rotations). However, Sun et al. (2020) alter the training procedure by including the proxy loss into the optimization objective as well, hence arbitrary pretrained models cannot be used directly for test-time adaptation. Inspired by the domain adaptation strategies (Maria Carlucci et al., 2017; Li et al., 2016), several works (Schneider et al., 2020; Nado et al., 2021; Benz et al., 2021) replace the estimates of Batch Normalization (BN) activation statistics with the statistics of the corrupted test images. Fully test time adaptation, studied by Wang et al. (2020) (TENT) uses entropy minimization to update the channel-wise affine parameters of BN layers on corrupted data along with the batch statistics estimates. SHOT (Liang et al., 2020) also uses entropy minimization and a diversity regularizer to avoid collapsed solutions. SHOT modifies the model from the standard setting by adopting weight normalization at the fully connected classifier layer during training to facilitate their pseudo labeling technique. Hence, SHOT is not readily applicable to arbitrary pretrained models.
We show that pure entropy minimization (Wang et al., 2020; Liang et al., 2020) as well as alternatives such as max square loss (Chen et al., 2019) and Charbonnier penalty (Yang & Soatto, 2020) results in vanishing gradients for high confidence predictions, thus inhibiting learning. Our work addresses this issue by proposing a novel non-saturating loss, that provides non-vanishing gradients for high confidence predictions. We show that our proposed loss function improves the network performance through test-time adaptation. In particular, performance on corruptions of higher severity improves significantly. Furthermore, we add and extend the diversity regularizer (Liang et al., 2020; Wu et al., 2020) to avoid collapse to trivial, high confidence solutions. Existing diversity regularizers (Liang et al., 2020; Wu et al., 2020) act on a batch of samples, hence the number of classes has to be smaller than the batch size. We mitigate this problem by extending the regularizer to a moving average version. Li et al. (2020a) also use a moving average to estimate the entropy of the unconditional class distribution but source data is used to estimate the gradient of the entropy. In contrast, our work does not need access to the source data since the gradient is estimated using only target data. Prior work Tzeng et al. (2017); Rusak et al. (2020b); Talebi & Milanfar (2021) transformed inputs by an additional module to overcome domain shift, obtain robust models, and also to learn to resize. In our work, we prepend an input transformation module to the model, but in contrast to former works, this module is trained purely at test-time to partially undo the data shift at hand to aid the adaptation.
3 METHOD
We propose a novel method for fully test-time adaptation. We assume that a neural network fθ with parameters θ is available that was trained on data from some distribution D, as well as a set of (unlabeled) samples X ∼ D′ from a target distribution D′ ≠ D (importantly, no samples from D are required). We frame fully test-time adaptation as a two-step process: (i) Generate a novel network gφ based on fθ, where φ denotes the parameters that are adapted. A simple variant for this is g = f and φ ⊆ θ (Wang et al., 2020). However, we propose a more expressive and flexible variant in Section 3.1. (ii) Adapt the parameters φ of g on X using an unsupervised loss function L. We propose two novel losses Lslr and Lhlr in Section 3.2 that have non-vanishing gradients for high-confidence self-supervision.
3.1 INPUT TRANSFORMATION
We propose to define the adaptable model as g = f ◦ d. That is: we preprend a trainable network d to f . The motivation for the additional component d is to increase expressivity of g such that it can learn to (partially) undo the domain shift D → D′. Specifically, we choose d(x) = γ · [τx+ (1− τ)rψ(x)] + β, where τ ∈ R, (β, γ) ∈ Rnin with nin being the number of input channels, rψ being a network with identical input and output shape, and · denoting elementwise multiplication. Specifically, β and γ implement a channel-wise affine transformation and τ implements a convex combination of unchanged input and the transformed input rψ(x). By choosing τ = 1, γ = 1, β = 0, we ensure d(x) = x and thus g = f at initialization. In principle, rψ can be chosen arbitrarily. Here, we choose rψ as a simple stack of 3× 3 convolutions, group normalization, and ReLUs (refer Sec. A.2 for details). However, exploring other choices would be an interesting avenue for future work.
Importantly, while the motivation for d is to learn to partially undo a domain shift D → D′, we train d end-to-end in the fully test-time adaptation setting on data X ∼ D′, without any access to samples from the source domain D, based on the losses proposed in Section 3.2. The modulation parameters of gφ are φ = (β, γ, τ, ψ, θ′), where θ′ ⊆ θ. That is, we adapt only a subset of the parameters θ of the pretrained network f . We largely follow Wang et al. (2020) in adapting only the affine parameters of normalization layers in f while keeping parameters of convolutional kernels unchanged. Additionally, batch normalization statistics (if any) are adapted to the target distribution.
Note that the proposed method is applicable to any pretrained network that contains normalization layers with a channel-wise affine transformation. For networks with no affine transformation layers, one can add such layers into f that are initialized to identity as part of model augmentation.
3.2 ADAPTATION OBJECTIVE
We propose a loss function L = Ldiv + δLconf for fully test-time network adaptation that consists of two components: (i) a term Ldiv that encourages the predictions of the network over the adaptation dataset X to match a target distribution pD′(y). This can help avoid test-time adaptation collapsing to too narrow distributions, such as always predicting the same or very few classes. If pD′(y) is (close to) uniform, it acts as a diversity regularizer. (ii) A term Lconf that encourages high confidence predictions on individual datapoints. We note that test-time entropy minimization (TENT) (Wang et al., 2020) fits into this framework by choosing Ldiv = 0 and Lconf as the entropy.
3.2.1 CLASS DISTRIBUTION MATCHING Ldiv
Assuming knowledge of the class distribution pD′(y) on the target domain D′, we propose to add a term to the loss that encourages the empirical distribution of (soft) predictions of gφ on X to match this distribution. Specifically, let p̂gφ(y) be an estimate of the distribution of (soft) predictions of gφ. We use the Kullback-Leibler divergence Ldiv = DKL(p̂gφ(y)|| pD′(y)) as loss term. In some applications information about the target class distribution is available, e.g. in medical data it might be known that there is a large class imbalance. In general this information is not available, and here we assume a uniform distribution of pD′(y), which corresponds to maximizing the entropy H(p̂gφ(y)). Similar assumption has been made in SHOT to circumvent the collapsed solutions.
Since the estimate p̂gφ(y) depends on φ, which is continuously adapted, it needs to be re-estimated on a per-batch level. Since re-estimating p̂gφ(y) from scratch would be computationally expensive, we propose to use a running estimate that tracks the changes of φ as follows: let pt−1(y) be the estimate at iteration t−1 and p^emp_t = (1/n) ∑_{k=1}^{n} ŷ^{(k)}, where ŷ^{(k)} are the predictions (confidences) of gφ on a mini-batch of n inputs x^{(k)} ∼ X. We update the running estimate via pt(y) = κ · sg(pt−1(y)) + (1−κ) · p^emp_t, where sg denotes the stop-gradient operation. The loss becomes Ldiv = DKL(pt(y) || pD′(y)) accordingly. Unlike Li et al. (2020a), our approach only requires target but no source data to estimate the gradient.
3.2.2 CONFIDENCE MAXIMIZATION Lconf
We motivate our choice of Lconf step-by-step from the (unavailable) supervised cross-entropy loss: for this, let ŷ = gφ(x) be the predictions (confidences) of model gφ and H(ŷ, y^r) = −∑_c y^r_c log ŷ_c be the cross-entropy between prediction ŷ and some reference y^r. Let the last layer of g be a softmax activation layer softmax. That is ŷ = softmax(o), where o are the network’s logits. We can rewrite the cross-entropy in terms of the logits o and a one-hot reference y^r as follows: H(softmax(o), y^r) = −o_{c_r} + log ∑_{i=1}^{n_cl} e^{o_i}, where c_r is the index of the 1 in y^r and n_cl is the number of classes.
When labels being available for the target domain (which we do not assume) in the form of a one-hot encoded reference yt for data xt, one could use the supervised cross-entropy loss by setting yr = yt and using Lsup(ŷ, yr) = H(ŷ, yr) = H(ŷ, yt). Since fully test-time adaptation assumes no label information, supervised cross-entropy loss is not applicable and other options for yr need to be used.
One option is (hard) pseudo-labels. That is, one defines the reference yr based on the network predictions ŷ via yr = onehot(ŷ), where onehot creates a one-hot reference with the 1 corresponding to the class with maximal confidence in ŷ. This results in Lpl(ŷ) = H(ŷ, onehot(ŷ)) = − log ŷc∗ , with c∗ = argmax ŷ. One disadvantage with this loss is that the (hard) pseudo-labels ignore uncertainty in the network predictions during self-supervision. This results in large gradient magnitudes with
respect to the logits |∂Lpl/∂o_{c∗}| being generated on data where the network has low confidence (see Figure 1). This is undesirable since it corresponds to the network being affected most by data points where the network’s self-supervision is least reliable1.
An alternative is to use soft pseudo-labels, that is y^r = ŷ. This takes uncertainty in network predictions into account during self-labelling and results in the entropy minimization loss of TENT (Wang et al., 2020): Lent(ŷ) = H(ŷ, ŷ) = H(ŷ) = −∑_c ŷ_c log ŷ_c. However, also for the entropy the logits’ gradient magnitude |∂Lent/∂o| goes to 0 when one of the entries in ŷ goes to 1 (see Figure 1). For a binary classification task, for instance, the maximal logits’ gradient amplitude is obtained for ŷ ≈ (0.82, 0.18). This implies that during later stages of test-time adaptation, where many predictions typically already have high confidence (significantly above 0.82), gradients are dominated by datapoints with relatively low confidence in self-supervision.
While both hard and soft pseudo-labels are clearly motivated, they are not optimal in conjunction with a gradient-based optimizer since the self-supervision from low confidence predictions dominates (at least during later stages of training). We address this issue by proposing two losses that increase the gradient amplitude from high confidence predictions. We argue that this leads to stronger selfsupervision (better gradient direction when averaged over the batch) than from the entropy loss (see also Sec. A.1 for an illustrative example supporting this claim) . The two losses are analogous to Lpl and Lent, but are not based on the cross-entropy H but on the negative log likelihood ratios:
R(ŷ, y^r) = −∑_c y^r_c log( ŷ_c / ∑_{i≠c} ŷ_i ) = −∑_c y^r_c (log ŷ_c − log ∑_{i≠c} ŷ_i) = H(ŷ, y^r) + ∑_c y^r_c log ∑_{i≠c} ŷ_i
Note that while the entropy H is lower bounded by 0, R can get arbitrarily small if y^r_c → 1 and the sum ∑_{i≠c} ŷ_i → 0 and thus log ∑_{i≠c} ŷ_i → −∞. This property will induce non-vanishing gradients for high confidence predictions.
The first loss we consider is the hard likelihood ratio loss that is defined similarly to the hard pseudo-labels loss Lpl:
Lhlr(ŷ) = R(ŷ, onehot(ŷ)) = − log( ŷ_{c∗} / ∑_{i≠c∗} ŷ_i ) = − log( e^{o_{c∗}} / ∑_{i≠c∗} e^{o_i} ) = −o_{c∗} + log ∑_{i≠c∗} e^{o_i},
1The prediction confidence for a datapoint can be interpreted as a proxy for its distance to the decision boundary. A low confidence prediction indicates that a datapoint appears to be close to the decision boundary and the model is less certain on which side of the decision boundary the datapoint should lie. We call this "low confidence self-supervision" since the direction of the gradient becomes ambiguous.
where c∗ = argmax ŷ. We note that ∂Lhlr/∂o_{c∗} = −1, thus also high-confidence self-supervision contributes equally to the maximum logit’s gradient. This loss was also independently proposed as the negative log likelihood ratio loss by Yao et al. (2020) as a replacement for the fully-supervised cross-entropy loss in classification tasks. However, to the best of our knowledge, we are the first to motivate and identify the advantages of this loss for self-supervised learning and test-time adaptation due to its non-saturating gradient property.
In addition to Lhlr, we also account for uncertainty in network predictions during self-labelling in a similar way as for the entropy loss Lent, and propose the soft likelihood ratio loss:
Lslr(ŷ) = R(ŷ, ŷ) = −∑_c ŷ_c · log( ŷ_c / ∑_{i≠c} ŷ_i ) = ∑_c ŷ_c (−o_c + log ∑_{i≠c} e^{o_i})
We note that as ŷc∗ → 1, Lslr(ŷ) → Lhlr(ŷ). Thus the asymptotic behavior of the two likelihood ratio losses for high confidence predictions is the same. However, the soft likelihood ratio loss creates lower amplitude gradients for low confidence self-supervision. We provide illustrations of the discussed losses and the resulting logits’ gradients in Figure 1. Furthermore, an illustration of other losses like the max square loss and Charbonnier penalty can be found in Sec. A.7.
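For concreteness, the following sketch computes both losses directly from the logits, following the logit forms of the equations above; the loop-free masking is written for clarity rather than memory efficiency.

import torch

def hlr_loss(logits):
    """Hard likelihood ratio: -o_{c*} + log sum_{i != c*} exp(o_i), averaged over the batch."""
    c_star = logits.argmax(dim=1, keepdim=True)
    without_argmax = logits.scatter(1, c_star, float("-inf"))           # drop the argmax class
    return (-logits.gather(1, c_star).squeeze(1) + torch.logsumexp(without_argmax, dim=1)).mean()

def slr_loss(logits):
    """Soft likelihood ratio: sum_c y_hat_c * (-o_c + log sum_{i != c} exp(o_i)), averaged over the batch."""
    probs = logits.softmax(dim=1)
    n_cls = logits.shape[1]
    diag = torch.eye(n_cls, dtype=torch.bool, device=logits.device)
    expanded = logits.unsqueeze(1).expand(-1, n_cls, -1).masked_fill(diag, float("-inf"))
    lse_without_c = torch.logsumexp(expanded, dim=2)                     # (batch, n_cls): excludes class c
    return (probs * (-logits + lse_without_c)).sum(dim=1).mean()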
We note that both likelihood ratio losses would typically encourage the network to simply scale its logits larger and larger, since this would reduce the loss even if the ratios between the logits remain constant. However, when finetuning an existing network and restricting the layers that are adapted such that the logits remain approximately scale-normalized, these losses can provide a useful and non-vanishing gradient signal for network adaptation. We achieve this approximate scale normalization by freezing the top layers of the respective networks. In this case, normalization layers such as batch normalization prohibit “logit explosion”. However, predicted confidences can presumably become overconfident; calibrating confidences in a self-supervised test-time adaptation setting is an open and important direction for future work.
4 EXPERIMENTAL SETTINGS
Datasets We evaluate our method on image classification datasets for corruption robustness and domain adaptation. We evaluate on the challenging benchmark ImageNet-C (Hendrycks & Dietterich, 2019), which includes a wide variety of 15 different synthetic corruptions with 5 severity levels that attribute to data shift. This benchmark also includes 4 additional corruptions as validation data. For domain adaptation, we choose ImageNet trained models to adapt to ImageNet-R proposed by Hendrycks et al. (2020). ImageNet-R comprises 30,000 image renditions for 200 ImageNet classes. Domain adaptation on VisDA-C (Peng et al., 2017) and digit classification can be found in Sec. A.6.
Models Our method operates in a fully test-time adaptation setting that allows us to use any arbitrary pretrained model. We use publicly available ImageNet pretrained models ResNet50, DenseNet121, ResNeXt50, MobileNetV2 from torchvision Torch-Contributors (2020). We also test on a robust ResNet50 model trained using DeepAugment+AugMix 2 Hendrycks et al. (2020).
Baseline for fully test-time adaptation Since TENT from Wang et al. (2020) outperformed competing methods and fits the fully test-time adaptation setting, we consider it as a baseline and compare our results to this approach. Similar to TENT, we also adapt model features by estimating the normalization statistics and optimize only the channel-wise affine parameters on the target distribution.
Settings We conduct test-time adaptation on a target distribution with both online and offline updates using the Adam optimizer with learning rate 0.0006 with batch size 64. We set the weight of Lconf in our loss function to δ = 0.025 and κ = 0.9 in the running estimate pt(y) of Ldiv (we investigate the effect of κ in the Sec. A.4). Similar to SHOT (Liang et al., 2020), we also choose the target distribution pD′(y) in Ldiv as a uniform distribution over the available classes. For TENT, we use SGD with momentum 0.9 at learning rate 0.00025 with batch size 64. These values correspond to the ones of Wang et al. (2020); alternative settings for TENT did not improve performance. For offline updates, we adapt the models for 5 epochs using a cosine decay schedule of the learning rate. We found that the models converge during 3 to 5 epochs and do not improve further. Similar to Wang et al. (2020), we also control for ordering by data shuffling and sharing the order across the methods.
2From https://github.com/hendrycks/imagenet-r. Owner permitted to use it for research/commercial purposes.
Note that all the hyperparameters are tuned solely on the validation corruptions of ImageNet-C that are disjoint from the test corruptions. As discussed in Section 3.2.2, we freeze all trainable parameters in the top layers of the networks to prohibit “logit explosion”. Normalization statistics are still updated in these layers. Sec. A.3 provides more details regarding frozen layers in different networks.
Furthermore, we prepend a trainable input transformation module d (cf. Sec. 3.1) to the network to partially counteract the data-shift. Note that the parameters of this module discussed in Sec. 3.1 are trainable and subject to optimization. This module is initialized to operate as an identity function prior to adaptation on a target distribution by choosing τ = 1, γ = 1, and β = 0. We adapt the parameters of this module along with the channel-wise affine transformations and normalization statistics in an end-to-end fashion, solely using our proposed loss function along with the optimization details mentioned above. The architecture of this module is discussed in Sec. A.2.
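Putting these pieces together, one online adaptation step could look roughly as follows. The helpers InputTransformation, adaptable_parameters, RunningDiversityLoss and slr_loss refer to the illustrative sketches elsewhere in this document and are hypothetical names, not a released API.

import torch
import torchvision

pretrained = torchvision.models.resnet50(pretrained=True)
transform = InputTransformation()                               # identity at initialization
model = torch.nn.Sequential(transform, pretrained).train()      # train(): BN uses batch statistics
params = list(transform.parameters()) + adaptable_parameters(pretrained, frozen_prefixes=("layer4",))
optimizer = torch.optim.Adam(params, lr=6e-4)
diversity = RunningDiversityLoss(num_classes=1000, kappa=0.9)
delta = 0.025

def adaptation_step(batch):
    """Adapt on one unlabeled target batch, then predict on it (online setting)."""
    logits = model(batch)
    loss = diversity(logits) + delta * slr_loss(logits)         # L = L_div + delta * L_conf
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()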
Since Ldiv is independent of Lconf, we also propose to combine Ldiv with TENT, i. e. L = Ldiv +Lent. We denote this as TENT+ and also set κ = 0.9 here. Note that TENT optimizes all channel-wise affine parameters in the network (since entropy is saturating and does not cause logit explosion). For a fair comparison to our method, we also freeze the top layers of the networks in TENT+. We show that adding Ldiv and freezing top layers significantly improves the networks performance over TENT. Note that SHOT (Liang et al., 2020) is the combination of TENT, batch-level diversity regularizer, and their pseudo labeling strategy. TENT+ can be seen as a variant of SHOT but without the pseudo labeling. Please refer to Sec. A.5 for the test-time adaptation of pretrained models with SHOT.
Note that each corruption and severity in ImageNet-C is treated as a different target distribution and we reset model parameters to their pretrained values before every adaptation. We run our experiments for three times with random seeds (2020, 2021, 2022) in PyTorch and report the average accuracies.
5 RESULTS
Evaluation on ImageNet-C We adapt different models on the ImageNet-C benchmark using TENT, TENT+, and both hard likelihood ratio (HLR) and soft likelihood ratio (SLR) losses in an online adaptation setting. Figure 2 (top row) depicts the mean corruption accuracy (mCA%) of each model computed across all the corruptions and severity levels. It can be observed that TENT+ improves over TENT, showcasing the importance of a diversity regularizer Ldiv. Importantly, our methods HLR and SLR outperform TENT and TENT+ across DenseNet121, MobileNetV2, ResNet50, ResNeXt50 and perform comparable with TENT+ on robust ResNet50-DeepAugment+Augmix model. This shows that the mCA% of robust DeepAugment+Augmix model can be further increased from 58% (before adaptation) to 67.5% using test-time adaptation techniques. Here, the average of mCA obtained from three different random seeds are depicted along with the error bars. These smaller error bars represent that the test-time adaptation results are not sensitive to the choice of random seed.
We also illustrate the performance of ResNet50 on the highest severity level across all 15 test corruptions of ImageNet-C in Table 1. Here, online adaptation results along with the offline adaptation on epoch 1 and 5 are reported. It can be seen that online adaptation and single epoch of test-time
adaptation improves the performance significantly and makes minor improvements until epoch 5. TENT adaptation for more than one epoch results in reduced performance, and TENT with Ldiv (TENT+) prevents this behavior. Both HLR and SLR clearly and consistently outperform TENT / TENT+ on the ResNet50; note also that SLR outperforms HLR. We also compare our results with the hard pseudo-labels (PL) objective and with an oracle setting where the ground-truth labels of the target data are used for adapting the model in a supervised manner (GT). Note that this oracle setting is not of practical importance but illustrates the empirical upper bound on fully test-time adaptation performance under the chosen modulation parametrization.
ImageNet-R We online adapt different models on ImageNet-R and depict the results in Figure 2 (middle row). Results show that HLR and SLR clearly outperform TENT and TENT+ and significantly improve performance of all the models, including the model pretrained with DeepAugment+Augmix.
Evaluation with data subsets Above we evaluate the model on the same data that is also used for the test-time adaptation. Here, we test model generalization by adapting on a subset of target data
and evaluate the performance on the whole dataset (in offline setting), which also includes unseen data that is not used for adaptation. We conduct two case studies: (i) adapt on the data from a subset of ImageNet classes and evaluate the performance on the data from all the classes. (ii) Adapt only on a subset of data from each class and test on all seen and unseen samples from the whole dataset.
Figure 3 illustrates generalization of a ResNet50 adapted on different proportions of the data across different corruptions, both in terms of classes and samples. We observe that adapting a model on a small subset of samples and classes is sufficient to achieve reasonable accuracy on the whole target data. This suggests that the adaptation actually learns to compensate for the data shift rather than overfitting to the adapted samples or classes. The performance of TENT decreases as the number of classes/samples increases, because Lent can converge to trivial collapsed solutions and more data corresponds to more update steps during adaptation. Adding Ldiv, as in TENT+, stabilizes the adaptation process and reduces this issue. Reported are averages over random seeds with error bars.
Input transformation We investigate whether the input transformation (IT) module, trained end-toend with a ResNet50 and SLR loss on data of the respective distortion without seeing any source (undistorted) data, can partially undo certain domain shifts of ImageNet-C and also increase accuracy on corrupted data. We measure domain shift via the structural similarity index measure (SSIM) (Wang et al., 2004) between the clean image (unseen by the model) and its distorted version/the output of IT on the distorted version. Following offline adaptation setting, Table 2 shows that IT increases the SSIM considerably on certain distortions such as Impulse, Contrast, Snow, and Frost. IT increases SSIM also for other types of noise distortions, while it slightly reduces SSIM for the blur distortions, Elastic, Pixelate, and JPEG. When combined with SLR, IT considerably increases accuracy on distortions for which also SSIM increased significantly (for instance +20 percent points on Impulse, +4 percent points on Contrast) and never reduces accuracy by more than 0.11 percent points. More results on online and offline adaptation with TENT / TENT+ can be found in Table A3.
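The SSIM comparison itself can be reproduced with a standard implementation such as scikit-image; the short sketch below is illustrative and assumes HxWxC images scaled to [0, 1] and scikit-image >= 0.19 (for the channel_axis argument).

import numpy as np
from skimage.metrics import structural_similarity as ssim

def domain_shift_ssim(clean, distorted, transformed):
    """SSIM of the distorted image and of the IT-module output, both against the unseen clean image."""
    kwargs = dict(channel_axis=-1, data_range=1.0)
    return ssim(clean, distorted, **kwargs), ssim(clean, transformed, **kwargs)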
Clean images As a sanity check, we investigate the effect of test-time adaptation when target data comes from the same distribution as training data. For this, we online adapt pretrained models on clean validation data of ImageNet. The results in Figure 2 (bottom row) depict that the performance of SLR/HLR adapted models drops by 0.8 to 1.8 percent points compared to the pretrained model. We attribute this drop to self-supervision being less reliable than the original full supervision on indistribution training data. The drop is smaller for TENT and TENT+, presumably because predictions on in-distribution target data are typically highly confident such that there is little gradient and thus little change to the pretrained networks by TENT. In summary, while self-supervision by confidence maximization is a powerful method for adaptation to domain shift, the observed drop when adapting to data from the source domain indicates that there is “no free lunch” in test-time adaptation.
6 CONCLUSION
We propose a method to improve corruption robustness and domain adaptation of models in a fully test-time adaptation setting. Unlike entropy minimization, our proposed loss functions provide non-vanishing gradients for high confident predictions and thus attribute to improved adaptation in a self-supervised manner. We also show that additional diversity regularization on the model predictions is crucial to prevent trivial solutions and stabilize the adaptation process. Lastly, we introduce a trainable input transformation module that partially refines the corrupted samples to support the adaptation. We show that our method improves corruption robustness on ImageNet-C and domain adaptation to ImageNet-R on different ImageNet models. We also show that adaptation on a small fraction of data and classes is sufficient to generalize to unseen target data and classes.
7 ETHICS STATEMENT
We abide by the general ethical principles listed in the ICLR code of ethics. Our work does not include the study of human subjects or dataset releases, and does not raise potential conflicts of interest, discrimination/bias/fairness concerns, or privacy and security issues. Our non-saturating loss increases accuracy but might result in overconfident predictions, which can cause harm in safety-critical downstream applications when not properly calibrated. At the same time, self-supervised confidence maximization might amplify bias in pretrained models. We hope that the diversity regularizer in the loss partially compensates for this issue.
8 REPRODUCIBILITY STATEMENT
We provide complete details of our experimental setup for reproducibility. Sec. 4 provides details of the network architectures, optimizer, learning rate, batch size, choice of hyperparameters of our method and the random seeds used for generating the results. Sec. A.3 provides more details regarding frozen layers in different networks. Sec. A.2 shows the structure of input transformation module used in this work. We will also provide a link to an anonymous downloadable source code as a comment directed to the reviewers and area chairs in the discussion forum.
A APPENDIX
A.1 ILLUSTRATIVE EXAMPLE OF LOG LIKELIHOOD RATIO ADAPTATION OBJECTIVE
A simple 1D example is devised to illustrate the benefits of the proposed log likelihood ratio losses as a test-time adaptation objective. Consider (unlabeled) data points that are sampled from the following bimodal distribution: 0.5 · N(−1, 3) + 0.5 · N(+1, 3), that is: half of the samples come from a normal distribution with mean −1 and the other half from a normal distribution with mean +1 (both having standard deviation 3). We can interpret these two components of the mixture distribution as corresponding to data of two different classes, but class labels are of course unavailable during unsupervised test-time adaptation.
We assume a simple logistic model of the form pθ(y = 1|x) = 1/(1 + e^{−(x+θ)}), where x is the value of the data sample and θ is a scalar offset that determines the decision boundary. By construction, we know that the minimum density of the mixture distribution on [−1, 1] is at 0. Since confidence maximization aims at moving the decision boundary to regions in input space with minimum data density (in this case to 0), we can compare different self-supervised confidence maximization losses in the finite data regime as follows: for every finite data sample with N data points {xi} for i = 1, . . . , N and loss function L, we solve θ∗(L) = argmin_{θ∈[−1,1]} L(θ, {xi}), where the loss (such as entropy or SLR) is averaged over all data points. The absolute value |θ∗(L)| then gives an estimate of the error of the decision boundary parameter for the given data set and loss function. Table A1 provides this error for different loss functions and different numbers of data samples. It can be seen that SLR and HLR clearly outperform the entropy loss (TENT) in all data regimes. The difference between SLR and HLR is generally very small. While SLR seems to be consistently slightly better than HLR, this difference is not statistically significant. We attribute the superiority of SLR/HLR compared to entropy to the fact that all data points have a non-saturating loss, regardless of their distance to the decision boundary. Thus, all data contributes to localizing the decision boundary, while for saturating losses such as the entropy, effectively only "nearby" points determine the decision boundary. This example illustrates that our proposed non-saturating losses are beneficial over the entropy loss for self-supervised confidence maximization.
Table A1: Illustrates the error of the decision boundary parameter for different loss functions and different number of samples averaged over 100 runs (shown are mean and standard error of mean).
#samples 100 200 500 1000 2000 10000 20000
Entropy 0.487±0.031 0.364±0.029 0.230±0.018 0.152±0.013 0.117±0.009 0.052±0.004 0.033±0.003
HLR 0.357±0.023 0.234±0.018 0.145±0.012 0.094±0.008 0.071±0.006 0.032±0.002 0.022±0.002
SLR 0.332±0.022 0.214±0.017 0.140±0.011 0.088±0.008 0.067±0.006 0.032±0.002 0.021±0.002
A.2 INPUT TRANSFORMATION MODULE
Note that we define our adaptable model as g = f ◦ d, where d is a trainable network prepended to a pretrained neural network f (e.g., pretrained ResNet50). We choose d(x) = γ ·[τx+ (1− τ)rψ(x)]+ β, where τ ∈ R, (β, γ) ∈ Rnin with nin being the number of input channels, rψ being a network with identical input and output shape, and · denoting elementwise multiplication. Here, β and γ implement a channel-wise affine transformation and τ implements a convex combination of unchanged input and the transformed input rψ(x). We set τ = 1, γ = 1, and β = 0, to ensure that d(x) = x and thus g = f at initialization. In principle, rψ can be chosen arbitrarily. Here, we choose rψ as a simple stack of 3× 3 convolutions with stride 1 and padding 1, group normalization, and ReLUs without any upsampling/downsampling layers. Specifically, the structure of g is illustrated in Figure A1.
In addition to the results reported in Table 2, we also compare TENT and TENT+ with and without Input Transformation (IT) module on ResNet50 for all corruptions at severity level 5 in both online adaptation setting and offline adaptation with 5 epochs in Table A3. Furthermore, we also present the qualitative results of the image transformations from the input transformation module adapted with SLR (offline setting) in Figure A2.
Table A2: Ablation study on the components of input transformation module on ResNet50 for all corruptions at severity level 5.
Corruption Gauss Shot Impulse Defocus Glass Motion Zoom Snow Frost Fog Bright Contrast Elastic Pixel JPEG mean
x 41.52 42.90 44.07 41.69 40.78 54.76 56.59 57.35 51.01 63.53 68.72 50.65 61.49 63.46 58.32 53.12
rψ(x) 13.17 26.57 28.81 5.09 3.61 30.61 49.79 53.73 45.96 58.82 65.79 53.73 56.77 60.14 53.38 40.40
τx + (1 − τ)rψ(x) 43.13 46.43 56.25 41.80 40.90 55.75 56.65 58.55 51.72 63.59 68.83 53.89 61.50 63.73 58.51 54.74
γ · [τx + (1 − τ)rψ(x)] + β 43.18 46.24 56.21 41.91 40.89 55.79 56.66 58.50 51.72 63.56 68.83 54.26 61.49 63.76 58.52 54.76
Table A3: Test-time adaptation of ResNet50 on ImageNet-C at highest severity level 5 with and without Input Transformation (IT) module. Reported are the mean accuracy(%) across three random seeds (2020/2021/2022). While IT also improves performance when combined with TENT+, it is still clearly outperformed by SLR+IT.
Method Gauss Shot Impulse Defocus Glass Motion Zoom Snow Frost Fog Bright Contrast Elastic Pixel JPEG
Online adaptation (evaluation on a batch directly after adaptation on the batch)
TENT 28.60 31.06 30.54 29.09 28.07 42.32 50.39 48.01 42.05 58.40 68.20 27.25 55.68 59.46 53.64
TENT + IT 28.99 31.73 31.15 28.87 27.85 42.43 50.36 48.02 41.95 58.37 68.19 24.35 55.68 59.49 53.57
TENT+ 29.09 31.65 30.68 29.33 28.65 42.32 50.32 48.09 42.54 58.39 68.23 31.43 55.90 59.46 53.68
TENT+ + IT 29.48 32.34 31.38 29.06 28.42 42.43 50.33 48.11 42.47 58.40 68.20 32.11 55.87 59.49 53.64
SLR (ours) 35.11 37.93 36.83 35.13 35.13 48.29 53.45 52.68 46.52 60.74 68.40 44.78 58.74 61.13 55.97
SLR + IT (ours) 36.19 39.17 40.46 35.17 34.87 48.67 53.62 52.71 46.93 60.66 68.30 46.55 58.79 61.27 55.93
Evaluation after epoch 5
TENT 30.64 33.80 34.72 30.13 29.05 49.08 53.63 52.86 38.47 61.13 68.81 10.72 59.25 62.15 56.44
TENT + IT 31.92 36.02 38.14 30.44 28.68 49.04 53.59 52.99 38.76 61.14 68.84 13.52 59.23 62.15 56.56
TENT+ 35.19 38.12 37.43 34.82 34.95 50.33 54.24 53.88 46.28 61.50 69.07 29.87 60.01 62.61 57.09
TENT+ + IT 36.13 39.84 41.03 34.62 34.72 50.33 54.10 53.91 46.46 61.54 69.07 30.22 59.95 62.72 57.11
SLR (ours) 41.52 42.90 44.07 41.69 40.78 54.76 56.59 57.35 51.01 63.53 68.72 50.65 61.49 63.46 58.32
SLR+IT (ours) 43.09 44.39 64.05 41.98 40.99 55.73 56.75 58.56 51.68 63.64 68.85 55.01 61.32 63.59 58.24
A.2.1 CONTRIBUTION OF EACH COMPONENT IN INPUT TRANSFORMATION MODULE
Table A2 shows the results of the ablation study on the components of the input transformation module on ResNet50 for all corruptions at severity level 5, adapted with SLR for 5 epochs. The ablation study includes: (1) no input transformation module, d(x) = x; (2) only the network, d(x) = rψ(x); (3) including τ; (4) including the channel-wise affine transformation γ and β. We observe that transforming the inputs with the network rψ alone drops the performance; the convex combination with τ is needed to recover it. The additional channel-wise affine transformations did not bring further consistent improvements and can be omitted from the transformation module. Exploring other architectural choices and training (or pretraining) strategies for the input transformation module would be an interesting avenue for future work.
A.3 FROZEN LAYERS IN DIFFERENT NETWORKS
As discussed in Section 3.2.2, we freeze all trainable parameters in the top layers of the networks to prohibit “logit explosion”. That is, we do not optimize the channel-wise affine transformations of the top layers, but normalization statistics are still estimated. Similar to the hyperparameters of the test-time adaptation settings, the choice of these layers is made using ImageNet-C validation data. We list the frozen layers of each architecture below (the naming convention of these layers follows the model definitions in torchvision); a short code sketch of this freezing step is given after the list:
• DenseNet121 - features.denseblock4, features.norm5.
• MobileNetV2 - features.16, features.17, features.18.
• ResNeXt50, ResNet50 and ResNet50 (DeepAugment+Augmix) - layer4.
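A minimal sketch of this freezing step for ResNet50, assuming the torchvision naming above, could look as follows; only the affine parameters of normalization layers outside the frozen block remain trainable, while normalization statistics are still updated everywhere. The learning rate is an illustrative placeholder.

```python
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet50(pretrained=True)
model.train()                      # normalization statistics keep being estimated
for p in model.parameters():       # adapt only normalization affine parameters
    p.requires_grad_(False)

params = []
for name, module in model.named_modules():
    if isinstance(module, nn.BatchNorm2d) and not name.startswith("layer4"):
        for p in module.parameters():   # channel-wise weight (gamma) and bias (beta)
            p.requires_grad_(True)
            params.append(p)

optimizer = torch.optim.Adam(params, lr=1e-3)
```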
A.3.1 RESULTS WITHOUT FREEZING THE TOP LAYERS
We mentioned that the proposed losses could alternatively encourage the network to scale the logits ever larger while still reducing the loss. However, we did not find any considerable empirical differences in the explored settings when adapting the model with or without freezing the top layers. We found that adapting the model with and without freezing the top layers has comparable performance in both the online and offline adaptation settings, as shown in Table A4. However, we would still recommend freezing the top-most layers as the default choice to be on the safe side. These results indicate that the early layers capture the distribution shift sufficiently well to improve the model adaptation.
Table A4: Comparing the online and offline adaptation results with and without freezing the affine parameters of top normalization layers of ResNet50 at severity 5. Here, "Freeze" and "NoFreeze" refer to the setting with and without freezing the top affine layers respectively.
Corruption Gauss Shot Impulse Defocus Glass Motion Zoom Snow Frost Fog Bright Contrast Elastic Pixel JPEG mean
Online evaluation
TENT+ NoFreeze 29.05 31.32 30.32 28.95 28.29 42.37 50.45 48.12 42.21 58.51 68.29 28.17 55.57 59.47 53.46 43.63
TENT+ Freeze 29.21 31.54 30.55 29.17 28.60 42.54 50.47 48.18 42.51 58.50 68.30 31.25 55.76 59.54 53.62 43.98
HLR NoFreeze 33.73 36.50 35.63 33.99 33.88 46.55 52.76 51.44 45.82 59.74 67.37 43.19 57.69 59.77 54.95 47.53
HLR Freeze 33.10 36.08 34.74 33.21 33.31 46.36 52.77 51.42 45.47 60.01 68.07 42.75 58.02 60.42 55.34 47.40
SLR NoFreeze 35.61 38.37 37.50 35.83 35.81 48.29 53.61 52.62 46.85 60.42 67.71 44.93 58.43 60.56 55.65 48.81
SLR Freeze 35.11 37.93 36.83 35.13 35.13 48.29 53.45 52.68 46.52 60.74 68.40 44.78 58.74 61.13 55.97 48.72
Offline evaluation
TENT+ NoFreeze 32.03 35.33 35.28 31.92 31.27 49.20 53.79 53.01 40.37 61.22 68.79 19.38 59.25 62.20 56.51 45.97
TENT+ Freeze 35.19 38.12 37.43 34.82 34.95 50.33 54.24 53.88 46.28 61.50 69.07 29.87 60.01 62.61 57.09 48.35
HLR NoFreeze 41.60 43.80 43.89 42.21 41.50 53.82 56.21 56.71 50.83 62.74 67.87 51.34 60.65 62.58 57.70 52.89
HLR Freeze 41.37 44.04 43.68 41.74 41.09 54.26 56.43 57.03 50.81 63.05 68.29 50.98 61.15 63.08 58.13 53.0
SLR NoFreeze 41.45 43.95 44.26 42.56 41.60 54.25 56.13 56.72 50.92 62.97 68.02 50.99 60.90 62.83 57.86 53.02
SLR Freeze 41.52 42.90 44.07 41.69 40.78 54.76 56.59 57.35 51.01 63.53 68.72 50.65 61.49 63.46 58.32 53.12
A.4 EFFECT OF κ
Note that the running estimate in Ldiv prevents the model from collapsing to trivial solutions, i.e., predicting only a single class or a small set of classes as outputs regardless of the input samples. Ldiv encourages the model to match its empirical distribution of predictions to the class distribution of the target data (a uniform distribution in our experiments). Such diversity regularization is crucial because there is no direct supervision attributing samples to different classes, and it thus helps avoid collapsed trivial solutions. In Figure A3, we investigate different values of κ on the validation corruptions of ImageNet-C to study its effectiveness on our approach. It can be observed that both HLR and SLR without Ldiv lead to collapsed solutions (e.g., accuracy drops to 0%) on some of the corruptions, and the performance gains are not consistent across all the corruptions. On the other hand, Ldiv with κ = 0.9 remains consistent and improves the performance across all the corruptions.
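The sketch below illustrates one plausible implementation of such a running-estimate diversity term. The specific functional form used here (a KL divergence between the running estimate and a uniform target) is an assumption that is consistent with the description above; the paper's exact definition of Ldiv may differ in detail.

```python
import torch
import torch.nn.functional as F

class RunningDiversity:
    """Diversity regularizer with a running estimate p_t(y):
    p_t = kappa * p_{t-1} + (1 - kappa) * mean batch softmax."""

    def __init__(self, num_classes, kappa=0.9):
        self.kappa = kappa
        self.p_t = torch.full((num_classes,), 1.0 / num_classes)

    def __call__(self, logits):
        batch_mean = F.softmax(logits, dim=1).mean(dim=0)
        # Gradients flow through the current batch; the old estimate is a constant.
        self.p_t = self.kappa * self.p_t.detach() + (1.0 - self.kappa) * batch_mean
        uniform = torch.full_like(self.p_t, 1.0 / self.p_t.numel())
        # Assumed form: KL(p_t || uniform), pushing predictions to cover all classes.
        return (self.p_t * (self.p_t.clamp_min(1e-12).log() - uniform.log())).sum()
```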
A.5 TEST-TIME ADPTATION OF PRETRAINED MODELS WITH SHOT
Following SHOT (Liang et al., 2020), we use their pseudo labeling strategy on the ImageNet-pretrained ResNet50 in combination with TENT+, HLR and SLR. Note that TENT+ and the pseudo labeling strategy jointly form the method SHOT. The pseudo labeling strategy starts after the 1st epoch and the pseudo labels are thereafter recomputed at every epoch. The weight for the loss computed on the pseudo labels is set to 0.3, similar to (Liang et al., 2020). Different values for this weight were explored and 0.3 was found to perform best. Table A6 compares the results of the methods with and without the pseudo labeling strategy. It can be observed that the results with the pseudo labeling strategy are worse than those obtained without it.
We further modified the pretrained ResNet50 by following the network modifications suggested in (Liang et al., 2020), which include adding a bottleneck layer with BatchNorm and applying weight norm on the linear classifier along with smooth label training to facilitate the pseudo labeling strategy. Table A7 shows that the pseudo labeling strategy on such a network improves the results of TENT+ from epoch 1 to epoch 5. However, no improvements are observed for SLR. Moreover, Table A8 shows that not applying the pseudo labeling strategy on the same network performs better than applying it. Finally, the results without pseudo labeling from Tables A6 and A8 show that the additional modifications to ResNet50 do not improve the performance when compared to the standard ResNet50.
A.6 DOMAIN ADAPTATION ON VISDA-C AND DIGIT CLASSIFICATION
VisDA-C: We extended our experiments to VisDA-C. We followed a network architecture similar to SHOT (Liang et al., 2020) and evaluated TENT+ and our SLR loss function with the diversity regularizer. Similar to ImageNet-C, we adapted only the channel-wise affine parameters of the batch norm layers for 5 epochs with the Adam optimizer and a cosine learning-rate decay schedule with initial value 2e−5. Here, the batch size is set to 64, the weight of Lconf in our loss function to δ = 0.25, and κ = 0 in the running estimate pt(y) of Ldiv, since the number of classes in this dataset (12 classes) is smaller than the batch size. Setting κ = 0 yields the batch-wise diversity regularizer. Table A9 shows
Table A5: Test-time adaptation of ResNet50 on ImageNet-C at highest severity level 5. Same as Table 1 with error bars.
name Epoch 1 Epoch 5
corruption No adaptation PL TENT TENT+ HLR SLR TENT TENT+ HLR SLR
Gauss 2.44 2.44 32.44±0.10 33.75±0.09 38.39±0.25 39.51±0.23 30.64±0.51 35.19±0.17 41.37±0.09 41.52±0.08
Shot 2.99 2.99 35.01±0.17 36.38±0.19 41.11±0.13 42.09±0.26 33.80±0.74 38.12±0.10 44.04±0.09 42.90±0.08
Impulse 1.96 1.96 34.77±0.09 35.67±0.15 40.28±0.20 41.58±0.04 34.72±1.01 37.43±0.09 43.68±0.06 44.07±0.06
Defocus 17.92 17.92 32.40±0.10 33.43±0.14 38.25±0.32 39.35±0.13 30.13±0.61 34.82±0.25 41.74±0.12 41.69±0.07
Glass 9.82 9.82 31.62±0.15 33.25±0.01 38.18±0.08 39.02±0.09 29.05±0.21 34.95±0.13 41.09±0.17 40.78±0.08
Motion 14.78 14.78 47.23±0.11 47.66±0.12 51.63±0.08 52.67±0.25 49.08±0.08 50.33±0.07 54.26±0.02 54.76±0.04
Zoom 22.50 22.50 53.09±0.06 53.20±0.07 55.55±0.06 55.80±0.07 53.63±0.16 54.24±0.06 56.43±0.07 56.59±0.05
Snow 16.89 16.89 51.61±0.05 52.06±0.09 55.45±0.11 55.92±0.06 52.86±0.13 53.88±0.07 57.03±0.12 57.35±0.03
Frost 23.31 23.31 43.26±0.30 44.85±0.20 48.96±0.07 49.64±0.14 38.47±0.50 46.28±0.27 50.81±0.08 51.01±0.02
Fog 24.43 24.43 60.42±0.08 60.60±0.05 62.19±0.03 62.62±0.04 61.13±0.08 61.50±0.05 63.05±0.04 63.53±0.08
Bright 58.93 58.93 68.85±0.02 68.93±0.03 68.17±0.01 68.47±0.05 68.81±0.06 69.07±0.06 68.29±0.09 68.72±0.10
Contrast 5.43 5.43 24.39±0.98 33.43±0.77 49.47±0.20 50.27±0.08 10.72±0.32 29.87±1.36 50.98±2.54 50.65±0.55
Elastic 16.95 16.95 58.53±0.05 58.94±0.05 60.34±0.18 60.80±0.08 59.25±0.06 60.01±0.02 61.15±0.04 61.49±0.07
Pixel 20.61 20.61 61.62±0.06 61.75±0.07 62.51±0.10 63.01±0.08 62.15±0.04 62.61±0.08 63.08±0.06 63.46±0.08
JPEG 31.65 31.65 56.00±0.09 56.21±0.05 57.42±0.13 57.80±0.04 56.44±0.07 57.09±0.02 58.13±0.09 58.32±0.05
Table A6: Test-time adaptation of ResNet50 on ImageNet-C at highest severity level 5 with and without the pseudo labeling strategy (Liang et al., 2020).
name No pseudo labeling: Epoch 5 Pseudo labeling: Epoch 5
corruption No adaptation TENT+ HLR SLR TENT+ HLR SLR
Gauss 2.44 33.97±0.17 41.37±0.09 41.52±0.08 34.08±0.11 34.88±0.35 35.58±0.06
Shot 2.99 37.95±0.10 44.04±0.09 42.90±0.08 36.74±0.26 37.61±0.49 37.98±0.19
Impulse 1.96 36.93±0.09 43.68±0.06 44.07±0.06 36.69±0.04 37.24±0.22 37.77±0.05
Defocus 17.92 32.69±0.25 41.74±0.12 41.69±0.07 33.99±0.28 34.76±0.11 35.11±0.10
Glass 9.82 33.36±0.13 41.09±0.17 40.78±0.08 34.06±0.12 34.51±0.30 34.81±0.27
Motion 14.78 51.42±0.07 54.26±0.02 54.76±0.04 50.91±0.09 48.96±0.39 49.46±0.20
Zoom 22.50 54.33±0.06 56.43±0.07 56.59±0.05 54.10±0.10 52.49±0.02 52.50±0.23
Snow 16.89 54.55±0.07 57.03±0.12 57.35±0.03 54.06±0.08 52.49±0.19 52.95±0.07
Frost 23.31 45.80±0.27 50.81±0.08 51.01±0.02 44.44±0.07 45.47±0.26 46.06±0.20
Fog 24.43 62.09±0.05 63.05±0.04 63.53±0.08 61.91±0.08 59.66±0.14 59.98±0.12
Bright 58.93 69.03±0.06 68.29±0.09 68.72±0.10 68.98±0.02 65.59±0.06 66.00±0.03
Contrast 5.43 24.08±1.36 50.98±2.54 50.65±0.55 29.37±0.95 44.58±0.38 45.64±0.47
Elastic 16.95 60.36±0.02 61.15±0.04 61.49±0.07 60.23±0.05 57.48±0.14 57.87±0.04
Pixel 20.61 63.10±0.08 63.08±0.06 63.46±0.08 62.98±0.04 59.72±0.02 60.05±0.14
JPEG 31.65 57.21±0.02 58.13±0.09 58.32±0.05 57.09±0.04 54.72±0.09 54.88±0.07
average results from three different random seeds and also shows that SLR outperforms TENT+ on this dataset.
Domain adaptation from SVHN to MNIST / MNIST-M / USPS: ResNet26 is trained on SVHN dataset for 50 epochs with batch size 128, SGD optimizer with momentum 0.9 and initial learning rate 0.01, which drops to 0.001 and 0.0001 at 25th and 40th epoch respectively. ResNet26 obtains 96.49% test accuracy on SVHN. Domain adaptation of SVHN trained ResNet26 to MNIST/MNIST-M/USPS
Table A7: Test-time adaptation of modified ResNet50 (following (Liang et al., 2020)) on ImageNet-C at highest severity level 5 with pseudo labeling strategy at epoch 1 and epoch 5.
name Pseudo labeling: Epoch 1 Pseudo labeling: Epoch 5
corruption No adaptation TENT+ HLR SLR TENT+ HLR SLR
Gauss 2.95 31.03±0.18 34.65±0.28 37.21±0.23 35.26±0.16 35.93±0.23 37.61±0.30 Shot 3.65 33.55±0.07 38.09±0.30 40.30±0.09 37.39±0.05 38.95±0.16 40.42±0.06 Impulse 2.54 32.70±0.07 36.95±0.05 39.73±0.07 38.16±0.08 38.13±0.04 40.12±0.11 Defocus 19.36 31.66±0.15 35.08±0.05 37.18±0.15 35.95±0.17 36.72±0.13 37.96±0.25
Glass 9.72 31.06±0.06 35.46±0.12 37.62±0.10 35.98±0.04 36.84±0.11 37.90±0.02 Motion 15.66 46.96±0.12 49.95±0.12 51.87±0.14 52.24±0.02 51.90±0.12 52.76±0.09 Zoom 22.20 52.45±0.02 54.15±0.22 54.84±0.18 54.80±0.07 54.84±0.09 54.95±0.14 Snow 17.56 51.79±0.05 53.98±0.06 55.44±0.04 55.15±0.02 55.27±0.20 55.75±0.02 Frost 24.11 45.59±0.06 47.87±0.03 48.96±0.11 48.10±0.20 48.52±0.11 49.13±0.20 Fog 25.59 60.33±0.03 61.55±0.10 62.21±0.16 62.39±0.03 62.38±0.12 62.38±0.11 Bright 58.30 68.84±0.04 68.44±0.04 68.60±0.10 69.13±0.04 68.50±0.02 68.47±0.09 Contrast 6.49 42.34±0.19 47.98±0.13 50.32±0.28 42.11±0.15 49.22±0.42 50.80±0.19 Elastic 17.72 58.47±0.02 59.70±0.06 60.30±0.09 60.40±0.04 60.27±0.22 60.45±0.21 Pixel 21.29 61.39±0.06 62.10±0.07 62.71±0.10 63.04±0.02 62.71±0.07 62.81±0.07 JPEG 32.13 55.22±0.03 56.49±0.07 57.04±0.07 57.21±0.06 57.25±0.07 57.37±0.05
Table A8: Test-time adaptation of modified ResNet50 (following (Liang et al., 2020)) on ImageNet-C at highest severity level 5 with and without pseudo labeling strategy.
name No Pseudo labeling: Epoch 5 Pseudo labeling: Epoch 5
corruption No adaptation TENT+ HLR SLR TENT+ HLR SLR
Gauss 2.95 34.96±0.08 38.58±0.12 39.72±0.13 35.26±0.16 35.93±0.23 37.61±0.30 Shot 3.65 37.22±0.17 41.59±0.09 42.45±0.05 37.39±0.05 38.95±0.16 40.42±0.06 Impulse 2.54 37.82±0.04 40.88±0.07 42.39±0.03 38.16±0.08 38.13±0.04 40.12±0.11 Defocus 19.36 34.46±0.12 39.22±0.15 39.78±0.09 35.95±0.17 36.72±0.13 37.96±0.25
Glass 9.72 35.12±0.05 38.83±0.13 39.37±0.07 35.98±0.04 36.84±0.11 37.90±0.02 Motion 15.66 51.91±0.09 53.23±0.05 54.00 52.24±0.02 51.90±0.12 52.76±0.09 Zoom 22.20 54.57±0.05 55.76±0.04 55.79±0.02 54.80±0.07 54.84±0.09 54.95±0.14 Snow 17.56 55.02±0.05 56.35±0.12 56.80±0.04 55.15±0.02 55.27±0.20 55.75±0.02 Frost 24.11 48.18±0.09 49.86±0.22 50.43±0.08 48.10±0.20 48.52±0.11 49.13±0.20 Fog 25.59 62.24±0.04 62.90±0.06 63.29±0.06 62.39±0.03 62 | 1. What is the focus and contribution of the paper regarding test-time adaptation?
2. What are the strengths of the proposed approach, particularly in its novel extensions?
3. What are the weaknesses of the paper, especially regarding the intersection with prior methods and marginal improvements?
4. Do you have any concerns or suggestions regarding the input transformation model, non-saturating losses, and diversity regularizer?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
Test-time adaptation by entropy minimization can help models adapt to dataset shifts like corruptions without altering training. This work extends TENT, an entropy minimization method, by proposing alternative non-saturating losses, adding a diversity regularizer, and adapting the input data along with the model parameters. The input is adapted by applying a convolutional image transformation model between the input and the classification model. These extensions do not need more optimization iterations or supervision than the baselines: the method adapts online and efficiently without auxiliary supervision. Experiments on the corruption benchmark ImageNet-C and the newer benchmark ImageNet-R report reduced generalization error. The improvements are there but marginal, and they are consistent across multiple baseline architectures (ResNet, DenseNet, MobileNet, etc.). However, the clean accuracy is reduced, so the proposed method does not strictly dominate prior work.
Review
Strengths
The joint adaptation of input transformations and model parameters is novel for test-time adaptation. Neither TTT (Sun et al. 2020), test-time normalization (Scheider et al. 2020), nor TENT (Wang et al. 2021) adapt the input. The related work section covers prior projects that learn to transform the input during training rather than testing.
The proposed techniques—the alternative losses, the input transformation model, and diversity regularizer—improve accuracy jointly and separately. For instance, the input transformation model helps the proposed approach as well as TENT, and the diversity regularizer likewise fixes cases where TENT can fail. While these are not wholly new, it is still informative to double-check their effectiveness, ablate their combination, and show small but consistent improvement across multiple architectures and datasets (ImageNet-C, ImageNet-R, and the digit datasets MNIST/MNIST-M/USPS).
The proposed extensions are still online and efficient, so this method seemingly could be deployed as easily as TENT. The only counter to this is that small accuracy drops are reported on unshifted data, where these drops are not seen (TENT, BN) or smaller (TTT) for other test-time methods.
The results for test-time adaptation by optimization (TENT, HLR/SLR) on ImageNet-R are empirically new. Only test-time normalization (Schneider et al. 2020) had reported test-time results on this data.
Design choices are justified with experimental results (main tables), visualization (Figure 1 for losses and gradients), and toy experiments (Appendix A.1).
Weaknesses
The novelty of the proposed extensions is diminished by intersection with prior methods. The losses for HLR and SLR are from Yao et al. 2020, although this work is the first to bring them to fully test-time adaptation, and that is worthwhile. (For comparison, note that TENT argued for entropy which is obviously well-established as a loss). The diversity regularizer is a core part of SHOT (Liang et al. 2020), although this work calculates it differently with a moving average. The input transformation model is the most new, as input adaptation of this kind has not been done during testing, but there are close connections to training like ANT (Rusak et al.) and CyCADA (Tzeng et al.).
The proposed losses HLR and SLR further restrict the choice of parameters to adapt, as these losses would otherwise cause the predicted logits to grow without bound. Deriving an alternative loss without this restriction would be better to simplify the application of the method.
The amount of improvement on shifted data is marginal, at 2-3 points absolute in many cases, while there is harm on the unshifted/standard/clean data. This argues against the motivation for non-saturating losses. This is admitted on pg. 9 under "Clean Images" but is not remedied. A new loss or combination of losses that improves shifted accuracy without hurting clean accuracy would be more significant.
Input transformation (Sec. 3.1) is not closely studied or ablated. For instance, what if only the channel-wise input changes are included without the network? Do shallower networks do better or worse? Does input transformation help on all corruptions, or can it hurt?
For Rebuttal
Please provide a control experiment for the choice of optimization. How do TENT and TENT+ fare when optimized with Adam, the same solver as HLR/SLR? For TENT/TENT+, does raising the learning rate for SGD or Adam improve results by counteracting the saturation of the entropy loss? The need for non-saturating losses is a key claim so this is worth double-checking.
Please provide the results for TENT/TENT+ without freezing the weights of the deeper layers. The need for this freezing is a limitation of HLR and SLR, not TENT, so it is worth knowing if these additional parameters help the baselines.
Please comment on the learned transformation models, and in particular the learned transformation weight tau. How much do the scale gamma and shift beta help on their own? As a more novel part of this work, the input transformation module deserves more analysis.
Please explain the data subset experiments in more detail (Figure 3). Why does TENT fail as the split fraction reaches 1.0? In the TENT paper, there are generalization results with adaptation on target train and evaluation on target test, and the method still helps in that case. What is the justification for these use cases? Would it not be better to always adapt on all the data that is encountered?
Miscellaneous Feedback
[clarity] please summarize results with the mean where appropriate, for instance by including the mean over corruption types in Table 1
[clarity] consider including qualitative results of the learned image transformations, for instance in the appendix, to show the types and degrees of transformation.
[text] in the related work, change "domain adaptation train" to "domain adaptation methods train"
[text] in the related work, change "such setting refrain the cost" to "such settings spare the cost"
[text] in Sec. 3.2.1, change "One option are" to "One option is" |
ICLR | Title
Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach
Abstract
This work develops a novel framework for communication-efficient distributed learning where the models to be learnt are overparameterized. We focus on a class of kernel learning problems (which includes the popular neural tangent kernel (NTK) learning as a special case) and propose a novel multi-agent kernel approximation technique that allows the agents to distributedly estimate the full kernel function, and subsequently perform distributed learning, without directly exchanging any local data or parameters. The proposed framework is a significant departure from the classical consensus-based approaches, because the agents do not exchange problem parameters, and consensus is not required. We analyze the optimization and the generalization performance of the proposed framework for the ℓ2 loss. We show that with M agents and N total samples, when certain generalized inner-product (GIP) kernels (resp. the random features (RF) kernel) are used, each agent needs to communicate O(N²/M) bits (resp. O(N√N/M) real values) to achieve minimax optimal generalization performance. Further, we show that the proposed algorithms can significantly reduce the communication complexity compared with state-of-the-art algorithms, for distributedly training models to fit UCI benchmarking datasets. Moreover, each agent needs to share about 200N/M bits to closely match the performance of the centralized algorithms, and these numbers are independent of parameter and feature dimension.
N/A
1 INTRODUCTION
Recently, decentralized optimization has become a mainstay of the optimization research. In decentralized optimization, multiple local agents hold small to moderately sized private datasets, and collaborate by iteratively solving their local problems while sharing some information with other agents. Most of the existing decentralized learning algorithms are deeply rooted in classical consensus-based approaches (Tsitsiklis, 1984), where the agents repetitively share the local parameters with each other to reach an optimal consensual solution. However, the recent trend of using learning models in the overparameterized regime with very high-dimensional parameters (He et al., 2016; Vaswani et al., 2017; Fedus et al., 2021) poses a significant challenge to such parameter sharing approaches, mainly because sharing model parameters iteratively becomes excessively expensive as the parameter dimension grows. If the size of local data is much smaller than that of the parameters, perhaps a more efficient way is to directly share the local data. However, this approach raises privacy concerns, and it is rarely used in practice. Therefore, a fundamental question of decentralized learning in the overparameterized regime is:
(Q) For overparameterized learning problems, how to design decentralized algorithms that achieve the best optimization/generalization performance by exchanging minimum amount of information?
We partially answer (Q) in the context of distributed kernel learning (Vert et al., 2004). We depart from the popular consensus-based algorithms and propose an optimization framework that does not require the local agents to share model parameters or raw data. We focus on kernel learning because: (i) kernel methods provide an elegant way to model non-linear learning problems with complex data
dependencies as simple linear problems (Vert et al., 2004; Hofmann et al., 2008), and (ii) kernel-based methods can be used to capture the behavior of a fully-trained deep network with large width (Jacot et al., 2018; Arora et al., 2019; 2020).
Distributed implementation of kernel learning problems is challenging. Current state-of-the-art algorithms for kernel learning either rely on sharing raw data among agents and/or impose restrictions on the number of agents (Zhang et al., 2015; Lin et al., 2017; Koppel et al., 2018; Lin et al., 2020; Hu et al., 2020; Pradhan et al., 2021; Predd et al., 2006). Some recent approaches rely on specific random feature (RF) kernels to alleviate some of the above problems. These algorithms reformulate the (approximate) problem in the parameter domain and solve it by iteratively sharing the (potentially high-dimensional) parameters (Bouboulis et al., 2017; Richards et al., 2020; Xu et al., 2020; Liu et al., 2021). These algorithms suffer from excessive communication overhead, especially in the overparameterized regime where the number of parameters is larger than the data size N. For example, implementing the neural tangent kernel (NTK) with the RF kernel requires at least O(N^c), c ≥ 2, random features (parameter dimension) when the ReLU activation is used (Arora et al., 2019; Han et al., 2021).¹ For such problems, in this work, we propose a novel algorithmic framework for decentralized kernel learning. Below, we list the major contributions of our work.
[GIP Kernel for Distributed Approximation] We define a new class of kernels suitable for distributed implementation, Generalized inner-product (GIP) kernel, that is fully characterized by the angle between a pair of feature vectors and their respective norms. Many kernels of practical importance including the NTK can be represented as GIP kernel. Further, we propose a multi-agent kernel approximation method for estimating the GIP and the popular RF kernels at individual agents.
[One-shot and Iterative Scheme] Based on the proposed kernel approximation, we develop two optimization algorithms, where the first one only needs one-shot information exchange, but requires sharing data labels among the agents; the second one needs iterative information exchange, but does not need to share the data labels. A key feature of these algorithms is that neither the raw data features nor the (high-dimensional) parameters are exchanged among agents.
[Performance of the Approximation Framework] We analyze the optimization and the generalization performance of the proposed approximation algorithms for the ℓ2 loss. We show that the GIP kernel requires communicating O(N²/M) bits and the RF kernel requires communicating O(N√N/M) real values per agent to achieve minimax optimal generalization performance. Importantly, the required communication is independent of the function class and the optimization algorithm. We validate the performance of our approximation algorithms on UCI benchmarking datasets.
In Table 1, we compare the communication requirements of the proposed approach to popular distributed kernel learning algorithms. Specifically, DKRR-CM (Lin et al., 2020) relies on sharing data and is therefore not preferred in practical settings. For the RF kernel, the proposed algorithm outperforms other algorithms in both the non-overparameterized and the overparameterized regimes when T > N/M. In the overparameterized regime, the GIP kernel is more communication efficient compared to other algorithms. Finally, note that since our analysis is developed using the multi-agent kernel approximation, it does not impose any upper bound on the number of agents in the network.
¹ To achieve approximation error ε = O(1/√N).
Notations: We use R, R^d, and R^{n×m} to denote the sets of real numbers, the d-dimensional Euclidean space, and real matrices of size n×m, respectively. We use N to denote the set of natural numbers. N(0, Σ) is the multivariate normal distribution with zero mean and covariance Σ. The uniform distribution with support [a, b] is denoted by U[a, b]. ⟨a, b⟩ (resp. ⟨a, b⟩_H) denotes the inner product in Euclidean space (resp. in the Hilbert space H). The inner product defines the usual norms in the corresponding spaces. The norm ‖A‖ of a matrix A denotes the operator norm induced by the ℓ2 vector norm. We denote by [a]_i or [a]^(i) the ith element of a vector a. [A·a]_j^(i) denotes the (i·j)th element of the vector A·a. Moreover, A^(:,i) is the ith column of A and [A]_{mk} is the element corresponding to the mth row and kth column. The notation m ∈ [M] denotes m ∈ {1, . . . , M}. Finally, 1[E] is the indicator function of event E.
2 PROBLEM STATEMENT
Given a probability distribution π(x, y) over X × R, we want to minimize the population loss

L(f) = E_{x,y∼π(x,y)}[ℓ(f(x), y)],   (1)

where x ∈ X ⊂ R^d and y ∈ R denote the features and the labels, respectively. Here, f : X → R is an estimate of the true label y. We consider a distributed system of M agents, with each agent m ∈ [M] having access to a locally available independently and identically distributed (i.i.d.) dataset N_m = {x_m^(i), y_m^(i)}_{i=1}^n with² (x_m^(i), y_m^(i)) ∼ π(x, y). The total number of samples is N = nM. The goal of kernel learning with a kernel function k(·, ·) : X × X → R is to find a function f ∈ H (where H is the reproducing kernel Hilbert space (RKHS) associated with k (Vert et al., 2004)) that minimizes (1). We aim to solve the following (decentralized) empirical risk minimization problem

min_{f∈H} { R̂(f) = L̂(f) + (λ/2)‖f‖²_H = (1/M) Σ_{m=1}^M L̂_m(f) + (λ/2)‖f‖²_H },   (2)

where λ > 0 is the regularization parameter and L̂_m(f) = (1/n) Σ_{i∈N_m} ℓ(f(x_m^(i)), y_m^(i)) is the local loss at each m ∈ [M]. Problem (2) can be reformulated using the Representer theorem (Schölkopf et al., 2002) with L̂_m(α) = (1/n) Σ_{i∈N_m} ℓ([Kα]_m^(i), y_m^(i)), ∀m ∈ [M], as

min_{α∈R^N} { R̂(α) = L̂(α) + (λ/2)‖α‖²_K = (1/M) Σ_{m=1}^M L̂_m(α) + (λ/2)‖α‖²_K },   (3)

where K ∈ R^{N×N} is the kernel matrix with elements k(x_m^(i), x_m̄^(j)), ∀m, m̄ ∈ [M], ∀i ∈ N_m and ∀j ∈ N_m̄. The supervised (centralized) learning problem (3) is a classical problem in statistical learning (Caponnetto & De Vito, 2007) and has been popularized recently due to connections with overparameterized neural network training (Jacot et al., 2018; Arora et al., 2019). An alternate way to solve problem (2) (and (3)) is by parameterizing f in (2) by θ ∈ R^D as f_D(x; θ) = ⟨θ, Φ_D(x)⟩, where Φ_D : X → R^D is a finite-dimensional feature map. Here, Φ_D(·) is designed to approximate k(·, ·) with k_D(x, x′) = ⟨Φ_D(x), Φ_D(x′)⟩ (Rahimi & Recht, 2008). Using this approximation, problem (2) (and (3)) can be written in the parameter domain with L̂_{m,D}(θ) = (1/n) Σ_{i∈N_m} ℓ(⟨θ, Φ_D(x_m^(i))⟩, y_m^(i)), ∀m ∈ [M], as

min_{θ∈R^D} { R̂_D(θ) = L̂_D(θ) + (λ/2)‖θ‖² = (1/M) Σ_{m=1}^M L̂_{m,D}(θ) + (λ/2)‖θ‖² }.   (4)

Note that (4) is a D-dimensional problem, whereas (3) is an N-dimensional problem. Since (4) is in the standard finite-sum form, it can be solved using the standard parameter-sharing decentralized optimization algorithms (e.g., DGD (Richards et al., 2020) or ADMM (Xu et al., 2020)), which share D-dimensional vectors iteratively. However, when (4) is overparameterized with very large D (e.g., D = O(N^c) with c ≥ 2 for the NTK), such parameter-sharing approaches are no longer feasible because of the increased communication complexity. An intuitive solution to avoid sharing these high-dimensional parameters is to directly solve (3). However, it is by no means clear if and how one can efficiently solve (3) in a decentralized manner. The key challenge is that, unlike the conventional decentralized learning problems, here each loss term ℓ([Kα]_m^(i), y_m^(i)) is not separable over the agents. Instead, each agent m's local problem depends on k(x_m^(i), x_m̄^(j)) with m ≠ m̄. Importantly, without directly transmitting the data itself (as has been done in Predd et al. (2006); Koppel et al. (2018); Lin et al. (2020)), it is not clear how one can obtain the required (m·i)th element of Kα. Therefore, to develop algorithms that avoid sharing high-dimensional parameters by directly (approximately) solving (3), it is important to identify kernels that are suitable for decentralized implementation and propose efficient algorithms for learning with such kernels.

² The techniques presented in this work can be easily extended to unbalanced datasets, i.e., when each agent has a dataset of a different size.
3 THE PROPOSED ALGORITHMS
In this section, we define a general class of kernels, referred to as the generalized inner-product (GIP) kernels, that are suitable for decentralized overparameterized learning. By focusing on GIP kernels, we aim to understand the best possible decentralized optimization/generalization performance that can be achieved for solving (3). Surprisingly, one of our proposed algorithms only shares O(nN) = O(N²/M) bits of information per node, while achieving the minimax optimal generalization performance. Such an algorithm only requires one round of communication, where the messages transmitted are independent of the actual parameter dimension (i.e., D in problem (4)); further, there is no requirement for achieving consensus among the agents. The proposed algorithm represents a significant departure from the classical consensus-based decentralized learning algorithms. We first define the class of kernels that we will focus on in this work.
Definition 3.1. [Generalized inner-product (GIP) kernel] We define a GIP kernel as:

k(x, x′) = g(φ(x, x′), ‖x‖, ‖x′‖),   (5)

where φ(x, x′) = arccos(xᵀx′/(‖x‖‖x′‖)) ∈ [0, π] denotes the angle between the feature vectors x and x′, and g(·, ‖x‖, ‖x′‖) is assumed to be Lipschitz continuous (cf. Assumption 2).
Remark 1. Note that the GIP kernel is a generalization of the inner-product kernels (Schölkopf et al., 2002), i.e., kernels of the form k(x, x′) = k(⟨x, x′⟩). Clearly, k(⟨x, x′⟩) can be represented as k(⟨x, x′⟩) = g(φ(x, x′), ‖x‖, ‖x′‖) for some function g(·). Moreover, many kernels of practical interest can be represented as GIP kernels; some examples include the NTK (Jacot et al., 2018; Chizat et al., 2019; Arora et al., 2019), arccosine (Cho & Saul, 2009), polynomial, Gaussian, Laplacian, sigmoid, and inner-product kernels (Schölkopf et al., 2002).
The main reason we focus on the GIP kernels for decentralized implementation is that this class of kernels can be fully specified at each agent if the norms of all the feature vectors and the pairwise angles between them are known at each agent. For example, consider the NTK of a single hidden-layer ReLU neural network: k(x, x′) = xᵀx′(π − φ(x, x′))/(2π) (Chizat et al., 2019). This kernel can be fully learned with just the knowledge of the norms and the pairwise angles of the feature vectors. For many applications of interest (Bietti & Mairal, 2019; Geifman et al., 2020; Pedregosa et al., 2011), normalized feature vectors are used, and for such problems the GIP kernel at each agent can be computed only by using the knowledge of the pairwise angles between the feature vectors. We show in Sec. 3.1 that such kernels can be efficiently estimated by each agent while sharing only a few bits of information. Importantly, the communication requirement for such a kernel estimation procedure is independent of the problem's parameter dimension (i.e., D in (4)), making them suitable for decentralized learning in the overparameterized regime. Next, we define the RF kernel.
Definition 3.2. [Random features (RF) kernel] The RF kernel is defined as (Rahimi & Recht, 2008; Rudi & Rosasco, 2017; Li et al., 2019):

k(x, x′) = ∫_{ω∈Ω} ζ̄(x, ω) · ζ̄(x′, ω) dq(ω),   (6)

with (Ω, q) being the probability space and ζ̄ : X × Ω → R.
Remark 2. The RF kernel can be approximated as k(·, ·) ≈ k_P(x, x′) = ⟨Φ_P(x), Φ_P(x′)⟩, with Φ_P(x) = (1/√P)[ζ̄(x, ω₁), . . . , ζ̄(x, ω_P)]ᵀ ∈ R^P and {ω_i}_{i=1}^P drawn i.i.d. from the distribution q(ω). A popular example of the RF kernels is the shift-invariant kernels, i.e., kernels of the form k(x, x′) = k(x − x′) (Rahimi & Recht, 2008). The RF kernels generalize the random Fourier features construction (Rudin, 2017) for shift-invariant kernels to general kernels. Besides the shift-invariant kernels, important examples of the RF kernels include the inner-product (Kar & Karnick, 2012) and the homogeneous additive kernels (Vedaldi & Zisserman, 2012).
Algorithm 1 Approximation: Local Kernel Estimation
1: Initialize: Distribution p(ω) over the space (Ω, p) and mapping ζ : X × Ω → R (see Section 3.1)
2: for m ∈ [M] do
3: Draw P i.i.d. random variables ω_i ∈ R^d with ω_i ∼ p(ω) for i = 1, . . . , P
4: Compute ζ(x_m^(i), ω_j) ∀i ∈ N_m and j ∈ [P]
5: Construct the matrix A_m ∈ R^{P×n} with the (i, j)th element as ζ(x_m^(i), ω_j)
6: Communicate A_m to every other agent and receive A_m̄ with m̄ ≠ m from other agents
7: If GIP is used, and the data is not normalized, then communicate ‖x_m^(i)‖, ∀i ∈ N_m
8: Estimate the kernel matrix K_P locally using (7) for the GIP kernel and (9) for the RF kernel
9: end for
Next, we propose a multi-agent approximation algorithm to effectively learn the GIP and the RF kernels at each agent, as well as the optimization algorithms to efficiently solve the learning problem. Our proposed algorithms will follow an approximation – optimization strategy, where the agents first exchange some information so that they can locally approximate the full kernel matrix K; then they can independently optimize the resulting approximated local problems. Below we list a number of key design issues arising from implementing such an approximation – optimization strategy:
[Kernel approximation] How to accurately approximate the kernel K locally at each agent? For example, for the GIP kernels, how to accurately estimate the angles φ(x_m^(i), x_m̄^(j)) at a given agent m, where j ∈ N_m̄ and m̄ ≠ m? This is challenging, especially when raw data sharing is not allowed.
[Effective exchange of local information] How shall we design appropriate messages to be exchanged among the agents? The type of messages that get exchanged will depend on the underlying kernel approximation scheme. Therefore, it is critical that the proposed approximation methods are able to utilize as little information from other agents as possible.
Next, we will formally introduce the proposed algorithms. Our presentation follows the approximation – optimization strategy outlined above. We first discuss the proposed decentralized kernel approximation algorithm, followed by two different ways of performing decentralized optimization.
3.1 MULTI-AGENT KERNEL APPROXIMATION
The kernel K is approximated locally at each agent using Algorithm 1. Note that in Step 3, each agent randomly samples {ω_i}_{i=1}^P from the distribution p(ω). This can be easily established via random seed sharing as in Xu et al. (2020); Richards et al. (2020). In Step 6, each agent shares a locally constructed matrix A_m of size P × n, whose elements ζ(x_m^(i), ω_i) will be defined shortly. The choices of p(ω) and ζ(·, ·) in Step 1 depend on the choice of kernel. Specifically, we have:
[Approximation for GIP kernel] For the GIP kernel, we first assume that the feature vectors are normalized (Pedregosa et al., 2011). We then choose p(ω) to be any circularly symmetric distribution; for simplicity we choose p(ω) as N(0, I_d). Moreover, we use ζ(x, ω) = 1[ωᵀx ≥ 0] such that A_m is a binary matrix with entries {0, 1}. Note that such matrices are easy to communicate. Next, we approximate the kernel K with K_P as

k(x_m^(i), x_m̄^(j)) ≈ k_P(x_m^(i), x_m̄^(j)) = g(φ_P(x_m^(i), x_m̄^(j)), ‖x_m^(i)‖, ‖x_m̄^(j)‖),   (7)

where k(x_m^(i), x_m̄^(j)) and k_P(x_m^(i), x_m̄^(j)), ∀i ∈ N_m, ∀m ∈ [M] and ∀j ∈ N_m̄, ∀m̄ ∈ [M], are the individual elements of K and K_P, respectively, and φ_P(x_m^(i), x_m̄^(j)) is an approximation of the angle φ(x_m^(i), x_m̄^(j)) evaluated using A_m, A_m̄ as

φ(x_m^(i), x_m̄^(j)) ≈ φ_P(x_m^(i), x_m̄^(j)) = π − 2π[A_m^(:,i)]ᵀ[A_m̄^(:,j)]/P.   (8)
Algorithm 2 Optimization: One-Shot Communication for Kernel Learning
1: Initialize: α_m^1 ∈ R^N, step-sizes {η_m^t}_{t=1}^{T_m} at each agent m ∈ [M]
2: for m ∈ [M] do
3: Using Algorithm 1, construct K_P
4: Communicate ȳ_m = [y_m^(1), . . . , y_m^(n)]ᵀ ∈ R^n
5: Using K_P and ȳ_m, construct L̂_P(α) (cf. (10)) locally using L̂_{m,P}(α)
6: Option I: Solve (10) exactly at each agent
7: Option II: Solve (10) inexactly using GD at each agent
8: for t = 1 to T_m
9: GD update: α_m^{t+1} = α_m^t − η_m^t ∇R̂_P(α_m^t)
10: end for
11: end for
12: Return: α_m^{T+1} for all m ∈ [M]
This implies that K can be approximated for the GIP kernel by communicating only nP bits of information per agent. Note that in the general case if the feature vectors are not normalized, then (7) can be evaluated by communicating additional n real values of the norms of the feature vectors by each agent; see Step 7 in Algorithm 1.
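As an illustration, the sketch below mimics Steps 3–8 of Algorithm 1 for the GIP case with normalized features, using the single-hidden-layer ReLU NTK k(x, x′) = xᵀx′(π − φ)/(2π) mentioned above as an example choice of g. Variable names, sizes, and the specific g are illustrative; only the binary matrices A_m are exchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_bits(X_m, Omega):
    # A_m in {0,1}^{P x n}: entry (j, i) = 1[omega_j^T x_m^(i) >= 0].
    return (Omega @ X_m.T >= 0).astype(np.uint8)

def gip_kernel_block(A_m, A_mbar, P):
    # Angle estimate (8): phi_P = pi - 2*pi*(A_m^T A_mbar)/P.
    phi = np.pi - 2.0 * np.pi * (A_m.T.astype(float) @ A_mbar.astype(float)) / P
    phi = np.clip(phi, 0.0, np.pi)
    # Example g: ReLU NTK with normalized features, k = cos(phi)*(pi - phi)/(2*pi).
    return np.cos(phi) * (np.pi - phi) / (2.0 * np.pi)

d, n, P = 5, 20, 2000
Omega = rng.standard_normal((P, d))   # same random draws at all agents via a shared seed
X1 = rng.standard_normal((n, d)); X1 /= np.linalg.norm(X1, axis=1, keepdims=True)
X2 = rng.standard_normal((n, d)); X2 /= np.linalg.norm(X2, axis=1, keepdims=True)
A1, A2 = local_bits(X1, Omega), local_bits(X2, Omega)   # only these bits are shared
K12_hat = gip_kernel_block(A1, A2, P)

# Sanity check against the exact kernel (only possible with both raw datasets).
cos_true = np.clip(X1 @ X2.T, -1.0, 1.0)
K12_exact = cos_true * (np.pi - np.arccos(cos_true)) / (2.0 * np.pi)
print("max abs error:", np.abs(K12_hat - K12_exact).max())
```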
[Approximation for RF kernel] For the RF kernel, we choose ζ(·, ·) = ζ̄(·, ·) and p(ω) = q(ω) as defined in (6) and approximate K with K_P as

k(x_m^(i), x_m̄^(j)) ≈ k_P(x_m^(i), x_m̄^(j)) = ⟨Φ_P(x_m^(i)), Φ_P(x_m̄^(j))⟩,   (9)

where k(x_m^(i), x_m̄^(j)) and k_P(x_m^(i), x_m̄^(j)) are elements of K and K_P, respectively, Φ_P(x_m^(i)) = (1/√P)[A_m^(:,i)] and Φ_P(x_m̄^(j)) = (1/√P)[A_m̄^(:,j)]. Note that K can be approximated for the RF kernel by sharing only nP real values per agent. Further, the distribution q(ω) and the mapping ζ̄(·, ·) depend on the type of RF kernel used. For example, for shift-invariant kernels with random Fourier features, we can choose ζ̄(x, ω) = √2 cos(ωᵀx + b) with ω ∼ q(ω) and b ∼ U[0, 2π] (Rahimi & Recht, 2008).
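For instance, a minimal random-Fourier-feature sketch for the Gaussian kernel (bandwidth and sizes are illustrative assumptions) is:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, P, sigma = 5, 20, 500, 1.0

X = rng.standard_normal((n, d))
Omega = rng.standard_normal((P, d)) / sigma        # q(omega) for the Gaussian kernel
b = rng.uniform(0.0, 2.0 * np.pi, size=P)

Phi = np.sqrt(2.0 / P) * np.cos(X @ Omega.T + b)   # rows are Phi_P(x)^T
K_hat = Phi @ Phi.T                                # approximate kernel, cf. (9)
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / sigma**2)
print("max abs error:", np.abs(K_hat - K_exact).max())
```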
Now that using Algorithm 1 we have approximated the kernel matrix at all the agents, we are ready to solve (3) approximately.
3.2 THE DECENTRALIZED OPTIMIZATION STEP
The approximated kernel regression problem (3), with K_P obtained using Algorithm 1 and local loss L̂_{m,P}(α) := (1/n) Σ_{i∈N_m} ℓ([K_P α]_m^(i), y_m^(i)), is

min_{α∈R^N} { R̂_P(α) = L̂_P(α) + (λ/2)‖α‖²_{K_P} = (1/M) Σ_{m=1}^M L̂_{m,P}(α) + (λ/2)‖α‖²_{K_P} }.   (10)

Remark 3. For the approximate problem (10), we would want K_P constructed using the multi-agent kernel approximation approach to be positive semi-definite (PSD), i.e., the kernel function k_P(·, ·) to be a positive definite (PD) kernel. From the definition of the approximate RF kernel (9), it is easy to verify that it is PD. However, it is not clear if the approximated GIP kernel is PD. Certainly, for the GIP kernel we expect that as P → ∞ we have K_P → K, i.e., asymptotically K_P is PSD, since K is PSD. In the Appendix, we introduce a sufficient condition (Assumption 6) that ensures K_P to be PSD for the GIP kernel. In the following, for simplicity, we assume K_P is PSD.
Decentralized optimization based on one-shot communication: In this setting, we share the information among all the agents in one-shot, then each agent learns its corresponding minimizer using the gathered information. We assume that each agent can communicate with every other agent either in a decentralized manner (or via a central server) before initialization. This is a common assumption in distributed learning with RF kernels where the agents need to share random seeds before initialization to determine the approximate feature mapping (Richards et al., 2020; Xu et al., 2020). Here, consensus is not enforced as each agent can learn a local minimizer which has a good global property. The label information is also exchanged among all the agents. In Algorithm 2, we list the steps of the algorithm. In Step 3, the agents learn KP (the local estimate of the kernel matrix)
using Algorithm 1. In Step 4, the agents share the labels ȳ_m so that each agent can (approximately) reconstruct the loss L̂(α) (cf. (10)) locally. Then each agent can choose either Option I or Option II to solve (10). A few important properties of Algorithm 2 are:
[Communication] Each agent communicates a total of O(nP) = O(NP/M) bits (if the norms also need to be transmitted, then with an additional N/M real values) for the GIP kernel, and O(NP/M) real values for the RF kernels. Importantly, for the GIP kernel the communication is independent of the parameter dimension, making it suitable for decentralized overparameterized learning problems; see Table 1 for a comparison with other approaches.
[No consensus needed] Each agent executes Algorithm 2 independently to learn α_m, without needing to reach any kind of consensus. The agents are free to choose different initializations, step-sizes, and even regularizers (i.e., λ in (10)). In contrast to classical learning, where algorithms are designed to guarantee consensus (Koppel et al., 2018; Richards et al., 2020; Xu et al., 2020), our algorithms allow each agent to learn a different function.
The proposed framework relies on sharing matrices Am’s that are random functions of the local features. Note that problem (10) can also be solved by using an iterative distributed gradient tracking algorithm (Qu & Li, 2018), with the benefit that no label sharing is needed; see Appendix D. Remark 4 (Optimization performance). Note that using Algorithm 2, we can solve the approximate problem (10) to arbitrary accuracy using either Option I or Option II. However, it is by no means clear if the solution obtained by Algorithm 2 will be close to the solution of (3). Therefore, after problem (10) is solved, it is important to understand how close the solutions returned by Algorithm 2 are to the original kernel regression problem (3).
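For the ℓ2 loss analyzed in the next section, Option I of Algorithm 2 reduces to a standard kernel ridge regression solve on the locally estimated kernel. A minimal sketch (variable names are illustrative) is:

```python
import numpy as np

def one_shot_solve(K_P_hat, y_bar, lam):
    """alpha_hat_P = [K_P + N*lam*I]^{-1} y_bar, computed locally by each agent."""
    N = K_P_hat.shape[0]
    return np.linalg.solve(K_P_hat + N * lam * np.eye(N), y_bar)

def predict(alpha, k_test):
    # k_test: (approximate) kernel evaluations between a test point and the N training points.
    return k_test @ alpha
```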
4 MAIN RESULTS
In this section, we analyze the performance of Algorithm 2. Specifically, we are interested in understanding the training loss and the generalization error incurred due to the kernel approximation (cf. Algorithm 1). For this purpose, we focus on the ℓ2 loss, for which the kernel regression problem (10) can be solved in closed form. Specifically, we want to minimize the loss:

L(f) = (1/2) E_{x,y∼π(x,y)}[(f(x) − y)²].   (11)

We solve the following kernel ridge regression problem with the choice L̂(α) = (1/2N)‖ȳ − Kα‖²,

min_{α∈R^N} { R̂(α) = (1/2N)‖ȳ − Kα‖² + (λ/2)‖α‖²_K },   (12)

where we denote ȳ = [ȳ₁ᵀ, . . . , ȳ_Mᵀ]ᵀ ∈ R^N with ȳ_m = [y_m^(1), y_m^(2), . . . , y_m^(n)]ᵀ ∈ R^n. The above problem can be solved in closed form with α̂* = [K + N·λ·I]⁻¹ ȳ. The approximated problem at each agent with the kernel K_P and with the loss function L̂_P(α) = (1/2N)‖ȳ − K_P α‖² is

min_{α∈R^N} { R̂_P(α) = (1/2N)‖ȳ − K_P α‖² + (λ/2)‖α‖²_{K_P} },   (13)

with the optimal solution returned by Option I in Algorithm 2 as α̂*_P = [K_P + N·λ·I]⁻¹ ȳ. The goal is to analyze the impact of the approximation on the performance of Algorithm 2. Specifically, we bound the difference between the optimal losses of the exact and the approximated kernel ridge regression. We begin with some assumptions.
Assumption 1. We assume |k(x, x′)| ≤ κ² and |k_P(x, x′)| ≤ κ² for some κ ≥ 1.
Assumption 2. The function g(·) in (5) used to construct the GIP kernel is G-Lipschitz w.r.t. the angle φ, i.e., ∃ G ≥ 0 such that |g(φ, z₂, z₃) − g(φ̂, z₂, z₃)| ≤ G|φ − φ̂|, ∀φ, φ̂ ∈ [0, π] and ∀z₂, z₃ ∈ R.
Assumption 3. We assume that the data labels satisfy |y| ≤ R almost surely for some R > 0.
Assumption 4. There exists f_H ∈ H such that L(f_H) = inf_{h∈H} L(h).
A few remarks are in order. Note that Assumptions 1, 3 and 4 are standard in statistical learning theory (Cucker & Zhou, 2007; Caponnetto & De Vito, 2007; Ben-Hur & Weston, 2010; Rudi & Rosasco, 2017). Moreover, for the RF kernel Assumption 1 is automatically satisfied if |ζ̄(x, ω)| ≤ κ almost surely (Rudi & Rosasco, 2017) (cf. (6) and (9)). Assumption 2 is required for estimating the kernel by approximating the pairwise angles between feature vectors. It is easy to verify that the popular kernels, including the NTK (15), arccosine, Gaussian and polynomial kernels, satisfy Assumption 2 with feature vectors belonging to a compact domain (this ensures that the Lipschitz constant G is independent of the feature vector norms). Now we are ready to present the results.
We analyze how well Algorithm 1 approximates the exact kernel. We are interested in the approximation error as a function of the number of random samples P. We have the following lemma.
Lemma 4.1 (Kernel Approximation). For K_P returned by Algorithm 1, the following holds with probability at least 1 − δ: (i) For the GIP kernel, ‖K − K_P‖ ≤ GN(√((32π²/P) log(2N/δ)) + (8π/(3P)) log(2N/δ)). (ii) Similarly, for the RF kernel, ‖K − K_P‖ ≤ κ²N(√((8/P) log(2N/δ)) + (4/(3P)) log(2N/δ)).
Note that as P increases, K_P → K; in particular, to achieve an approximation error of ε > 0, we need P = O(ε⁻²). Importantly, Lemma 4.1 plays a crucial role in analyzing the optimization performance of the kernel approximation approach. Next, we state the training loss incurred as a consequence of solving the approximate decentralized problem (13) in Algorithm 2 instead of (12).
Theorem 4.2 (Approximation: Optimal Loss). Suppose P ≥ 29 log(2N/δ); then for both the GIP and the RF kernels, the solution returned by Algorithm 2 (Option I) for solving (12) approximately (i.e., (13)) satisfies the following with probability at least 1 − δ:
L̂_P(α̂*_P) − L̂(α̂*) = O(√((1/P) log(2N/δ))) and R̂_P(α̂*_P) − R̂(α̂*) ≤ O(√((1/P) log(2N/δ))).
Theorem 4.2 states that as P increases, the optimal training loss achieved by solving the approximate problem (13) via Algorithm 2 (Option I) will approach the performance of the centralized system (12) for both the GIP and the RF kernels. The proof of the above result utilizes Lemma 4.1 and the definitions of the loss functions in (12) and (13). See Appendix G for a detailed proof.
The results of Lemma 4.1 and Theorem 4.2 characterize the approximation performance of the proposed approximation – optimization framework on a fixed number of training samples. Of course, it is of interest to analyze how the proposed approximation algorithms will perform on unseen test data. Towards this end, it is essential to analyze the performance of the function f̂_P learned from solving (13) via Algorithm 2. We have the following result.
Theorem 4.3 (Generalization performance). Let us choose λ = 1/√N, δ ∈ (0, 1), and N ≥ max{ 4/(3‖K∞‖²), 72κ²√N log(32κ²√N/δ) }; also choose P ≥ max{ 8, 512π²G²/‖K∞‖², 288π²G²N } log(16/δ) for the GIP kernel and P ≥ max{ 8κ², 32κ²/‖K∞‖², 72κ²√N } log(128κ²√N/δ) for the RF kernel, where K∞ is defined in Appendix F. Then with probability at least 1 − δ, we have for f̂_P returned by Algorithm 2 (Option I) for approximately solving (12) (i.e., (13)): L(f̂_P) − inf_{h∈H} L(h) = O(1/√N).
The proof of Theorem 4.3 utilizes a result similar to Lemma 4.1 but for the integral operators defined using the kernels k(·, ·) and k_P(·, ·). Theorem 4.3 states that with appropriate choices of λ (the regularization parameter), N (the number of overall samples), and P (the messages communicated per agent), the proposed algorithm achieves the minimax optimal generalization performance (Caponnetto & De Vito, 2007). Also, note that the requirement of P = O(√N) for the RF kernel compared to P = O(N) for the GIP kernel is due to the particular structure of the RF kernel (cf. (6)). It can be seen from Lemmas H.4 and H.5 in Appendix H that the approximation obtained with the RF kernel allows the derivation of tighter bounds compared to the GIP kernel. The next corollary precisely states the total communication required per agent to achieve this optimal performance.
Corollary 1 (Communication requirements for the GIP and RF kernels). Suppose Algorithm 2 uses the choice of parameters stated in Theorem 4.3 to approximately optimize (12). Then it requires a total of O(N²/M) bits (resp. O(N√N/M) real values) of message exchanges per node when the GIP kernel (resp. the RF kernel) is used, to achieve minimax optimal generalization performance. Moreover, if unnormalized feature vectors are used, then the GIP kernel requires an additional O(N/M) real values of message exchanges per node.
Compared to DKRR-RF-CM (Liu et al., 2021), Decentralized RF (Richards et al., 2020), DKLA, and COKE (Xu et al., 2020), the number of message exchanges required by the proposed algorithm
is independent of the iteration numbers, and it is much less compared to other algorithms, especially for the GIP kernel in the overparameterized regime; see Table 1 for detailed comparisons.
5 EXPERIMENTS
We compare the performance of the proposed algorithm to DKRR-RF-CM (Liu et al., 2021), Decentralized RF (Richards et al., 2020), and DKLA (Xu et al., 2020). We evaluate the performance of all the algorithms on real world datasets from the UCI repository.
Specifically, we present the results on the National Advisory Committee for Aeronautics (NACA) airfoil noise dataset (Lau & López, 2009), where the goal is to predict aircraft noise based on a few measured attributes. The dataset consists of N = 1503 samples that are split equally among M = 10 nodes. Each node utilizes 70% of its data for training and 30% for testing. Each feature vector x_m^(i) ∈ R⁵ represents the measured attributes, such as frequency, angle, etc., and each label y_m^(i) represents the noise level. Additional experiments on different datasets and classification problems, as well as the detailed parameter settings, are included in Appendix A.
We evaluate the performance of all the algorithms with the Gaussian kernel. Note that the algorithms DKRR-RF-CM, Decentralized RF, and DKLA can only be implemented using the RF approach while our proposed algorithm utilizes the GIP kernel. Also, in contrast to these benchmark algorithms that use iterative parameter exchange, the proposed Algorithm 2 uses only one-shot communication. First, in Table 2, we compare the communication required by each algorithm with the Gaussian kernel for P = 100, 500, and 1000 to achieve the same test mean squared error (MSE) for each setting, see last row of Table 2. Note that for P = 100, the communication required by Algorithm 2 is less than 50% of that required by DKLA and Decentralized RF while it is only slightly less than that of DKRR-RF-CM. Moreover, as P increases to 500 and 1000, it can be seen that Algorithm 2 only requires a fraction of communication compared to other algorithms, and this fact demonstrates the utility of the proposed algorithms for over-parameterized learning problems. In Table 3, we compare the averaged MSE achievable by different algorithms, when a fixed total communication budget (in bits) is given for each setting (see the last row of Table 3 for the budget). Note that Algorithm 2 significantly outperforms all the other methods as P increases. This is expected since Algorithm 2 essentially solves a centralized problem (cf. Problem (10)) after the multi-agent kernel approximation (cf. Algorithm 1), and a large P provides a better approximation of the kernel (cf. Lemma 4.1). In contrast, for the parameter sharing based algorithms the performance deteriorates even though the kernel approximation improves with large P as learning a high-dimensional parameter naturally requires more communication rounds as well as a higher communication budget per communication round.
Please note that we also compare the performance of Algorithm 2 with the benchmarking algorithms discussed above for the NTK. We further benchmark the performance of Algorithm 2 against the centralized algorithms for the Gaussian, the Polynomial, and the NTK. However, due to space limitations, we relegate these numerical results to the Appendix A.
ACKNOWLEDGEMENTS
We thank the anonymous reviewers for their valuable comments and suggestions. The work of Prashant Khanduri and Mingyi Hong is supported in part by NSF grant CMMI-1727757, AFOSR grant 19RT0424, ARO grant W911NF-19-1-0247 and Meta research award on “Mathematical modeling and optimization for large-scale distributed systems”. The work of Mingyi Hong is also supported by an IBM Faculty Research award. The work of Jia Liu is supported in part by NSF grants CAREER CNS-2110259, CNS-2112471, CNS-2102233, CCF-2110252, ECCS-2140277, and a Google Faculty Research Award. The work of Hoi-To Wai was supported by CUHK Direct Grant #4055113. | 1. What is the focus and contribution of the paper regarding multi-agent kernel learning?
2. What are the strengths and weaknesses of the proposed random feature-based approach compared to prior works?
3. How does the reviewer assess the significance and novelty of the theoretical results and numerical experiments presented in the paper?
4. What are the concerns regarding privacy considerations in the proposed approach, particularly in Algorithm 2?
5. How does the reviewer evaluate the clarity and impact of the paper's content after the rebuttal? | Summary Of The Paper
Review | Summary Of The Paper
This paper discusses a random feature-based multi-agent kernel learning approach. For both generalized inner-product (GIP) and random feature (RF) kernels, the authors propose, in each agent, to exchange the random feature matrix (instead of the model parameters). By considering the problem of kernel ridge regression, some theoretical results including the kernel matrix approximation error (Lemma 4.1), training (Theorem 4.2), and generalization performance (Theorem 4.3) are obtained in Section 4. Some numerical experiments on UCI datasets are provided in Section 5.
The authors argue (e.g., in Corollary 1) that the proposed approach is more efficient as it requires less communication to achieve min-max optimal generalization performance.
Review
I find it difficult to position this paper in the existing literature on distributed kernel learning.
For example, if the focus is on privacy issues and on not exchanging local data or labels, then more on privacy considerations should be discussed; e.g., in Algorithm 2 it is still required to exchange labels, and the authors mention on page 7 that there are ways to avoid such privacy leakage, but details are deferred to the appendix. If the major contribution is that the proposed method needs less communication, then it is necessary to compare explicitly in which case/regime the proposed method improves previous results such as [Liu et al., 2021]. For the moment, it is not clear, at least to me as a reader, to what extent the results are significant.
I would consider changing my scores if the authors could clarify the major contribution in this paper.
After rebuttal: I thank the authors for their clarification and their efforts in updating the paper. I believe the contribution of this paper is now much clearer and I've updated my score accordingly. |
ICLR | Title
Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach
Abstract
This work develops a novel framework for communication-efficient distributed learning where the models to be learnt are overparameterized. We focus on a class of kernel learning problems (which includes the popular neural tangent kernel (NTK) learning as a special case) and propose a novel multi-agent kernel approximation technique that allows the agents to distributedly estimate the full kernel function, and subsequently perform distributed learning, without directly exchanging any local data or parameters. The proposed framework is a significant departure from the classical consensus-based approaches, because the agents do not exchange problem parameters, and consensus is not required. We analyze the optimization and the generalization performance of the proposed framework for the ℓ2 loss. We show that with M agents and N total samples, when certain generalized inner-product (GIP) kernels (resp. the random features (RF) kernel) are used, each agent needs to communicate O(N²/M) bits (resp. O(N√N/M) real values) to achieve minimax optimal generalization performance. Further, we show that the proposed algorithms can significantly reduce the communication complexity compared with state-of-the-art algorithms, for distributedly training models to fit UCI benchmarking datasets. Moreover, each agent needs to share about 200N/M bits to closely match the performance of the centralized algorithms, and these numbers are independent of the parameter and feature dimensions.
1 INTRODUCTION
Recently, decentralized optimization has become a mainstay of the optimization research. In decentralized optimization, multiple local agents hold small to moderately sized private datasets, and collaborate by iteratively solving their local problems while sharing some information with other agents. Most of the existing decentralized learning algorithms are deeply rooted in classical consensus-based approaches (Tsitsiklis, 1984), where the agents repetitively share the local parameters with each other to reach an optimal consensual solution. However, the recent trend of using learning models in the overparameterized regime with very high-dimensional parameters (He et al., 2016; Vaswani et al., 2017; Fedus et al., 2021) poses a significant challenge to such parameter sharing approaches, mainly because sharing model parameters iteratively becomes excessively expensive as the parameter dimension grows. If the size of local data is much smaller than that of the parameters, perhaps a more efficient way is to directly share the local data. However, this approach raises privacy concerns, and it is rarely used in practice. Therefore, a fundamental question of decentralized learning in the overparameterized regime is:
(Q) For overparameterized learning problems, how can one design decentralized algorithms that achieve the best optimization/generalization performance while exchanging the minimum amount of information?
We partially answer (Q) in the context of distributed kernel learning (Vert et al., 2004). We depart from the popular consensus-based algorithms and propose an optimization framework that does not require the local agents to share model parameters or raw data. We focus on kernel learning because: (i) kernel methods provide an elegant way to model non-linear learning problems with complex data
dependencies as simple linear problems (Vert et al., 2004; Hofmann et al., 2008), and (ii) kernelbased methods can be used to capture the behavior of a fully-trained deep network with large width (Jacot et al., 2018; Arora et al., 2019; 2020).
Distributed implementation of kernel learning problems is challenging. Current state-of-the-art algorithms for kernel learning either rely on sharing raw data among agents or impose restrictions on the number of agents (Zhang et al., 2015; Lin et al., 2017; Koppel et al., 2018; Lin et al., 2020; Hu et al., 2020; Pradhan et al., 2021; Predd et al., 2006). Some recent approaches rely on specific random feature (RF) kernels to alleviate some of the above problems. These algorithms reformulate the (approximate) problem in the parameter domain and solve it by iteratively sharing the (potentially high-dimensional) parameters (Bouboulis et al., 2017; Richards et al., 2020; Xu et al., 2020; Liu et al., 2021). These algorithms suffer from excessive communication overhead, especially in the overparameterized regime where the number of parameters is larger than the data size N. For example, implementing the neural tangent kernel (NTK) with the RF approach requires at least O(N^γ), γ ≥ 2, random features (parameter dimension) using the ReLU activation (Arora et al., 2019; Han et al., 2021)¹. For such problems, in this work, we propose a novel algorithmic framework for decentralized kernel learning. Below, we list the major contributions of our work.
[GIP Kernel for Distributed Approximation] We define a new class of kernels suitable for distributed implementation, Generalized inner-product (GIP) kernel, that is fully characterized by the angle between a pair of feature vectors and their respective norms. Many kernels of practical importance including the NTK can be represented as GIP kernel. Further, we propose a multi-agent kernel approximation method for estimating the GIP and the popular RF kernels at individual agents.
[One-shot and Iterative Scheme] Based on the proposed kernel approximation, we develop two optimization algorithms, where the first one only needs one-shot information exchange, but requires sharing data labels among the agents; the second one needs iterative information exchange, but does not need to share the data labels. A key feature of these algorithms is that neither the raw data features nor the (high-dimensional) parameters are exchanged among agents.
[Performance of the Approximation Framework] We analyze the optimization and the generalization performance of the proposed approximation algorithms for `2 loss. We show that GIP kernel requires communicating O(N2/M) bits and the RF kernel requires communicating O(N p N/M) real values per agent to achieve minimax optimal generalization performance. Importantly, the required communication is independent of the function class and the optimization algorithm. We validate the performance of our approximation algorithms on UCI benchmarking datasets.
In Table 1, we compare the communication requirements of the proposed approach to popular distributed kernel learning algorithms. Specifically, DKRR-CM (Lin et al., 2020) relies on sharing data and is therefore not preferred in practical settings. For the RF kernel, the proposed algorithm outperforms other algorithms in both non-overparameterized and the overparameterized regimes when T > N/M . In the overparameterized regime, the GIP kernel is more communication efficient compared to other algorithms. Finally, note that since our analysis is developed using the multiagent-kernel-approximation, it does not impose any upper bound on the number of agents in the network.
¹To achieve approximation error ε = O(1/√N).
Notations: We use R, R^d, and R^{n×m} to denote the sets of real numbers, the d-dimensional Euclidean space, and real matrices of size n×m, respectively. We use N to denote the set of natural numbers. N(0, Σ) is the multivariate normal distribution with zero mean and covariance Σ. The uniform distribution with support [a, b] is denoted by U[a, b]. ⟨a, b⟩ (resp. ⟨a, b⟩_H) denotes the inner product in Euclidean space (resp. in the Hilbert space H). The inner product defines the usual norms in the corresponding spaces. The norm ‖A‖ of a matrix A denotes the operator norm induced by the ℓ2 vector norm. We denote by [a]_i or [a]^(i) the ith element of a vector a. [A·a]_j^(i) denotes the (i·j)th element of the vector A·a. Moreover, A^(:,i) is the ith column of A and [A]_{mk} is the element corresponding to the mth row and kth column. The notation m ∈ [M] denotes m ∈ {1, ..., M}. Finally, 1[E] is the indicator function of the event E.
2 PROBLEM STATEMENT
Given a probability distribution π(x, y) over X × R, we want to minimize the population loss
L(f) = E_{x,y∼π(x,y)}[ℓ(f(x), y)],  (1)
where x ∈ X ⊂ R^d and y ∈ R denote the features and the labels, respectively. Here, f : X → R is an estimate of the true label y. We consider a distributed system of M agents, with each agent m ∈ [M] having access to a locally available independently and identically distributed (i.i.d.) dataset N_m = {x_m^(i), y_m^(i)}_{i=1}^n with² (x_m^(i), y_m^(i)) ∼ π(x, y). The total number of samples is N = nM. The goal of kernel learning with a kernel function k(·, ·) : X × X → R is to find a function f ∈ H (where H is the reproducing kernel Hilbert space (RKHS) associated with k (Vert et al., 2004)) that minimizes (1). We aim to solve the following (decentralized) empirical risk minimization problem
min_{f∈H} { R̂(f) = L̂(f) + (λ/2)‖f‖²_H = (1/M) Σ_{m=1}^M L̂_m(f) + (λ/2)‖f‖²_H },  (2)
where λ > 0 is the regularization parameter and L̂_m(f) = (1/n) Σ_{i∈N_m} ℓ(f(x_m^(i)), y_m^(i)) is the local loss at each m ∈ [M]. Problem (2) can be reformulated using the Representer theorem (Schölkopf et al., 2002), with L̂_m(α) = (1/n) Σ_{i∈N_m} ℓ([Kα]_m^(i), y_m^(i)), ∀m ∈ [M], as
min_{α∈R^N} { R̂(α) = L̂(α) + (λ/2)‖α‖²_K = (1/M) Σ_{m=1}^M L̂_m(α) + (λ/2)‖α‖²_K },  (3)
where K ∈ R^{N×N} is the kernel matrix with elements k(x_m^(i), x_m̄^(j)), ∀m, m̄ ∈ [M], ∀i ∈ N_m and ∀j ∈ N_m̄. The supervised (centralized) learning problem (3) is a classical problem in statistical learning (Caponnetto & De Vito, 2007) and has been popularized recently due to connections with overparameterized neural network training (Jacot et al., 2018; Arora et al., 2019). An alternate way to solve problem (2) (and (3)) is by parameterizing f in (2) by θ ∈ R^D as f_D(x; θ) = ⟨θ, φ_D(x)⟩, where φ_D : X → R^D is a finite-dimensional feature map. Here, φ_D(·) is designed to approximate k(·, ·) with k_D(x, x′) = ⟨φ_D(x), φ_D(x′)⟩ (Rahimi & Recht, 2008). Using this approximation, problem (2) (and (3)) can be written in the parameter domain, with L̂_{m,D}(θ) = (1/n) Σ_{i∈N_m} ℓ(⟨θ, φ_D(x_m^(i))⟩, y_m^(i)), ∀m ∈ [M], as
min_{θ∈R^D} { R̂_D(θ) = L̂_D(θ) + (λ/2)‖θ‖² = (1/M) Σ_{m=1}^M L̂_{m,D}(θ) + (λ/2)‖θ‖² }.  (4)
Note that (4) is a D-dimensional problem, whereas (3) is an N-dimensional problem. Since (4) is in the standard finite-sum form, it can be solved using standard parameter-sharing decentralized optimization algorithms (e.g., DGD (Richards et al., 2020) or ADMM (Xu et al., 2020)), which share D-dimensional vectors iteratively. However, when (4) is overparameterized with very large D (e.g., D = O(N^γ) with γ ≥ 2 for the NTK), such parameter-sharing approaches are no longer feasible because of the increased communication complexity. An intuitive solution that avoids sharing these high-dimensional parameters is to directly solve (3). However, it is by no means clear if and how one can efficiently solve (3) in a decentralized manner. The key challenge is that, unlike the
2The techniques presented in this work can be easily extended to unbalanced datasets, i.e., when each agent has a dataset of different size.
conventional decentralized learning problems, here each loss term ℓ([Kα]_m^(i), y_m^(i)) is not separable over the agents. Instead, each agent m's local problem depends on k(x_m^(i), x_m̄^(j)) with m ≠ m̄. Importantly, without directly transmitting the data itself (as has been done in Predd et al. (2006); Koppel et al. (2018); Lin et al. (2020)), it is not clear how one can obtain the required (m·i)th element of Kα. Therefore, to develop algorithms that avoid sharing high-dimensional parameters by directly (approximately) solving (3), it is important to identify kernels that are suitable for decentralized implementation and to propose efficient algorithms for learning with such kernels.
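To make this coupling concrete, the short sketch below (our own illustration) builds the kernel matrix blockwise for two agents with a Gaussian kernel; the off-diagonal block needed to form Kα mixes the feature vectors of both agents, which is exactly the information that cannot be computed locally without some exchange.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 3                                   # samples per agent, feature dimension
X1, X2 = rng.standard_normal((n, d)), rng.standard_normal((n, d))   # agents' local data
X = np.vstack([X1, X2])

def gaussian_kernel(A, B, sigma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

K = gaussian_kernel(X, X)                     # full 2n x 2n kernel matrix
alpha = rng.standard_normal(2 * n)

# Agent 1's part of K @ alpha needs the cross block k(x_1^(i), x_2^(j)),
# which depends on agent 2's raw features:
cross = gaussian_kernel(X1, X2)
local_only = gaussian_kernel(X1, X1) @ alpha[:n]          # computable locally
full_row_block = (K @ alpha)[:n]
print(np.allclose(full_row_block, local_only + cross @ alpha[n:]))   # True
```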
3 THE PROPOSED ALGORITHMS
In this section, we define a general class of kernels, referred to as the generalized inner product (GIP) kernels, that are suitable for decentralized overparameterized learning. By focusing on GIP kernels, we aim to understand the best possible decentralized optimization/generalization performance that can be achieved for solving (3). Surprisingly, one of our proposed algorithms only shares O(nN) = O(N²/M) bits of information per node, while achieving the minimax optimal generalization performance. Such an algorithm only requires one round of communication, where the messages transmitted are independent of the actual parameter dimension (i.e., D in problem (4)); further, there is no requirement for achieving consensus among the agents. The proposed algorithm represents a significant departure from the classical consensus-based decentralized learning algorithms. We first define the class of kernels that we will focus on in this work. Definition 3.1. [Generalized inner-product (GIP) kernel] We define a GIP kernel as:
k(x, x′) = g(θ(x, x′), ‖x‖, ‖x′‖),  (5)
where θ(x, x′) = arccos(x^T x′ / (‖x‖‖x′‖)) ∈ [0, π] denotes the angle between the feature vectors x and x′, and g(·, ‖x‖, ‖x′‖) is assumed to be Lipschitz continuous (cf. Assumption 2). Remark 1. Note that the GIP kernel is a generalization of the inner-product kernels (Schölkopf et al., 2002), i.e., kernels of the form k(x, x′) = k(⟨x, x′⟩). Clearly, k(⟨x, x′⟩) can be represented as k(⟨x, x′⟩) = g(θ(x, x′), ‖x‖, ‖x′‖) for some function g(·). Moreover, many kernels of practical interest can be represented as GIP kernels; examples include the NTK (Jacot et al., 2018; Chizat et al., 2019; Arora et al., 2019), arc-cosine (Cho & Saul, 2009), polynomial, Gaussian, Laplacian, sigmoid, and inner-product kernels (Schölkopf et al., 2002).
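As a quick worked illustration of the definition (our own example), the Gaussian kernel k(x, x′) = exp(−‖x − x′‖²/(2σ²)) is a GIP kernel because ‖x − x′‖² = ‖x‖² + ‖x′‖² − 2‖x‖‖x′‖ cos θ(x, x′), so one may take g(θ, ‖x‖, ‖x′‖) = exp(−(‖x‖² + ‖x′‖² − 2‖x‖‖x′‖ cos θ)/(2σ²)); the kernel value depends on the two feature vectors only through their norms and the angle between them.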
The main reason we focus on the GIP kernels for decentralized implementation is that this class of kernels can be fully specified at each agent once the norms of all the feature vectors and the pairwise angles between them are known at that agent. For example, consider the NTK of a single-hidden-layer ReLU neural network: k(x, x′) = x^T x′ (π − θ(x, x′)) / (2π) (Chizat et al., 2019). This kernel can be fully learned with just the knowledge of the norms and the pairwise angles of the feature vectors. For many applications of interest (Bietti & Mairal, 2019; Geifman et al., 2020; Pedregosa et al., 2011), normalized feature vectors are used, and for such problems the GIP kernel at each agent can be computed using only the knowledge of the pairwise angles between the feature vectors. We show in Sec. 3.1 that such kernels can be efficiently estimated by each agent while sharing only a few bits of information. Importantly, the communication requirement for such a kernel estimation procedure is independent of the problem's parameter dimension (i.e., D in (4)), making them suitable for decentralized learning in the overparameterized regime. Next, we define the RF kernel. Definition 3.2. [Random features (RF) kernel] The RF kernel is defined as (Rahimi & Recht, 2008; Rudi & Rosasco, 2017; Li et al., 2019):
k(x, x′) = ∫_{ω∈Ω} ζ̄(x, ω) · ζ̄(x′, ω) dq(ω)  (6)
with (Ω, q) being the probability space and ζ̄ : X × Ω → R. Remark 2. The RF kernel can be approximated as k(x, x′) ≈ k_P(x, x′) = ⟨φ_P(x), φ_P(x′)⟩, with φ_P(x) = (1/√P)[ζ̄(x, ω_1), . . . , ζ̄(x, ω_P)]^T ∈ R^P and {ω_i}_{i=1}^P drawn i.i.d. from the distribution q(ω). A popular example of the RF kernels is the shift-invariant kernels, i.e., kernels of the form k(x, x′) = k(x − x′) (Rahimi & Recht, 2008). The RF kernels generalize the random Fourier features construction (Rudin, 2017) for shift-invariant kernels to general kernels. Besides the shift-invariant kernels, important examples of the RF kernels include the inner-product (Kar & Karnick, 2012) and the homogeneous additive kernels (Vedaldi & Zisserman, 2012).
Algorithm 1 Approximation: Local Kernel Estimation
1: Initialize: distribution p(ω) over the space (Ω, p) and mapping ζ : X × Ω → R (see Section 3.1)
2: for m ∈ [M] do
3:   Draw P i.i.d. random variables ω_i ∈ R^d with ω_i ∼ p(ω) for i = 1, . . . , P
4:   Compute ζ(x_m^(i), ω_j) ∀i ∈ N_m and j ∈ [P]
5:   Construct the matrix A_m ∈ R^{P×n} with the (i, j)th element as ζ(x_m^(i), ω_j)
6:   Communicate A_m to every other agent and receive A_m̄ with m̄ ≠ m from the other agents
7:   If GIP is used and the data is not normalized, then communicate ‖x_m^(i)‖, ∀i ∈ N_m
8:   Estimate the kernel matrix K_P locally using (7) for the GIP and (9) for the RF kernel
9: end for
Next, we propose a multi-agent approximation algorithm to effectively learn the GIP and the RF kernels at each agent, as well as the optimization algorithms to efficiently solve the learning problem. Our proposed algorithms will follow an approximation – optimization strategy, where the agents first exchange some information so that they can locally approximate the full kernel matrix K; then they can independently optimize the resulting approximated local problems. Below we list a number of key design issues arising from implementing such an approximation – optimization strategy:
[Kernel approximation] How to accurately approximate the kernel K, locally at each agent? For example, for the GIP kernels, how to accurately estimate the angles θ(x_m^(i), x_m̄^(j)) at a given agent m, where j ∈ N_m̄ and m̄ ≠ m? This is challenging, especially when raw data sharing is not allowed. [Effective exchange of local information] How shall we design appropriate messages to be exchanged among the agents? The type of messages that get exchanged will depend on the underlying kernel approximation scheme. Therefore, it is critical that the proposed approximation methods are able to utilize as little information from other agents as possible.
[Iterative or one-shot scheme] It is not clear if such an approximation – optimization scheme should be one-shot or iterative – that is, whether it is favourable that the agents iteratively share information and perform local optimization (similar to classical consensus-based algorithms), or they should do it just once. Again, this will be dependent on the underlying information sharing schemes.
Next, we will formally introduce the proposed algorithms. Our presentation follows the approximation – optimization strategy outlined above. We first discuss the proposed decentralized kernel approximation algorithm, followed by two different ways of performing decentralized optimization.
3.1 MULTI-AGENT KERNEL APPROXIMATION
The kernel K is approximated locally at each agent using Algorithm 1. Note that in Step 3, each agent randomly samples {ω_i}_{i=1}^P from the distribution p(ω). This can be easily established via random seed sharing, as in Xu et al. (2020); Richards et al. (2020). In Step 6, each agent shares a locally constructed matrix A_m of size P × n, whose elements ζ(x_m^(i), ω_i) will be defined shortly. The choices of p(ω) and ζ(·, ·) in Step 1 depend on the choice of kernel. Specifically, we have: [Approximation for GIP kernel] For the GIP kernel, we first assume that the feature vectors are normalized (Pedregosa et al., 2011). We then choose p(ω) to be any circularly symmetric distribution; for simplicity we choose p(ω) as N(0, I_d). Moreover, we use ζ(x, ω) = 1[ω^T x ≥ 0] such that A_m is a binary matrix with entries {0, 1}. Note that such matrices are easy to communicate. Next, we approximate the kernel K with K_P as
k(x_m^(i), x_m̄^(j)) ≈ k_P(x_m^(i), x_m̄^(j)) = g(θ_P(x_m^(i), x_m̄^(j)), ‖x_m^(i)‖, ‖x_m̄^(j)‖),  (7)
where k(x_m^(i), x_m̄^(j)) and k_P(x_m^(i), x_m̄^(j)), ∀i ∈ N_m, ∀m ∈ [M], ∀j ∈ N_m̄, and ∀m̄ ∈ [M], are the individual elements of K and K_P, respectively, and θ_P(x_m^(i), x_m̄^(j)) is an approximation of the angle θ(x_m^(i), x_m̄^(j)) evaluated using A_m, A_m̄ as
θ(x_m^(i), x_m̄^(j)) ≈ θ_P(x_m^(i), x_m̄^(j)) = π − 2π [A_m^(:,i)]^T [A_m̄^(:,j)] / P,  (8)
Algorithm 2 Optimization: One-Shot Communication for Kernel Learning
1: Initialize: α_m^1 ∈ R^N, step-sizes {η_m^t}_{t=1}^{T_m} at each agent m ∈ [M]
2: for m ∈ [M] do
3:   Using Algorithm 1 construct K_P
4:   Communicate ȳ_m = [y_m^(1), . . . , y_m^(n)]^T ∈ R^n
5:   Using K_P and ȳ_m construct L̂_P(α) (cf. (10)) locally using L̂_{m,P}(α)
6:   Option I: Solve (10) exactly at each agent
7:   Option II: Solve (10) inexactly using GD at each agent
8:   for t = 1 to T_m
9:     GD Update: α_m^{t+1} = α_m^t − η_m^t ∇R̂_P(α_m^t)
10:   end for
11: end for
12: Return: α_m^{T+1} for all m ∈ [M]
This implies that K can be approximated for the GIP kernel by communicating only nP bits of information per agent. Note that in the general case, if the feature vectors are not normalized, then (7) can be evaluated by communicating an additional n real values (the norms of the feature vectors) per agent; see Step 7 in Algorithm 1.
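The sketch below is a minimal single-machine simulation of the GIP branch of Algorithm 1 (our own code, not the authors'): it generates the binary sign features, estimates the pairwise angles via (8), and plugs them into the single-hidden-layer ReLU NTK for normalized features.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, M, P = 8, 20, 3, 4000                  # feature dim, samples/agent, agents, random draws

# Normalized local features for each agent.
X = [rng.standard_normal((n, d)) for _ in range(M)]
X = [x / np.linalg.norm(x, axis=1, keepdims=True) for x in X]

omega = rng.standard_normal((P, d))          # shared via a common random seed
A = [(omega @ x.T >= 0).astype(np.uint8) for x in X]   # P x n binary matrix per agent

# Each agent gathers all A_m's and estimates the pairwise angles via (8).
A_all = np.hstack(A)                         # P x N, with N = n*M
theta_hat = np.pi - 2 * np.pi * (A_all.T @ A_all.astype(float)) / P
theta_hat = np.clip(theta_hat, 0.0, np.pi)

def g_relu_ntk(theta):                       # GIP form of the ReLU NTK for normalized inputs
    return np.cos(theta) * (np.pi - theta) / (2 * np.pi)

K_P = g_relu_ntk(theta_hat)                  # approximate kernel, built locally

# Exact kernel for comparison.
X_full = np.vstack(X)
theta = np.arccos(np.clip(X_full @ X_full.T, -1.0, 1.0))
K = g_relu_ntk(theta)
print(np.linalg.norm(K - K_P, 2))            # operator-norm error shrinks as P grows
```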
[Approximation for RF kernel] For the RF kernel, we choose ζ(·, ·) = ζ̄(·, ·) and p(ω) = q(ω) as defined in (6), and approximate K with K_P as
k(x_m^(i), x_m̄^(j)) ≈ k_P(x_m^(i), x_m̄^(j)) = ⟨φ_P(x_m^(i)), φ_P(x_m̄^(j))⟩,  (9)
where k(x_m^(i), x_m̄^(j)) and k_P(x_m^(i), x_m̄^(j)) are elements of K and K_P, respectively, φ_P(x_m^(i)) = (1/√P)[A_m^(:,i)] and φ_P(x_m̄^(j)) = (1/√P)[A_m̄^(:,j)]. Note that K can be approximated for the RF kernel by sharing only nP real values per agent. Further, the distribution q(ω) and the mapping ζ̄(·, ·) depend on the type of RF kernel used. For example, for shift-invariant kernels with random Fourier features, we can choose ζ̄(x, ω) = √2 cos(ω^T x + b) with ω ∼ q(ω) and b ∼ U[0, 2π] (Rahimi & Recht, 2008).
Now that using Algorithm 1 we have approximated the kernel matrix at all the agents, we are ready to solve (3) approximately.
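For concreteness, here is a minimal random Fourier feature construction for the Gaussian kernel (our own sketch of the RF branch of Algorithm 1; the bandwidth and sizes are arbitrary). The P × n matrix A_m built this way is exactly what an agent would share.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, P, sigma = 5, 50, 2000, 1.0
X = rng.standard_normal((n, d))              # one agent's local features

# Random Fourier features for k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)):
# zeta(x, w) = sqrt(2) * cos(w^T x + b),  w ~ N(0, I / sigma^2),  b ~ U[0, 2*pi].
omega = rng.standard_normal((P, d)) / sigma
b = rng.uniform(0.0, 2 * np.pi, size=P)
A_m = np.sqrt(2.0) * np.cos(omega @ X.T + b[:, None])   # P x n matrix to be shared

Phi = A_m / np.sqrt(P)                       # columns are phi_P(x)
K_P = Phi.T @ Phi                            # approximate Gaussian kernel block

sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * sigma ** 2))           # exact kernel for comparison
print(np.abs(K - K_P).max())                 # decreases roughly like 1/sqrt(P)
```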
3.2 THE DECENTRALIZED OPTIMIZATION STEP
The approximated kernel regression problem (3), with K_P obtained using Algorithm 1 and local loss L̂_{m,P}(α) := (1/n) Σ_{i∈N_m} ℓ([K_P α]_m^(i), y_m^(i)), is
min_{α∈R^N} { R̂_P(α) = L̂_P(α) + (λ/2)‖α‖²_{K_P} = (1/M) Σ_{m=1}^M L̂_{m,P}(α) + (λ/2)‖α‖²_{K_P} }.  (10)
Remark 3. For the approximate problem (10), we would want K_P constructed using the multi-agent kernel approximation approach to be positive semi-definite (PSD), i.e., the kernel function k_P(·, ·) is a positive definite (PD) kernel. From the definition of the approximate RF kernel (9), it is easy to verify that it is PD. However, it is not clear if the approximated GIP kernel is PD. Certainly, for the GIP kernel we expect that as P → ∞ we have K_P → K, i.e., asymptotically K_P is PSD, since K is PSD. In the Appendix, we introduce a sufficient condition (Assumption 6) that ensures K_P to be PSD for the GIP kernel. In the following, for simplicity we assume K_P is PSD.
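Whether a finite-P GIP approximation K_P is numerically PSD can be checked directly; a minimal sanity check (our own, reusing the GIP sketch above) looks at the smallest eigenvalue.

```python
import numpy as np

def min_eig(K_P, tol=1e-8):
    """Smallest eigenvalue of the symmetrized kernel estimate; values below -tol
    suggest the finite-P GIP approximation is not PSD and that a small diagonal
    shift (or a larger P) may be needed before solving (10)."""
    lam_min = np.linalg.eigvalsh((K_P + K_P.T) / 2).min()
    return lam_min, lam_min >= -tol

# Usage (K_P built as in the earlier GIP sketch):
# lam, ok = min_eig(K_P); print(lam, ok)
```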
Decentralized optimization based on one-shot communication: In this setting, we share the information among all the agents in one shot; then each agent learns its corresponding minimizer using the gathered information. We assume that each agent can communicate with every other agent, either in a decentralized manner or via a central server, before initialization. This is a common assumption in distributed learning with RF kernels, where the agents need to share random seeds before initialization to determine the approximate feature mapping (Richards et al., 2020; Xu et al., 2020). Here, consensus is not enforced, as each agent can learn a local minimizer which has a good global property. The label information is also exchanged among all the agents. In Algorithm 2, we list the steps of the algorithm. In Step 3, the agents learn K_P (the local estimate of the kernel matrix) using Algorithm 1. In Step 4, the agents share the labels ȳ_m so that each agent can (approximately) reconstruct the loss L̂(α) (cf. (10)) locally. Then each agent can choose either Option I or Option II to solve (10). A few important properties of Algorithm 2 are: [Communication] Each agent communicates a total of O(nP) = O(NP/M) bits (with an additional N/M real values if the norms also need to be transmitted) for the GIP kernel, and O(NP/M) real values for the RF kernels. Importantly, for the GIP kernel the communication is independent of the parameter dimension, making it suitable for decentralized overparameterized learning problems; see Table 1 for a comparison with other approaches.
[No consensus needed] Each agent executes Algorithm 2 independently to learn α_m, without needing to reach any kind of consensus. The agents are free to choose different initializations, step-sizes, and even regularizers (i.e., λ in (10)). In contrast to classical decentralized learning, where algorithms are designed to guarantee consensus (Koppel et al., 2018; Richards et al., 2020; Xu et al., 2020), our algorithms allow each agent to learn a different function.
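A minimal single-machine simulation of the per-agent computation in Algorithm 2 (our own sketch, shown for the ℓ2 loss analyzed in Section 4) is given below: once K_P and the stacked labels ȳ are available locally, Option I is a linear solve and Option II is plain gradient descent on (10).

```python
import numpy as np

def agent_solve(K_P, y_bar, lam, option="II", lr=None, T=500):
    """Local optimization step of Algorithm 2 for the l2 loss:
    R_P(alpha) = ||y_bar - K_P alpha||^2 / (2N) + (lam / 2) * alpha^T K_P alpha."""
    N = len(y_bar)
    if option == "I":                                   # closed form (Option I)
        return np.linalg.solve(K_P + N * lam * np.eye(N), y_bar)
    alpha = np.zeros(N)                                 # Option II: gradient descent
    if lr is None:                                      # step size from a gradient-Lipschitz bound
        op = np.linalg.norm(K_P, 2)
        lr = 1.0 / (op ** 2 / N + lam * op)
    for _ in range(T):
        grad = K_P @ (K_P @ alpha - y_bar) / N + lam * (K_P @ alpha)
        alpha -= lr * grad
    return alpha

# Usage sketch: every agent runs the same call on its own copy of K_P and y_bar;
# no consensus step is needed.
# alpha_m = agent_solve(K_P, y_bar, lam=1.0 / np.sqrt(len(y_bar)), option="II")
```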
The proposed framework relies on sharing matrices Am’s that are random functions of the local features. Note that problem (10) can also be solved by using an iterative distributed gradient tracking algorithm (Qu & Li, 2018), with the benefit that no label sharing is needed; see Appendix D. Remark 4 (Optimization performance). Note that using Algorithm 2, we can solve the approximate problem (10) to arbitrary accuracy using either Option I or Option II. However, it is by no means clear if the solution obtained by Algorithm 2 will be close to the solution of (3). Therefore, after problem (10) is solved, it is important to understand how close the solutions returned by Algorithm 2 are to the original kernel regression problem (3).
4 MAIN RESULTS
In this section, we analyze the performance of Algorithm 2. Specifically, we are interested in understanding the training loss and the generalization error incurred due to the kernel approximation (cf. Algorithm 1). For this purpose, we focus on the ℓ2 loss, for which the kernel regression problem (10) can be solved in closed form. Specifically, we want to minimize the loss:
L(f) = (1/2) E_{x,y∼π(x,y)}[(f(x) − y)²].  (11)
We solve the following kernel ridge regression problem with the choice L̂(α) = (1/2N)‖ȳ − Kα‖²,
min_{α∈R^N} { R̂(α) = (1/2N)‖ȳ − Kα‖² + (λ/2)‖α‖²_K }  (12)
where we denote ȳ = [ȳ_1^T, . . . , ȳ_M^T]^T ∈ R^N with ȳ_m = [y_m^(1), y_m^(2), . . . , y_m^(n)]^T ∈ R^n. The above problem can be solved in closed form with α̂* = [K + N·λ·I]^{-1} ȳ. The approximated problem at each agent, with the kernel K_P and the loss function L̂_P(α) = (1/2N)‖ȳ − K_P α‖², is
min_{α∈R^N} { R̂_P(α) = (1/2N)‖ȳ − K_P α‖² + (λ/2)‖α‖²_{K_P} }  (13)
with the optimal solution returned by Option I in Algorithm 2 as α̂*_P = [K_P + N·λ·I]^{-1} ȳ. The goal is to analyze the impact of the approximation on the performance of Algorithm 2. Specifically, we bound the difference between the optimal losses of the exact and the approximated kernel ridge regression. We begin with some assumptions. Assumption 1. We assume |k(x, x′)| ≤ κ² and |k_P(x, x′)| ≤ κ² for some κ ≥ 1. Assumption 2. The function g(·) in (5) used to construct the GIP kernel is G-Lipschitz w.r.t. θ, i.e., there exists G ≥ 0 such that |g(θ, z_2, z_3) − g(θ̂, z_2, z_3)| ≤ G|θ − θ̂|, ∀θ, θ̂ ∈ [0, π] and ∀z_2, z_3 ∈ R. Assumption 3. We assume that the data labels satisfy |y| ≤ R almost surely for some R > 0. Assumption 4. There exists f_H ∈ H such that L(f_H) = inf_{h∈H} L(h).
A few remarks are in order. Note that Assumptions 1, 3 and 4 are standard in statistical learning theory (Cucker & Zhou, 2007; Caponnetto & De Vito, 2007; Ben-Hur & Weston, 2010; Rudi & Rosasco, 2017). Moreover, for the RF kernel, Assumption 1 is automatically satisfied if |ζ(x, ω)| ≤ κ almost surely (Rudi & Rosasco, 2017) (cf. (6) and (9)). Assumption 2 is required for estimating the kernel by approximating the pairwise angles between feature vectors. It is easy to verify that popular kernels, including the NTK (15), arc-cosine, Gaussian, and polynomial kernels, satisfy Assumption 2 with feature vectors belonging to a compact domain (this ensures that the Lipschitz constant G is independent of the feature vector norms). Now we are ready to present the results.
We analyze how well Algorithm 1 approximates the exact kernel. We are interested in the approximation error as a function of the number of random samples P. We have the following lemma. Lemma 4.1 (Kernel Approximation). For K_P returned by Algorithm 1, the following holds with probability at least 1 − δ: (i) For the GIP kernel, ‖K − K_P‖ ≤ GN(√((32π²/P) log(2N/δ)) + (8π/(3P)) log(2N/δ)). (ii) Similarly, for the RF kernel, ‖K − K_P‖ ≤ κ²N(√((8/P) log(2N/δ)) + (4/(3P)) log(2N/δ)).
Note that as P increases, K_P → K; in particular, to achieve an approximation error of ε > 0, we need P = O(ε^{-2}). Importantly, Lemma 4.1 plays a crucial role in analyzing the optimization performance of the kernel approximation approach. Next, we state the training loss incurred as a consequence of solving the approximate decentralized problem (13) in Algorithm 2 instead of (12). Theorem 4.2 (Approximation: Optimal Loss). Suppose P ≥ 29 log(2N/δ); then for both the GIP and the RF kernels, the solution returned by Algorithm 2 (Option I) for solving (12) approximately (i.e., (13)) satisfies the following with probability at least 1 − δ:
L̂_P(α̂*_P) − L̂(α̂*) = O(√((1/P) log(2N/δ))) and R̂_P(α̂*_P) − R̂(α̂*) ≤ O(√((1/P) log(2N/δ))).
Theorem 4.2 states that as P increases, the optimal training loss achieved by solving approximate problem (13) via Algorithm 2 (Option I) will approach the performance of the centralized system (12) for both the GIP and the RF kernels. The proof of the above result utilizes Lemma 4.1 and the definition of the loss functions in (12) and (13). See Appendix G for a detailed proof.
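Both statements are easy to probe numerically; the sketch below (our own check, reusing the GIP pieces from Section 3.1) tracks the kernel error ‖K − K_P‖ and the gap between the optimal ridge objectives as P grows.

```python
import numpy as np

def ridge_objective(K, alpha, y, lam):
    N = len(y)
    return np.sum((y - K @ alpha) ** 2) / (2 * N) + 0.5 * lam * alpha @ K @ alpha

def check(K, K_P, y, lam):
    N = len(y)
    a_star = np.linalg.solve(K + N * lam * np.eye(N), y)     # exact problem (12)
    a_hat = np.linalg.solve(K_P + N * lam * np.eye(N), y)    # approximate problem (13)
    kernel_err = np.linalg.norm(K - K_P, 2)
    loss_gap = ridge_objective(K_P, a_hat, y, lam) - ridge_objective(K, a_star, y, lam)
    return kernel_err, loss_gap

# Usage sketch: build K and K_P for increasing P (as in the GIP example above),
# draw labels y, and watch both quantities shrink roughly like 1/sqrt(P):
# for P in [100, 400, 1600, 6400]:
#     ...rebuild K_P with P random draws...
#     print(P, check(K, K_P, y, lam=1.0 / np.sqrt(len(y))))
```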
The results of Lemma 4.1 and Theorem 4.2 characterize the approximation performance of the proposed approximation – optimization framework on a fixed number of training samples. Of course, it is of interest to analyze how the proposed approximation algorithms will perform on unseen test data. Towards this end, it is essential to analyze the performance of the function f̂_P learned from solving (13) via Algorithm 2. We have the following result. Theorem 4.3 (Generalization performance). Let us choose λ = 1/√N and δ ∈ (0, 1), let N ≥ max{ 4/(3‖K‖²), 72κ²√N log(32κ²√N/δ) }, and choose P ≥ max{ 8, 512π²G²/‖K‖², 288π²G²N } log(16/δ) for the GIP kernel and P ≥ max{ 8κ², 32κ²/‖K‖², 72κ²√N } log(128κ²√N/δ) for the RF kernel, where K is defined in Appendix F. Then with probability at least 1 − δ, we have for f̂_P returned by Algorithm 2 (Option I) for approximately solving (12) (i.e., (13)): L(f̂_P) − inf_{h∈H} L(h) = O(1/√N).
The proof of Theorem 4.3 utilizes a result similar to Lemma 4.1, but for the integral operators defined using the kernels k(·, ·) and k_P(·, ·). Theorem 4.3 states that with an appropriate choice of λ (the regularization parameter), N (the number of overall samples), and P (the number of messages communicated per agent), the proposed algorithm achieves the minimax optimal generalization performance (Caponnetto & De Vito, 2007). Also, note that the requirement of P = O(√N) for the RF kernel, compared to P = O(N) for the GIP kernel, is due to the particular structure of the RF kernel (cf. (6)). It can be seen from Lemmas H.4 and H.5 in Appendix H that the approximation obtained with the RF kernel allows the derivation of tighter bounds compared to the GIP kernel. The next corollary precisely states the total communication required per agent to achieve this optimal performance. Corollary 1 (Communication requirements for the GIP and RF kernels). Suppose Algorithm 2 uses the choice of parameters stated in Theorem 4.3 to approximately optimize (12). Then it requires a total of O(N²/M) bits (resp. O(N√N/M) real values) of message exchanges per node when the GIP kernel (resp. the RF kernel) is used, to achieve minimax optimal generalization performance. Moreover, if unnormalized feature vectors are used, then the GIP kernel requires an additional O(N/M) real values of message exchanges per node. Compared to DKRR-RF-CM (Liu et al., 2021), Decentralized RF (Richards et al., 2020), DKLA, and COKE (Xu et al., 2020), the number of message exchanges required by the proposed algorithm
is independent of the number of iterations and is much smaller than that of the other algorithms, especially for the GIP kernel in the overparameterized regime; see Table 1 for detailed comparisons.
5 EXPERIMENTS
We compare the performance of the proposed algorithm to DKRR-RF-CM (Liu et al., 2021), Decentralized RF (Richards et al., 2020), and DKLA (Xu et al., 2020). We evaluate the performance of all the algorithms on real world datasets from the UCI repository.
Specifically, we present the results on the National Advisory Committee for Aeronautics (NACA) airfoil noise dataset (Lau & López, 2009), where the goal is to predict aircraft noise from a few measured attributes. The dataset consists of N = 1503 samples that are split equally among M = 10 nodes. Each node utilizes 70% of its data for training and 30% for testing. Each feature vector x_m^(i) ∈ R⁵ represents the measured attributes (e.g., frequency and angle of attack), and each label y_m^(i) represents the noise level. Additional experiments on different datasets and classification problems, as well as the detailed parameter settings, are included in Appendix A.
We evaluate the performance of all the algorithms with the Gaussian kernel. Note that the algorithms DKRR-RF-CM, Decentralized RF, and DKLA can only be implemented using the RF approach, while our proposed algorithm utilizes the GIP kernel. Also, in contrast to these benchmark algorithms that use iterative parameter exchange, the proposed Algorithm 2 uses only one-shot communication. First, in Table 2, we compare the communication required by each algorithm with the Gaussian kernel for P = 100, 500, and 1000 to achieve the same test mean squared error (MSE) for each setting; see the last row of Table 2. Note that for P = 100, the communication required by Algorithm 2 is less than 50% of that required by DKLA and Decentralized RF, while it is only slightly less than that of DKRR-RF-CM. Moreover, as P increases to 500 and 1000, Algorithm 2 requires only a fraction of the communication of the other algorithms, which demonstrates the utility of the proposed algorithms for over-parameterized learning problems. In Table 3, we compare the average MSE achieved by the different algorithms when a fixed total communication budget (in bits) is given for each setting (see the last row of Table 3 for the budget). Note that Algorithm 2 significantly outperforms all the other methods as P increases. This is expected, since Algorithm 2 essentially solves a centralized problem (cf. Problem (10)) after the multi-agent kernel approximation (cf. Algorithm 1), and a larger P provides a better approximation of the kernel (cf. Lemma 4.1). In contrast, for the parameter-sharing algorithms the performance deteriorates even though the kernel approximation improves with large P, since learning a high-dimensional parameter naturally requires more communication rounds as well as a higher communication budget per round.
Please note that we also compare the performance of Algorithm 2 with the benchmarking algorithms discussed above for the NTK. We further benchmark the performance of Algorithm 2 against the centralized algorithms for the Gaussian, the Polynomial, and the NTK. However, due to space limitations, we relegate these numerical results to the Appendix A.
ACKNOWLEDGEMENTS
We thank the anonymous reviewers for their valuable comments and suggestions. The work of Prashant Khanduri and Mingyi Hong is supported in part by NSF grant CMMI-1727757, AFOSR grant 19RT0424, ARO grant W911NF-19-1-0247 and Meta research award on “Mathematical modeling and optimization for large-scale distributed systems”. The work of Mingyi Hong is also supported by an IBM Faculty Research award. The work of Jia Liu is supported in part by NSF grants CAREER CNS-2110259, CNS-2112471, CNS-2102233, CCF-2110252, ECCS-2140277, and a Google Faculty Research Award. The work of Hoi-To Wai was supported by CUHK Direct Grant #4055113. | 1. What is the primary contribution of the paper regarding multi-agent optimization and random feature approximations?
2. What are the strengths of the proposed technique in terms of generalization performance, and how does it compare to other decentralized methods for RKHS optimization?
3. How does the use of the generalized inner-product kernel contribute to privacy preservation, and is sharing the matrix A more communication efficient than sharing local kernelized function evaluations?
4. What are some concerns regarding Assumption 2's Lipschitz continuity assumption, and how does it relate to the presence of norms in the denominator in equation (4)?
5. How does the convergence theory presented in the paper differ from or improve upon previous discussions in the introduction for multi-agent optimization over RKHS?
6. Why is it difficult to assess whether the results of the paper sharpen the state of the art in any meaningful way without comparing against other decentralized methods experimentally?
7. Are there any minor comments or suggestions for rephrasing certain sentences or clarifying statements in the paper? | Summary Of The Paper
Review | Summary Of The Paper
This work considers multi-agent optimization over RKHS together with random feature approximations. The crux is the development of two different techniques for decentralized computation through a novel introduction of a generalized inner-product kernel. Convergence analysis and numerical validation are provided.
Review
The main technical contribution in my reading is establishing minimax optimal generalization performance of the proposed technique. My main curiosity comes from whether the strength of this convergence result is through simply applying a stronger analysis technique, or if there is something specific about the derived algorithm that permits this strong result? Put another way, is the generalization performance an artifact of the analysis or a result of a carefully calibrated algorithm? This point is obscured in the current writing.
One of the technical innovations of this work is the use of the generalized inner-product kernel. This subsumes a number of common choices such as polynomial (polynimal ?), Gaussian, Laplace, sigmoid, etc. This is critical to obtaining a degree of privacy preservation as it allows agents to only require sharing knowledge of pairwise angles between feature vectors.
Is sharing the matrix A_m actually more communication efficient than sharing local kernelized function evaluations? This seems like it would actually require more network throughput. Put another way, sharing the parametric representation of each agent's local function is computationally costly, and potentially worse than simply sharing estimates of the local label/target variable. In that way, Algorithm 2 seems much more practical, even if it requires label exchange.
In Assumption 2, I am a little concerned about the Lipschitz continuity assumption because of the presence of the norms in the denominator in equation (4). Can we be sure that this is independent of z_2, z_3? Some comment about this seems warranted.
In what sense is the convergence theory presented here stronger or sharper than that which is discussed in the introduction for multi-agent optimization over RKHS? Such a granular contrastive discussion is missing from Section 4.
Along the lines of the previous comment, the authors have not compared against any of the other decentralized methods for RKHS optimization experimentally, which makes it difficult to assess whether these results actually sharpen the state of the art in any meaningful way.
Minor comments:
``However this approach raises privacy concerns, thus almost never being used in practice." Is awkward syntax. Consider rephrasing.
Also, the main boldface question at the top of page 2 is a sentence fragment.
The statement "where the first one only needs one-shot information exchange, but requires sharing data labels among the agents; the second one needs iterative information exchange, but does not need to share the data labels." Is not accurate in the sense that if label exchange is required, then this is mathematically equivalent to data exchange, i.e., realizations of random variables are required... Therefore it is suspicious to state this and immediately afterward state that raw data exchange is not required. |
ICLR | Title
Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach
Abstract
This work develops a novel framework for communication-efficient distributed learning where the models to be learnt are overparameterized. We focus on a class of kernel learning problems (which includes the popular neural tangent kernel (NTK) learning as a special case) and propose a novel multi-agent kernel approximation technique that allows the agents to distributedly estimate the full kernel function, and subsequently perform distributed learning, without directly exchanging any local data or parameters. The proposed framework is a significant departure from the classical consensus-based approaches, because the agents do not exchange problem parameters, and consensus is not required. We analyze the optimization and the generalization performance of the proposed framework for the `2 loss. We show that with M agents and N total samples, when certain generalized inner-product (GIP) kernels (resp. the random features (RF) kernel) are used, each agent needs to communicate O N 2 /M bits (resp. O N p N/M real values) to achieve minimax optimal generalization performance. Further, we show that the proposed algorithms can significantly reduce the communication complexity compared with state-of-the-art algorithms, for distributedly training models to fit UCI benchmarking datasets. Moreover, each agent needs to share about 200N/M bits to closely match the performance of the centralized algorithms, and these numbers are independent of parameter and feature dimension.
1 INTRODUCTION
Recently, decentralized optimization has become a mainstay of the optimization research. In decentralized optimization, multiple local agents hold small to moderately sized private datasets, and collaborate by iteratively solving their local problems while sharing some information with other agents. Most of the existing decentralized learning algorithms are deeply rooted in classical consensus-based approaches (Tsitsiklis, 1984), where the agents repetitively share the local parameters with each other to reach an optimal consensual solution. However, the recent trend of using learning models in the overparameterized regime with very high-dimensional parameters (He et al., 2016; Vaswani et al., 2017; Fedus et al., 2021) poses a significant challenge to such parameter sharing approaches, mainly because sharing model parameters iteratively becomes excessively expensive as the parameter dimension grows. If the size of local data is much smaller than that of the parameters, perhaps a more efficient way is to directly share the local data. However, this approach raises privacy concerns, and it is rarely used in practice. Therefore, a fundamental question of decentralized learning in the overparameterized regime is:
(Q) For overparameterized learning problems, how to design decentralized algorithms that achieve the best optimization/generalization performance by exchanging minimum amount of information?
We partially answer (Q) in the context of distributed kernel learning (Vert et al., 2004). We depart from the popular consensus-based algorithms and propose an optimization framework that does not require the local agents to share model parameters or raw data. We focus on kernel learning because: (i) kernel methods provide an elegant way to model non-linear learning problems with complex data
dependencies as simple linear problems (Vert et al., 2004; Hofmann et al., 2008), and (ii) kernelbased methods can be used to capture the behavior of a fully-trained deep network with large width (Jacot et al., 2018; Arora et al., 2019; 2020).
Distributed implementation of kernel learning problems is challenging. Current state-of-the-art algorithms for kernel learning either rely on sharing raw data among agents and/or imposing restrictions on the number of agents (Zhang et al., 2015; Lin et al., 2017; Koppel et al., 2018; Lin et al., 2020; Hu et al., 2020; Pradhan et al., 2021; Predd et al., 2006). Some recent approaches rely on specific random feature (RF) kernels to alleviate some of the above problems. These algorithms reformulate the (approximate) problem in the parameter domain and solve it by iteratively sharing the (potentially high-dimensional) parameters (Bouboulis et al., 2017; Richards et al., 2020; Xu et al., 2020; Liu et al., 2021). These algorithms suffer from excessive communication overhead, especially in the overparameterized regime where the number of parameters is larger than the data size N . For example, implementing the neural tangent kernel (NTK) with RF kernel requires at least O(N ), 2, random features (parameter dimension) using ReLU activation (Arora et al., 2019; Han et al., 2021)1. For such problems, in this work, we propose a novel algorithmic framework for decentralized kernel learning. Below, we list the major contributions of our work.
[GIP Kernel for Distributed Approximation] We define a new class of kernels suitable for distributed implementation, Generalized inner-product (GIP) kernel, that is fully characterized by the angle between a pair of feature vectors and their respective norms. Many kernels of practical importance including the NTK can be represented as GIP kernel. Further, we propose a multi-agent kernel approximation method for estimating the GIP and the popular RF kernels at individual agents.
[One-shot and Iterative Scheme] Based on the proposed kernel approximation, we develop two optimization algorithms, where the first one only needs one-shot information exchange, but requires sharing data labels among the agents; the second one needs iterative information exchange, but does not need to share the data labels. A key feature of these algorithms is that neither the raw data features nor the (high-dimensional) parameters are exchanged among agents.
[Performance of the Approximation Framework] We analyze the optimization and the generalization performance of the proposed approximation algorithms for `2 loss. We show that GIP kernel requires communicating O(N2/M) bits and the RF kernel requires communicating O(N p N/M) real values per agent to achieve minimax optimal generalization performance. Importantly, the required communication is independent of the function class and the optimization algorithm. We validate the performance of our approximation algorithms on UCI benchmarking datasets.
In Table 1, we compare the communication requirements of the proposed approach to popular distributed kernel learning algorithms. Specifically, DKRR-CM (Lin et al., 2020) relies on sharing data and is therefore not preferred in practical settings. For the RF kernel, the proposed algorithm outperforms other algorithms in both non-overparameterized and the overparameterized regimes when T > N/M . In the overparameterized regime, the GIP kernel is more communication efficient compared to other algorithms. Finally, note that since our analysis is developed using the multiagent-kernel-approximation, it does not impose any upper bound on the number of agents in the network.
1To achieve approximation error ✏ = O(1/ p N).
Notations: We use R, Rd, and Rn⇥m to denote the sets of real numbers, d-dimensional Euclidean space, and real matrices of size n⇥m, respectively. We use N to denote the set of natural numbers. N (0,⌃) is multivariate normal distribution with zero mean and covariance ⌃. Uniform distribution with support [a, b] is denoted by U [a, b]. ha, bi (resp. ha, biH) denotes the inner-product in Euclidean space (resp. Hilbert space H). The inner-product defines the usual norms in corresponding spaces. Norm kAk of matrix A denotes the operator norm induced by `2 vector norm. We denote by [a]i or [a](i) the ith element of a vector a. [A · a](i)
j denotes the (i · j)th element of vector A · a. Moreover,
A (:,i) is the ith column of A and [A]mk is the element corresponding to mth row and kth column. Notation m 2 [M ] denotes m 2 {1, ..,M}. Finally, [E] is the indicator function of event E.
2 PROBLEM STATEMENT
Given a probability distribution π(x, y) over X × R, we want to minimize the population loss
L(f) = E_{x,y∼π(x,y)}[ℓ(f(x), y)],  (1)
where x 2 X ⇢ Rd and y 2 R denote the features and the labels, respectively. Here, f : X ! R is an estimate of the true label y. We consider a distributed system of M agents, with each agent m 2 [M ] having access to a locally available independently and identically distributed (i.i.d) dataset Nm = {x(i)m , y(i)m }ni=1 with2 (x (i) m , y (i) m ) ⇠ ⇡(x, y). The total number of samples is N = nM . The goal of kernel learning with kernel function, k(·, ·) : X ⇥ X ! R, is to find a function f 2 H (where H is the reproducing kernel Hilbert space (RKHS) associated with k (Vert et al., 2004)) that minimizes (1). We aim to solve the following (decentralized) empirical risk minimization problem
min_{f∈H} { R̂(f) = L̂(f) + (λ/2)‖f‖²_H = (1/M) Σ_{m=1}^M L̂_m(f) + (λ/2)‖f‖²_H },  (2)
where > 0 is the regularization parameter and L̂m(f) = 1n P i2Nm `(f(x(i)m ), y (i) m ) is the local loss at each m 2 [M ]. Problem (2) can be reformulated using the Representer theorem (Schölkopf et al., 2002) with L̂m(↵) = 1n P i2Nm ` ⇥ K↵ ⇤(i) m , y (i) m , 8m 2 [M ], as
min_{α∈R^N} { R̂(α) = L̂(α) + (λ/2)‖α‖²_K = (1/M) Σ_{m=1}^M L̂_m(α) + (λ/2)‖α‖²_K },  (3)
where K 2 RN⇥N is the kernel matrix with elements k(x(i)m , x(j)m̄ ), 8m, m̄ 2 [M ], 8i 2 Nm and 8j 2 Nm̄. The supervised (centralized) learning problem (3) is a classical problem in statistical learning (Caponnetto & De Vito, 2007) and has been popularized recently due to connections with overparameterized neural network training (Jacot et al., 2018; Arora et al., 2019). An alternate way to solve problem (2) (and (3)) is by parameterizing f in (2) by ✓ 2 RD as fD(x; ✓) = h✓, D(x)i where D : X ! RD is a finite dimensional feature map. Here, D(·) is designed to approximate k(·, ·) with kD(x, x0) = h D(x), D(x0)i (Rahimi & Recht, 2008). Using this approximation, problem (2) (and (3)) can be written in the parameter domain with L̂m,D(✓) = 1n P i2Nm ` h✓, D(x(i)m )i, y(i)m , 8m 2 [M ], as
min_{θ∈R^D} { R̂_D(θ) = L̂_D(θ) + (λ/2)‖θ‖² = (1/M) Σ_{m=1}^M L̂_{m,D}(θ) + (λ/2)‖θ‖² }.  (4)
Note that (4) is a D-dimensional problem, whereas (3) is an N -dimensional problem. Since (4) is in the standard finite-sum form, it can be solved using the standard parameter sharing decentralized optimization algorithms (e.g., DGD (Richards et al., 2020) or ADMM (Xu et al., 2020) ), which share D-dimensional vectors iteratively. However, when (4) is overparameterized with very large D (e.g., D = O(N ) with 2 for the NTK), such parameter sharing approaches are no longer feasible because of the increased communication complexity. An intuitive solution to avoid sharing these high-dimensional parameters is to directly solve (3). However, it is by no means clear if and how one can efficiently solve (3) in a decentralized manner. The key challenge is that, unlike the
2The techniques presented in this work can be easily extended to unbalanced datasets, i.e., when each agent has a dataset of different size.
conventional decentralized learning problems, here each loss term `([K↵](i)m , y (i) m ) is not separable over the agents. Instead, each agent m’s local problem is dependent on k(x(i)m , x (j) m̄ ) with m 6= m̄. Importantly, without directly transmitting the data itself (as has been done in Predd et al. (2006); Koppel et al. (2018); Lin et al. (2020)), it is not clear how one can obtain the required (m·i)th element of K↵. Therefore, to develop algorithms that avoid sharing high-dimensional parameters by directly (approximately) solving (3), it is important to identify kernels that are suitable for decentralized implementation and propose efficient algorithms for learning with such kernels.
3 THE PROPOSED ALGORITHMS
In this section, we define a general class of kernels, referred to as the generalized inner product (GIP) kernels, that are suitable for decentralized overparameterized learning. By focusing on GIP kernels, we aim to understand the best possible decentralized optimization/generalization performance that can be achieved for solving (3). Surprisingly, one of our proposed algorithms only shares O(nN) = O(N²/M) bits of information per node, while achieving the minimax optimal generalization performance. Such an algorithm only requires one round of communication, where the messages transmitted are independent of the actual parameter dimension (i.e., D in problem (4)); further, there is no requirement for achieving consensus among the agents. The proposed algorithm represents a significant departure from the classical consensus-based decentralized learning algorithms. We first define the class of kernels that we will focus on in this work. Definition 3.1. [Generalized inner-product (GIP) kernel] We define a GIP kernel as:
k(x, x′) = g(∠(x, x′), ‖x‖, ‖x′‖),   (5)
where ∠(x, x′) = arccos(x^T x′/(‖x‖‖x′‖)) ∈ [0, π] denotes the angle between the feature vectors x and x′; and g(·, ‖x‖, ‖x′‖) is assumed to be Lipschitz continuous (cf. Assumption 2). Remark 1. Note that the GIP kernel is a generalization of the inner-product kernels (Schölkopf et al., 2002), i.e., kernels of the form k(x, x′) = k(⟨x, x′⟩). Clearly, k(⟨x, x′⟩) can be represented as k(⟨x, x′⟩) = g(∠(x, x′), ‖x‖, ‖x′‖) for some function g(·). Moreover, many kernels of practical interest can be represented as GIP kernels; some examples include the NTK (Jacot et al., 2018; Chizat et al., 2019; Arora et al., 2019), arccosine (Cho & Saul, 2009), polynomial, Gaussian, Laplacian, sigmoid, and inner-product kernels (Schölkopf et al., 2002).
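To make the GIP form concrete, the following minimal NumPy sketch (an illustration, not the paper's implementation; all function names are ours) evaluates a GIP kernel from the angle and the two norms alone, instantiated with the single-hidden-layer ReLU NTK discussed in the next paragraph.

```python
import numpy as np

def gip_kernel(x, xp, g):
    """Evaluate a generalized inner-product kernel k(x, x') = g(angle, ||x||, ||x'||)."""
    nx, nxp = np.linalg.norm(x), np.linalg.norm(xp)
    cos = np.clip(x @ xp / (nx * nxp), -1.0, 1.0)   # guard against rounding outside [-1, 1]
    angle = np.arccos(cos)                          # angle in [0, pi]
    return g(angle, nx, nxp)

# NTK of a single-hidden-layer ReLU network, k(x, x') = x^T x' (pi - angle) / (2 pi),
# written purely in terms of (angle, ||x||, ||x'||) as required by the GIP form.
def g_ntk(angle, nx, nxp):
    return nx * nxp * np.cos(angle) * (np.pi - angle) / (2 * np.pi)

x, xp = np.random.randn(5), np.random.randn(5)
print(gip_kernel(x, xp, g_ntk))
```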
The main reason we focus on the GIP kernels for decentralized implementation is that this class of kernels can be fully specified at each agent if the norms of all the feature vectors and the pairwise angles between them are known at each agent. For example, consider the NTK of a single hidden-layer ReLU neural network: k(x, x′) = x^T x′ (π − ∠(x, x′))/(2π) (Chizat et al., 2019). This kernel can be fully learned with just the knowledge of the norms and the pairwise angles of the feature vectors. For many applications of interest (Bietti & Mairal, 2019; Geifman et al., 2020; Pedregosa et al., 2011), normalized feature vectors are used, and for such problems the GIP kernel at each agent can be computed using only the knowledge of the pairwise angles between the feature vectors. We show in Sec. 3.1 that such kernels can be efficiently estimated by each agent while sharing only a few bits of information. Importantly, the communication requirement for such a kernel estimation procedure is independent of the problem's parameter dimension (i.e., D in (4)), making them suitable for decentralized learning in the overparameterized regime. Next, we define the RF kernel. Definition 3.2. [Random features (RF) kernel] The RF kernel is defined as (Rahimi & Recht, 2008; Rudi & Rosasco, 2017; Li et al., 2019):
k(x, x′) = ∫_{ω∈Ω} ζ̄(x, ω) · ζ̄(x′, ω) dq(ω),   (6)
with (Ω, q) being the probability space and ζ̄ : X × Ω → ℝ. Remark 2. The RF kernel can be approximated as k(·, ·) ≈ k_P(x, x′) = ⟨φ_P(x), φ_P(x′)⟩, with φ_P(x) = (1/√P)[ζ̄(x, ω_1), . . . , ζ̄(x, ω_P)]^T ∈ ℝ^P and {ω_i}_{i=1}^P drawn i.i.d. from the distribution q(ω). A popular example of the RF kernels is the shift-invariant kernels, i.e., kernels of the form k(x, x′) = k(x − x′) (Rahimi & Recht, 2008). The RF kernels generalize the random Fourier features construction (Rudin, 2017) for shift-invariant kernels to general kernels. Besides the shift-invariant kernels, important examples of the RF kernels include the inner-product (Kar & Karnick, 2012) and the homogeneous additive kernels (Vedaldi & Zisserman, 2012).
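As an illustration of Remark 2, here is a minimal sketch (ours, with illustrative sizes) that approximates a Gaussian shift-invariant kernel with the standard Rahimi–Recht random Fourier features ζ̄(x, ω) = √2 cos(ω^T x + b); all variable names are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d, P = 5, 2000
gamma = 0.5  # Gaussian kernel k(x, x') = exp(-gamma * ||x - x'||^2)

# Random Fourier features for the Gaussian kernel: omega ~ N(0, 2*gamma*I), b ~ U[0, 2*pi].
omega = rng.normal(scale=np.sqrt(2 * gamma), size=(P, d))
b = rng.uniform(0, 2 * np.pi, size=P)

def phi_P(x):
    # phi_P(x) = (1/sqrt(P)) [sqrt(2) cos(omega_i^T x + b_i)]_{i=1..P}
    return np.sqrt(2.0 / P) * np.cos(omega @ x + b)

x, xp = rng.normal(size=d), rng.normal(size=d)
k_exact = np.exp(-gamma * np.sum((x - xp) ** 2))
k_approx = phi_P(x) @ phi_P(xp)
print(k_exact, k_approx)  # the two values should agree more closely as P grows
```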
Algorithm 1 Approximation: Local Kernel Estimation
1: Initialize: Distribution p(ω) over the space (Ω, p) and mapping ζ : X × Ω → ℝ (see Section 3.1)
2: for m ∈ [M] do
3:   Draw P i.i.d. random variables ω_i ∈ ℝ^d with ω_i ∼ p(ω) for i = 1, . . . , P
4:   Compute ζ(x_m^{(i)}, ω_j) ∀i ∈ N_m and j ∈ [P]
5:   Construct the matrix A_m ∈ ℝ^{P×n} with the (i, j)th element as ζ(x_m^{(i)}, ω_j)
6:   Communicate A_m to every other agent and receive A_m̄ with m̄ ≠ m from the other agents
7:   If the GIP kernel is used and the data is not normalized, then also communicate ‖x_m^{(i)}‖, ∀i ∈ N_m
8:   Estimate the kernel matrix K_P locally using (7) for the GIP kernel and (9) for the RF kernel
9: end for
Next, we propose a multi-agent approximation algorithm to effectively learn the GIP and the RF kernels at each agent, as well as the optimization algorithms to efficiently solve the learning problem. Our proposed algorithms will follow an approximation – optimization strategy, where the agents first exchange some information so that they can locally approximate the full kernel matrix K; then they can independently optimize the resulting approximated local problems. Below we list a number of key design issues arising from implementing such an approximation – optimization strategy:
[Kernel approximation] How to accurately approximate the kernel K, locally at each agent? For example, for the GIP kernels, how to accurately estimate the angles ∠(x_m^{(i)}, x_m̄^{(j)}) at a given agent m, where j ∈ N_m̄ and m̄ ≠ m? This is challenging, especially when raw data sharing is not allowed. [Effective exchange of local information] How shall we design appropriate messages to be exchanged among the agents? The type of messages that gets exchanged will be dependent on the underlying kernel approximation scheme. Therefore, it is critical that the proposed approximation methods are able to utilize as little information from other agents as possible.
[Iterative or one-shot scheme] It is not clear if such an approximation – optimization scheme should be one-shot or iterative – that is, whether it is favourable that the agents iteratively share information and perform local optimization (similar to classical consensus-based algorithms), or they should do it just once. Again, this will be dependent on the underlying information sharing schemes.
Next, we will formally introduce the proposed algorithms. Our presentation follows the approximation – optimization strategy outlined above. We first discuss the proposed decentralized kernel approximation algorithm, followed by two different ways of performing decentralized optimization.
3.1 MULTI-AGENT KERNEL APPROXIMATION
The kernel K is approximated locally at each agent using Algorithm 1. Note that in Step 3, each agent randomly samples {ω_i}_{i=1}^P from the distribution p(ω). This can be easily established via random seed sharing as in Xu et al. (2020); Richards et al. (2020). In Step 6, each agent shares a locally constructed matrix A_m of size P × n, whose elements ζ(x_m^{(i)}, ω_i) will be defined shortly. The choices of p(ω) and ζ(·, ·) in Step 1 depend on the choice of the kernel. Specifically, we have: [Approximation for the GIP kernel] For the GIP kernel, we first assume that the feature vectors are normalized (Pedregosa et al., 2011). We then choose p(ω) to be any circularly symmetric distribution; for simplicity we choose p(ω) as N(0, I_d). Moreover, we use ζ(x, ω) = 1[ω^T x ≥ 0] so that A_m is a binary matrix with entries {0, 1}. Note that such matrices are easy to communicate. Next, we approximate the kernel K with K_P as
k(x_m^{(i)}, x_m̄^{(j)}) ≈ k_P(x_m^{(i)}, x_m̄^{(j)}) = g(∠_P(x_m^{(i)}, x_m̄^{(j)}), ‖x_m^{(i)}‖, ‖x_m̄^{(j)}‖),   (7)
where k(x_m^{(i)}, x_m̄^{(j)}) and k_P(x_m^{(i)}, x_m̄^{(j)}), ∀i ∈ N_m, ∀m ∈ [M], ∀j ∈ N_m̄ and ∀m̄ ∈ [M], are the individual elements of K and K_P, respectively, and ∠_P(x_m^{(i)}, x_m̄^{(j)}) is an approximation of the angle ∠(x_m^{(i)}, x_m̄^{(j)}) evaluated using A_m, A_m̄ as
∠(x_m^{(i)}, x_m̄^{(j)}) ≈ ∠_P(x_m^{(i)}, x_m̄^{(j)}) = π − 2π [A_m^{(:,i)}]^T [A_m̄^{(:,j)}] / P,   (8)
Algorithm 2 Optimization: One-Shot Communication for Kernel Learning
1: Initialize: α_m^1 ∈ ℝ^N, step-sizes {η_m^t}_{t=1}^{T_m} at each agent m ∈ [M]
2: for m ∈ [M] do
3:   Using Algorithm 1, construct K_P
4:   Communicate ȳ_m = [y_m^{(1)}, . . . , y_m^{(n)}]^T ∈ ℝ^n
5:   Using K_P and ȳ_m, construct L̂_P(α) (cf. (10)) locally using L̂_{m,P}(α)
6:   Option I: Solve (10) exactly at each agent
7:   Option II: Solve (10) inexactly using GD at each agent
8:     for t = 1 to T_m
9:       GD Update: α_m^{t+1} = α_m^t − η_m^t ∇R̂_P(α_m^t)
10:    end for
11: end for
12: Return: α_m^{T_m+1} for all m ∈ [M]
This implies that K can be approximated for the GIP kernel by communicating only nP bits of information per agent. Note that, in the general case where the feature vectors are not normalized, (7) can be evaluated if each agent additionally communicates the n real-valued norms of its feature vectors; see Step 7 in Algorithm 1.
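A minimal simulation of this GIP approximation (our own sketch with illustrative sizes): the agents exchange only the binary matrices A_m, and each of them reconstructs the full kernel via the angle estimate (8).

```python
import numpy as np

rng = np.random.default_rng(1)
M, n, d, P = 4, 20, 8, 4000          # agents, samples per agent, feature dim, random projections
X = [rng.normal(size=(n, d)) for _ in range(M)]
X = [x / np.linalg.norm(x, axis=1, keepdims=True) for x in X]   # normalized features

omega = rng.normal(size=(P, d))      # shared via a common random seed
A = [(X[m] @ omega.T > 0).astype(np.uint8).T for m in range(M)]  # A_m in {0,1}^{P x n}, the only message sent

def g_ntk(angle):                    # NTK of a one-hidden-layer ReLU net, normalized inputs
    return np.cos(angle) * (np.pi - angle) / (2 * np.pi)

# Each agent stacks the received A_m's and estimates every pairwise angle via (8),
# angle(x, x') ~= pi - 2*pi * (A_x^T A_x') / P, then plugs the estimate into g.
A_all = np.hstack(A).astype(float)                       # P x N
agree = A_all.T @ A_all / P                              # fraction of projections where both signs are 1
angles = np.clip(np.pi - 2 * np.pi * agree, 0.0, np.pi)
K_P = g_ntk(angles)

# Reference: the exact kernel computed from the raw (centralized) data.
X_all = np.vstack(X)
K = g_ntk(np.arccos(np.clip(X_all @ X_all.T, -1.0, 1.0)))
print(np.abs(K - K_P).max())   # shrinks as P grows
```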
[Approximation for the RF kernel] For the RF kernel, we choose ζ(·, ·) = ζ̄(·, ·) and p(ω) = q(ω) as defined in (6), and approximate K with K_P as
k(x_m^{(i)}, x_m̄^{(j)}) ≈ k_P(x_m^{(i)}, x_m̄^{(j)}) = ⟨φ_P(x_m^{(i)}), φ_P(x_m̄^{(j)})⟩,   (9)
where k(x_m^{(i)}, x_m̄^{(j)}) and k_P(x_m^{(i)}, x_m̄^{(j)}) are elements of K and K_P, respectively, φ_P(x_m^{(i)}) = (1/√P)[A_m^{(:,i)}] and φ_P(x_m̄^{(j)}) = (1/√P)[A_m̄^{(:,j)}]. Note that K can be approximated for the RF kernel by sharing only nP real values per agent. Further, the distribution q(ω) and the mapping ζ̄(·, ·) depend on the type of RF kernel used. For example, for shift-invariant kernels with random Fourier features, we can choose ζ̄(x, ω) = √2 cos(ω^T x + b) with ω ∼ q(ω) and b ∼ U[0, 2π] (Rahimi & Recht, 2008).
Now that using Algorithm 1 we have approximated the kernel matrix at all the agents, we are ready to solve (3) approximately.
3.2 THE DECENTRALIZED OPTIMIZATION STEP
The approximated kernel regression problem (3), with K_P obtained using Algorithm 1 and local loss L̂_{m,P}(α) := (1/n) Σ_{i∈N_m} ℓ([K_P α]_m^{(i)}, y_m^{(i)}), is
min_{α∈ℝ^N} { R̂_P(α) = L̂_P(α) + (λ/2)‖α‖²_{K_P} = (1/M) Σ_{m=1}^M L̂_{m,P}(α) + (λ/2)‖α‖²_{K_P} }.   (10)
Remark 3. For the approximate problem (10), we would want K_P constructed using the multi-agent kernel approximation approach to be positive semi-definite (PSD), i.e., the kernel function k_P(·, ·) to be a positive definite (PD) kernel. From the definition of the approximate RF kernel (9), it is easy to verify that it is PD. However, it is not clear if the approximated GIP kernel is PD. Certainly, for the GIP kernel we expect that as P → ∞ we have K_P → K, i.e., asymptotically K_P is PSD, since K is PSD. In the Appendix, we introduce a sufficient condition (Assumption 6) that ensures K_P is PSD for the GIP kernel. In the following, for simplicity we assume K_P is PSD.
Decentralized optimization based on one-shot communication: In this setting, we share the information among all the agents in one-shot, then each agent learns its corresponding minimizer using the gathered information. We assume that each agent can communicate with every other agent either in a decentralized manner (or via a central server) before initialization. This is a common assumption in distributed learning with RF kernels where the agents need to share random seeds before initialization to determine the approximate feature mapping (Richards et al., 2020; Xu et al., 2020). Here, consensus is not enforced as each agent can learn a local minimizer which has a good global property. The label information is also exchanged among all the agents. In Algorithm 2, we list the steps of the algorithm. In Step 3, the agents learn KP (the local estimate of the kernel matrix)
using Algorithm 1. In Step 4, the agents share the labels ȳ_m so that each agent can (approximately) reconstruct the loss L̂_P(α) (cf. (10)) locally. Then each agent can choose either Option I or Option II to solve (10). A few important properties of Algorithm 2 are: [Communication] Each agent communicates a total of O(nP) = O(NP/M) bits (if the norms also need to be transmitted, then with an additional N/M real values) for the GIP kernel, and O(NP/M) real values for the RF kernels. Importantly, for the GIP kernel the communication is independent of the parameter dimension, making it suitable for decentralized overparameterized learning problems; see Table 1 for a comparison with other approaches.
[No consensus needed] Each agent executes Algorithm 2 independently to learn α_m, without needing to reach any kind of consensus. The agents are free to choose different initializations, step-sizes, and even regularizers (i.e., λ in (10)). In contrast to classical learning, where algorithms are designed to guarantee consensus (Koppel et al., 2018; Richards et al., 2020; Xu et al., 2020), our algorithms allow each agent to learn a different function.
The proposed framework relies on sharing matrices A_m that are random functions of the local features. Note that problem (10) can also be solved by using an iterative distributed gradient tracking algorithm (Qu & Li, 2018), with the benefit that no label sharing is needed; see Appendix D. Remark 4 (Optimization performance). Note that using Algorithm 2, we can solve the approximate problem (10) to arbitrary accuracy using either Option I or Option II. However, it is by no means clear if the solution obtained by Algorithm 2 will be close to the solution of (3). Therefore, after problem (10) is solved, it is important to understand how close the solutions returned by Algorithm 2 are to the original kernel regression problem (3).
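The following is a small end-to-end simulation of the one-shot scheme for the RF (random Fourier feature) kernel with an ℓ₂ loss, i.e., Option I of Algorithm 2 applied to the ridge problem analyzed in the next section; the synthetic data and all parameter values are our own illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)
M, n, d, P, lam = 5, 40, 6, 1000, 1e-2
N = M * n

# Synthetic local datasets (illustrative only).
Xs = [rng.normal(size=(n, d)) for _ in range(M)]
ys = [np.sin(x.sum(axis=1)) + 0.1 * rng.normal(size=n) for x in Xs]

# One-shot messages: each agent m sends A_m = phi_P(X_m)^T (P x n) and its label vector y_m.
gamma = 0.5
omega = rng.normal(scale=np.sqrt(2 * gamma), size=(P, d))   # common random seed
b = rng.uniform(0, 2 * np.pi, size=P)
A = [np.sqrt(2.0 / P) * np.cos(omega @ x.T + b[:, None]) for x in Xs]

# Every agent can now do the following locally, with no further communication.
A_all = np.hstack(A)                   # P x N
y_all = np.concatenate(ys)             # N
K_P = A_all.T @ A_all                  # approximated kernel matrix
alpha_P = np.linalg.solve(K_P + N * lam * np.eye(N), y_all)   # Option I: closed-form ridge solution

# Reference: exact Gaussian-kernel ridge regression on the pooled data.
X_all = np.vstack(Xs)
sq = np.sum((X_all[:, None, :] - X_all[None, :, :]) ** 2, axis=-1)
K = np.exp(-gamma * sq)
alpha = np.linalg.solve(K + N * lam * np.eye(N), y_all)
print(np.linalg.norm(K_P @ alpha_P - K @ alpha) / np.sqrt(N))  # fitted values agree as P grows
```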
4 MAIN RESULTS
In this section, we analyze the performance of Algorithm 2. Specifically, we are interested in understanding the training loss and the generalization error incurred due to the kernel approximation (cf. Algorithm 1). For this purpose, we focus on ℓ₂ loss functions, for which the kernel regression problem (10) can be solved in closed form. Specifically, we want to minimize the loss:
L(f) = (1/2) E_{x,y∼π(x,y)}[(f(x) − y)²].   (11)
We solve the following kernel ridge regression problem with the choice L̂(α) = (1/2N)‖ȳ − Kα‖²:
min_{α∈ℝ^N} { R̂(α) = (1/2N)‖ȳ − Kα‖² + (λ/2)‖α‖²_K }   (12)
where we denote ȳ = [ȳ_1^T, . . . , ȳ_M^T]^T ∈ ℝ^N with ȳ_m = [y_m^{(1)}, y_m^{(2)}, . . . , y_m^{(n)}]^T ∈ ℝ^n. The above problem can be solved in closed form with α̂* = [K + N·λ·I]^{-1} ȳ. The approximated problem at each agent, with the kernel K_P and the loss function L̂_P(α) = (1/2N)‖ȳ − K_P α‖², is
min_{α∈ℝ^N} { R̂_P(α) = (1/2N)‖ȳ − K_P α‖² + (λ/2)‖α‖²_{K_P} }   (13)
with the optimal solution returned by Option I in Algorithm 2 as α̂*_P = [K_P + N·λ·I]^{-1} ȳ. The goal is to analyze the impact of the approximation on the performance of Algorithm 2. Specifically, we bound the difference between the optimal losses of the exact and the approximated kernel ridge regression. We begin with some assumptions. Assumption 1. We assume |k(x, x′)| ≤ κ² and |k_P(x, x′)| ≤ κ² for some κ ≥ 1. Assumption 2. The function g(·) in (5) used to construct the GIP kernel is G-Lipschitz w.r.t. the angle, i.e., ∃ G ≥ 0 such that |g(∠, z₂, z₃) − g(∠̂, z₂, z₃)| ≤ G|∠ − ∠̂|, ∀ ∠, ∠̂ ∈ [0, π] and ∀ z₂, z₃ ∈ ℝ. Assumption 3. We assume that the data labels satisfy |y| ≤ R almost surely for some R > 0. Assumption 4. There exists f_H ∈ H such that L(f_H) = inf_{h∈H} L(h).
A few remarks are in order. Note that Assumptions 1, 3 and 4 are standard in statistical learning theory (Cucker & Zhou, 2007; Caponnetto & De Vito, 2007; Ben-Hur & Weston, 2010; Rudi & Rosasco, 2017). Moreover, for the RF kernel, Assumption 1 is automatically satisfied if |ζ(x, ω)| ≤ κ almost surely (Rudi & Rosasco, 2017) (cf. (6) and (9)). Assumption 2 is required for estimating the kernel by approximating the pairwise angles between feature vectors. It is easy to verify that popular kernels, including the NTK (15), arccosine, Gaussian and polynomial kernels, satisfy Assumption 2 with feature vectors belonging to a compact domain (this ensures that the Lipschitz constant G is independent of the feature vector norms). Now we are ready to present the results.
We analyze how well Algorithm 1 approximates the exact kernel. We are interested in the approximation error as a function of the number of random samples P. We have the following lemma. Lemma 4.1 (Kernel Approximation). For K_P returned by Algorithm 1, the following holds with probability at least 1 − δ: (i) For the GIP kernel, ‖K − K_P‖ ≤ GN( √((32π²/P) log(2N/δ)) + (8π/(3P)) log(2N/δ) ). (ii) Similarly, for the RF kernel, ‖K − K_P‖ ≤ κ²N( √((8/P) log(2N/δ)) + (4/(3P)) log(2N/δ) ).
Note that as P increases, K_P → K; in particular, to achieve an approximation error of ε > 0, we need P = O(ε^{-2}). Importantly, Lemma 4.1 plays a crucial role in analyzing the optimization performance of the kernel approximation approach. Next, we state the training loss incurred as a consequence of solving the approximate decentralized problem (13) in Algorithm 2 instead of (12). Theorem 4.2 (Approximation: Optimal Loss). Suppose P ≥ 2⁹ log(2N/δ); then for both the GIP and the RF kernels, the solution returned by Algorithm 2 (Option I) for solving (12) approximately (i.e., via (13)) satisfies the following with probability at least 1 − δ:
L̂_P(α̂*_P) − L̂(α̂*) = O( √((1/P) log(2N/δ)) )   and   R̂_P(α̂*_P) − R̂(α̂*) ≤ O( √((1/P) log(2N/δ)) ).
Theorem 4.2 states that as P increases, the optimal training loss achieved by solving the approximate problem (13) via Algorithm 2 (Option I) approaches the performance of the centralized system (12) for both the GIP and the RF kernels. The proof of the above result utilizes Lemma 4.1 and the definitions of the loss functions in (12) and (13). See Appendix G for a detailed proof.
The results of Lemma 4.1 and Theorem 4.2 characterize the approximation performance of the proposed approximation – optimization framework on a fixed number of training samples. Of course, it is of interest to analyze how the proposed approximation algorithms perform on unseen test data. Towards this end, it is essential to analyze the performance of the function f̂_P learned by solving (13) via Algorithm 2. We have the following result. Theorem 4.3 (Generalization performance). Let us choose λ = 1/√N, δ ∈ (0, 1), and N ≥ max{ 4κ³/‖K‖², 72κ²√N log(32κ²√N/δ) }; also choose P ≥ max{ 8, 512π²G²/‖K‖², 288π²G²N } log(16/δ) for the GIP kernel and P ≥ max{ 8κ², 32κ²/‖K‖², 72κ²√N } log(128κ²√N/δ) for the RF kernel, where K is defined in Appendix F. Then with probability at least 1 − δ, we have for f̂_P returned by Algorithm 2 (Option I) for approximately solving (12) (i.e., (13)): L(f̂_P) − inf_{h∈H} L(h) = O(1/√N).
The proof of Theorem 4.3 utilizes a result similar to Lemma 4.1, but for the integral operators defined using the kernels k(·, ·) and k_P(·, ·). Theorem 4.3 states that with an appropriate choice of λ (the regularization parameter), N (the number of overall samples), and P (the number of messages communicated per agent), the proposed algorithm achieves the minimax optimal generalization performance (Caponnetto & De Vito, 2007). Also, note that the requirement of P = O(√N) for the RF kernel, compared to P = O(N) for the GIP kernel, is due to the particular structure of the RF kernel (cf. (6)). It can be seen from Lemmas H.4 and H.5 in Appendix H that the approximation obtained with the RF kernel allows the derivation of tighter bounds compared to the GIP kernel. The next corollary precisely states the total communication required per agent to achieve this optimal performance. Corollary 1 (Communication requirements for the GIP and RF kernels). Suppose Algorithm 2 uses the choice of parameters stated in Theorem 4.3 to approximately optimize (12). Then it requires a total of O(N²/M) bits (resp. O(N√N/M) real values) of message exchanges per node when the GIP kernel (resp. the RF kernel) is used, to achieve minimax optimal generalization performance. Moreover, if unnormalized feature vectors are used, then the GIP kernel requires an additional O(N/M) real values of message exchanges per node. Compared to DKRR-RF-CM (Liu et al., 2021), Decentralized RF (Richards et al., 2020), DKLA, and COKE (Xu et al., 2020), the number of message exchanges required by the proposed algorithm
is independent of the iteration numbers, and it is much less compared to other algorithms, especially for the GIP kernel in the overparameterized regime; see Table 1 for detailed comparisons.
5 EXPERIMENTS
We compare the performance of the proposed algorithm to DKRR-RF-CM (Liu et al., 2021), Decentralized RF (Richards et al., 2020), and DKLA (Xu et al., 2020). We evaluate the performance of all the algorithms on real world datasets from the UCI repository.
Specifically, we present the results on the National Advisory Committee for Aeronautics (NACA) airfoil noise dataset (Lau & López, 2009), where the goal is to predict aircraft noise based on a few measured attributes. The dataset consists of N = 1503 samples that are split equally among M = 10 nodes. Each node utilizes 70% of its data for training and 30% for testing purposes. Each feature vector x_m^{(i)} ∈ ℝ⁵ represents the measured attributes such as frequency, angle, etc., and each label y_m^{(i)} represents the noise level. Additional experiments on different datasets and classification problems, as well as the detailed parameter settings, are included in Appendix A.
We evaluate the performance of all the algorithms with the Gaussian kernel. Note that the algorithms DKRR-RF-CM, Decentralized RF, and DKLA can only be implemented using the RF approach while our proposed algorithm utilizes the GIP kernel. Also, in contrast to these benchmark algorithms that use iterative parameter exchange, the proposed Algorithm 2 uses only one-shot communication. First, in Table 2, we compare the communication required by each algorithm with the Gaussian kernel for P = 100, 500, and 1000 to achieve the same test mean squared error (MSE) for each setting, see last row of Table 2. Note that for P = 100, the communication required by Algorithm 2 is less than 50% of that required by DKLA and Decentralized RF while it is only slightly less than that of DKRR-RF-CM. Moreover, as P increases to 500 and 1000, it can be seen that Algorithm 2 only requires a fraction of communication compared to other algorithms, and this fact demonstrates the utility of the proposed algorithms for over-parameterized learning problems. In Table 3, we compare the averaged MSE achievable by different algorithms, when a fixed total communication budget (in bits) is given for each setting (see the last row of Table 3 for the budget). Note that Algorithm 2 significantly outperforms all the other methods as P increases. This is expected since Algorithm 2 essentially solves a centralized problem (cf. Problem (10)) after the multi-agent kernel approximation (cf. Algorithm 1), and a large P provides a better approximation of the kernel (cf. Lemma 4.1). In contrast, for the parameter sharing based algorithms the performance deteriorates even though the kernel approximation improves with large P as learning a high-dimensional parameter naturally requires more communication rounds as well as a higher communication budget per communication round.
Please note that we also compare the performance of Algorithm 2 with the benchmarking algorithms discussed above for the NTK. We further benchmark the performance of Algorithm 2 against the centralized algorithms for the Gaussian, the Polynomial, and the NTK. However, due to space limitations, we relegate these numerical results to the Appendix A.
ACKNOWLEDGEMENTS
We thank the anonymous reviewers for their valuable comments and suggestions. The work of Prashant Khanduri and Mingyi Hong is supported in part by NSF grant CMMI-1727757, AFOSR grant 19RT0424, ARO grant W911NF-19-1-0247 and Meta research award on “Mathematical modeling and optimization for large-scale distributed systems”. The work of Mingyi Hong is also supported by an IBM Faculty Research award. The work of Jia Liu is supported in part by NSF grants CAREER CNS-2110259, CNS-2112471, CNS-2102233, CCF-2110252, ECCS-2140277, and a Google Faculty Research Award. The work of Hoi-To Wai was supported by CUHK Direct Grant #4055113. | 1. What is the focus and contribution of the paper regarding decentralized empirical risk minimization?
2. What are the strengths of the proposed algorithms and theoretical analyses?
3. What are the weaknesses of the paper, particularly in terms of experiments and comparisons with other works?
4. Do you have any suggestions for improving the paper, such as adding experiments or comparing with other methods?
5. Are there any minor issues or typos in the paper that could be corrected? | Summary Of The Paper
Review | Summary Of The Paper
The decentralization problem has data distributed among many agents, and each agent wants to maintain some privacy. In this paper, the authors study the decentralized empirical risk minimization problem in a reproducing kernel Hilbert space. Two large classes of kernels are considered: (1) the generalized inner-product (GIP) kernel based on the arccosine kernel (proposed in this work), and (2) the random feature (RF) kernel. In order to attain decentralization, the authors approximate kernels based on the inner product of two finite vectors, and propose algorithms (one-shot and iterative) to optimize private models. In addition, the authors study in theory the approximation error for kernels, the optimization algorithm performance, and the generalization error. Finally, experiments are presented to validate the algorithms and theoretical results.
Review
Strengths
The paper is clearly written. The contents are well organized and easy to follow.
The algorithms and theoretical analyses are technically sound and novel. I read through the algorithms and the corresponding discussions, checked their details, and everything looks correct to me. The RKHS setting is new and interesting. The theoretical analyses under the new settings are sound. I read the proof for Theorem 4.1 and it looks good to me. These performance and error analyses are classical in the kernel field. Although I didn't read all the proofs and there may be mistakes, I think they would be easy to fix.
Weaknesses
The experiments and comparisons to other methods are weak. As mentioned in the introduction, communicating high-dimensional parameters is problematic and might lead to slow convergence, but I feel this is not convincing because no solid evidence is given. I would suggest adding a simple experiment to demonstrate this. It is also unclear how previous approaches perform compared to this work. This work relies on kernels, so there can be some shortcomings, e.g., kernels may not be a good choice for large-scale datasets. I would suggest (1) adding experiments to compare this work with previous ones to see when this approach is preferable or to highlight the shortcomings (or advantages) of other methods; (2) adding a table summarizing the differences between this work and others, e.g., whether they use neural networks and what the pros and cons are.
Minor:
Above Eqn. (5) duplicate ''as''
Below Eqn. (3) '' ... represent (m . i)th element of vector [K \alpha] .. '' redundant to the notation section
Page 5 ''will dependent on the underlying kernel approximation schemes'' -> ''will be dependent ...'' |
ICLR | Title
Causal Discovery via Cholesky Factorization
Abstract
Discovering the causal relationship via recovering the directed acyclic graph (DAG) structure from the observed data is a challenging combinatorial problem. This paper proposes an extremely fast, easy to implement, and high-performance DAG structure recovering algorithm. The algorithm is based on the Cholesky factorization of the covariance/precision matrix. The time complexity of the algorithm is O(pn + p), where p and n are the numbers of nodes and samples, respectively. Under proper assumptions, we show that our algorithm takes O(log(p)) or O(p) samples to exactly recover the DAG structure under proper assumptions. In both time and sample complexities, our algorithm is better than previous algorithms. On synthetic and real-world data sets, our algorithm is significantly faster than previous methods and achieves state-of-the-art performance.
1 INTRODUCTION
As Schelling had said: “The whole world is thoroughly to caught in reason, but the question is: how did it get caught in the network of reason in the first place?” (Kuhn, 1942; Žižek & von Schelling, 1997), people found that learning the causal inferences between the variables is a fundamental problem and has many applications in biology, machine learning, medicine, and economics. The problem usually is considered as finding a directed acyclic graph (DAG) from an observational joint distribution. Unfortunately, learning the DAG structure from the observations is proved to be an NP-hard problem (Chickering, 1995; Chickering et al., 2004).
The problem is generally formulated as the structural equation model (SEM), where the variable of a child node is a function of its parents with additional noises. Depending on the types of functions (linear or non-linear) and noises (Gaussian, Gumbel, etc.), there are several SEM families, e.g., Spirtes et al. (2000); Geiger & Heckerman (1994); Shimizu et al. (2006). In general, the graph can be identified from the joint distribution only up to Markov equivalence classes. Zhang & Hyvarinen (2012); Peters et al. (2014); Peters & Bühlmann (2014); Gao et al. (2020) propose several SEM forms that make the graph fully identifiable from the observed data.
Various algorithms had been proposed to deal with the problem. Search-based algorithms (Chickering, 2002; Friedman & Koller, 2003; Ramsey et al., 2017; Tsamardinos et al., 2006; Aragam & Zhou, 2015; Teyssier & Koller, 2005; Ye et al., 2019; Lv et al., 2021) generally adopt a score (e.g., BIC (Peters et al., 2014) score, Cholesky score (Ye et al., 2019), remove-fill score (Squires et al., 2020)) to measure the fitness of different graphs over data and then search over the legal DAG space to find the structure that achieves the highest score. However, exhaustive search over the legal DAG space is infeasible when p is large (e.g., there are 4.1e18 DAGs for p = 10 (Sloane et al., 2003)). Those algorithms go in quest of a trade-off between the performance and the time complexity.
Since Zheng et al. (2018) proposed an approach that converts the traditional combinatorial optimization problem into a continuous program, many methods (Yu et al., 2019; Lee et al., 2019; Ng et al., 2019a;b; Zheng et al., 2020; Lachapelle et al., 2020; Squires et al., 2020; Zhu et al., 2021) have been proposed. Those algorithms formalize the problem as a data reconstruction task with various differentiable constraints on the DAG adjacent matrix and solve it via the augmented Lagrangian method. These algorithms are able to utilize neural networks to approximate the complicated relations between the features in the observed data and achieve good performances. Recently, reinforcement learning based algorithms (Zhu et al., 2020; Wang et al., 2021) also improved the performance by exploring the possible DAG structure candidates. The algorithms update the parameters of the model
via policy gradient as long as it explored a better DAG structure with a higher reward which measures how well an explored structure meets the requirement of DAG and the observed data.
Topology order search algorithms (TOSA) (Ghoshal & Honorio, 2017; 2018; Chen et al., 2019; Gao et al., 2020; Park, 2020) decompose the DAG learning problem into two phases: (i) Topology order learning via conditional variance of the observed data; (ii) Graph estimation depends on the learned topology order. Those algorithms reduce the computation complexity into polynomial time and are guaranteed to recover the DAG structure under some identifiable assumptions. Our method in this paper is also a topology order search algorithm and it merges the two phases in TOSA into one. In each iteration, it attempts to find a child or a contemporary of the current node. Meanwhile, it also determines the corresponding column vector of the adjacent matrix. The mergence brings three main differences: First, the topology order in TOSA is recovered purely based on the conditional variance of the observed data, whereas our method may also take the sparsity of the adjacent matrix into account; Second, the graph LASSO methods, which are commonly adopted to estimate the graph in the second phase in TOSA, encourage the sparsity of the precision matrix, whereas our method is able to encourage the sparsity of the adjacent matrix; Third, the time complexity is reduced significantly. To be specific, the time complexity of our algorithm is O(p2n + p3), while the fastest algorithm before is O(p5n) (Park, 2020; Gao et al., 2020). Here p and n are the numbers of nodes and samples, respectively. In addition, under proper assumptions, we show that our algorithm takes O(log(p)) or O(p) samples to exactly recover the DAG structure. Compared with previous TOSA algorithms, the sample complexity of our method is much better. Experimental results on synthetic data sets, proteins data sets, and knowledge base data set demonstrate the efficiency and effectiveness of our algorithm. For synthetic data sets, compared with previous baselines, our algorithm improves the performance with a significant margin and at least tens or hundreds of times faster. For the proteins data set, we achieve state-of-the-art performance. For the knowledge base data set, we can observe many reasonable structures of the discovered DAG. Our code is uploaded as supplementary material and will be open-sourced upon the acceptance of this paper.
The rest of this paper is organized as follows. In Section 2, we present our algorithm together with the theoretical analysis. In Section 3, numerical results on synthetic data sets, proteins data set, and knowledge base data set are given. Finally, the paper is concluded in Section 4.
Notations. The symbol ‖·‖ stands for the Euclidean norm of a vector or the spectral norm of a matrix. For a vector x = [x₁, x₂, . . . , x_p] ∈ ℝ^p, ‖·‖₁ stands for the ℓ₁-norm, i.e., ‖x‖₁ = Σ_{i=1}^p |x_i|. For a matrix X = [X_{ij}] ∈ ℝ^{m×n}, ‖·‖_{2,∞} stands for the two-to-infinity norm, i.e., ‖X‖_{2,∞} = max_{1≤i≤m} ‖X_{i,:}‖; ‖·‖_max stands for the max norm, ‖X‖_max = max_{i,j} |X_{ij}|.
2 CAUSAL DISCOVERY VIA CHOLESKY FACTORIZATION (CDCF)
In this section, we first present some preliminaries on DAG, then motivating our algorithm. Next, the detailed algorithm and theoretical guarantees for the exact recovery of the algorithm are given.
2.1 PRELIMINARIES
We assume the observed data is entailed by a DAG G = (p, V, E), where p is the number of nodes, and V = {v₁, ..., v_p} and E = {(v_i, v_j) | i, j ∈ {1, ..., p}} represent the sets of nodes and edges, respectively. Each node v_i corresponds to a random variable X_i. The observed data matrix is X = [x₁, ..., x_p] ∈ ℝ^{n×p}, where x_i consists of n i.i.d. observations of the random variable X_i. The joint distribution of X is P(X) = ∏_{i=1}^p P(X_i | Pa_G(X_i)), where Pa_G(X_i) := {X_j | (v_i, v_j) ∈ E} is the set of parents of node X_i.
Given X , we seek to recover the latent DAG topology structure for the joint probability distribution (Hoyer et al., 2008; Peters et al., 2017). Generally, X is modeled via a structural equation model (SEM) with the form
Xi = fi(PaG(Xi)) +Ni, (i = 1, ..., p),
where fi is an arbitrary function representing the relation between Xi and its parents, Ni is the jointly independent noise variable.
In this paper, we focus on the linear SEM defined by Xi = Xwi +Ni, (i = 1, ..., p),
where w_i ∈ ℝ^p is a weighted column vector. Let W = [w₁, . . . , w_p] ∈ ℝ^{p×p} be the weighted adjacency matrix and N = [n₁, . . . , n_p] ∈ ℝ^{n×p} be an additive independent noise matrix, where n_i consists of n i.i.d. observations following the noise variable N_i. Then the linear SEM model can be formulated as
X = XW +N . (1)
We assume the noise deviation of the child variable is approximately larger than that of its parents (see Theorem 2.1 for details). Following this assumption, a classical identifiable form of SEM is the linear-Gaussian SEM, where all Ni are i.i.d. and homoscedastic (Peters & Bühlmann, 2014).
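For concreteness, here is a minimal sketch (ours, with illustrative sizes) that samples observations from the linear SEM (1), using the decomposition W = PTP^T introduced in the next subsection and i.i.d. homoscedastic Gaussian noise.

```python
import numpy as np

def simulate_linear_sem(p=20, n=3000, edge_prob=0.2, noise_std=1.0, seed=0):
    """Sample X = X W + N with W = P T P^T for a random strictly upper-triangular T."""
    rng = np.random.default_rng(seed)
    T = np.triu(np.sign(rng.normal(size=(p, p))) * rng.uniform(0.5, 2.0, size=(p, p)), k=1)
    T *= rng.random((p, p)) < edge_prob          # keep each edge with probability edge_prob
    perm = rng.permutation(p)                    # hide the topological order
    W = np.zeros((p, p))
    W[np.ix_(perm, perm)] = T                    # W = P T P^T
    N = noise_std * rng.normal(size=(n, p))      # i.i.d. homoscedastic Gaussian noise
    X = N @ np.linalg.inv(np.eye(p) - W)         # solves X = X W + N
    return X, W

X, W = simulate_linear_sem()
```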
2.2 ALGORITHM MOTIVATION
As proposed in McKay et al. (2003); Nicholson (1975), a graph is a DAG if and only if the corresponding weighted adjacent matrix W can be decomposed into
W = PTP^T,   (2)
where P is a permutation matrix and T is a strictly upper triangular matrix, i.e., T_{ij} = 0 for all i ≤ j.
We denote the scaled permuted data matrix as X̂ = (1/√n) XP, the scaled permuted noise matrix as N̂ = (1/√n) NP, and the permutation order [i*_1, i*_2, . . . , i*_p] = [1, 2, . . . , p]P. We can rewrite (1) as
X̂ = X̂T + N̂.
Then it follows that X̂ = N̂(I − T)^{-1}.   (3)
Let
E(N̂^T N̂) = Σ̂²_* = Σ̂^T Σ̂,   (4)
where Σ̂²_* is the covariance matrix of the noise variables and Σ̂ is upper triangular, the Cholesky factor of Σ̂²_*. Let the diagonal entries of Σ̂ be σ_{i*_1}, σ_{i*_2}, . . . , σ_{i*_p}. We know that σ²_{i*_k} is the conditional variance of N_{i*_k}.
Now using (3) and (4), we have the covariance matrix of the permuted data:
Ĉ_* = E(X̂^T X̂) = (I − T)^{-T} E(N̂^T N̂) (I − T)^{-1} = (I − T)^{-T} Σ̂^T Σ̂ (I − T)^{-1}.   (5)
Let L = (I − T)^{-T} Σ̂^T; then Ĉ_* = LL^T, which is the Cholesky factorization of the covariance matrix Ĉ_* since L is lower triangular. Furthermore, we can see that the diagonal entries of L are the same as those of Σ̂, i.e., L_{kk} = σ_{i*_k}, and the conditional variances of X_{i*_k} and N_{i*_k} are the same.
The task thus becomes to find the permutation i* = [i*_1, i*_2, . . . , i*_p] and an upper triangular matrix U such that U^{-T}U^{-1} is a good approximation of the empirical estimate of the permuted covariance matrix Ĉ = (1/n) X_{:,i*}^T X_{:,i*}, while U satisfies some additional constraints, such as sparsity.
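A quick numerical sanity check of this motivation (our own sketch, with hypothetical sizes): when the true topological order is known, the inverse transpose of the Cholesky factor of the permuted covariance has the same off-diagonal support as T.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 10, 50000
T = np.triu(np.sign(rng.normal(size=(p, p))) * rng.uniform(0.5, 2.0, size=(p, p)), k=1)
T *= rng.random((p, p)) < 0.3                    # sparse edges with magnitude in [0.5, 2]
N = rng.normal(size=(n, p))                      # unit-variance noise
Xhat = N @ np.linalg.inv(np.eye(p) - T)          # data already in topological order

C = Xhat.T @ Xhat / n                            # empirical covariance of permuted data
L = np.linalg.cholesky(C)                        # C = L L^T, L lower triangular
U = np.linalg.inv(L).T                           # upper triangular, estimates (I - T) Sigma^{-1}

support_est = np.abs(np.triu(U, k=1)) > 0.25     # simple global threshold for illustration
print((support_est == (np.abs(T) > 0)).all())    # should typically print True for this n
```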
2.3 ALGORITHM
We iteratively find the permutation i and calculate U via the Cholesky factorization. Assume that i_{k-1} = [i_1, . . . , i_{k-1}] and U_{k-1} = U_{1:k-1,1:k-1} are settled, and we have
C_{1:k-1,1:k-1} = (1/n) X_{:,i_{k-1}}^T X_{:,i_{k-1}} + λI = U_{k-1}^{-T} U_{k-1}^{-1},   (6)
where λ > 0 is a diagonal augmentation parameter which we discuss in detail later. Next, we show how to find i_k and the last column of U_k.
For the time being, let us assume i_k is known; we show how to compute the last column of U_k. Let
U_k^{-1} = [ U_{k-1}^{-1}  y_k ; 0  α_k ].
Then
[ U_{k-1}^{-1}  y_k ; 0  α_k ]^T [ U_{k-1}^{-1}  y_k ; 0  α_k ] = [ U_{k-1}^{-T}U_{k-1}^{-1}  U_{k-1}^{-T}y_k ; y_k^T U_{k-1}^{-1}  α_k² + ‖y_k‖² ] = (1/n) [ X_{:,i_{k-1}}^T X_{:,i_{k-1}} + λI   X_{:,i_{k-1}}^T X_{:,i_k} ; X_{:,i_k}^T X_{:,i_{k-1}}   ‖X_{:,i_k}‖² + λ ],
Algorithm 1 Causal Discovery via Cholesky Factorization (CDCF)
1: input: Data matrix X ∈ ℝ^{n×p}, truncation threshold ω > 0, and tuning parameter γ.
2: output: Adjacent matrix A.
3: Set i = [1, 2, . . . , p], R = ‖X‖²_{2,∞} and λ = γ (log p / n) R;
4: Set ℓ = argmin {‖X_{:,i_1}‖, ‖X_{:,i_2}‖, . . . , ‖X_{:,i_p}‖};
5: Exchange i_1 and i_ℓ in i; Set U_1 = √( n / (‖X_{:,i_ℓ}‖² + λ) );
6: for k = 2, 3, . . . , p do
7:   for j = k, k + 1, . . . , p do
8:     y_j = (1/n) U_{k-1}^T X_{:,i_{k-1}}^T X_{:,i_j};
9:     α_j = √( (1/n)‖X_{:,i_j}‖² + λ − ‖y_j‖² );
10:  end for
11:  (V) ℓ = argmin_{k≤j≤p} α_j²;
     (S) ℓ = argmin_{k≤j≤p} ‖U_{k-1} y_j‖₁;
     (VS) ℓ = argmin_{k≤j≤p} ‖U_{k-1} y_j‖₁ · √| α_j² − (1/(k−1)) Σ_{h=1}^{k-1} 1/[U_{k-1}]²_{hh} |;
12:  Exchange i_k and i_ℓ in i;
13:  Set U_k = [ U_{k-1}  −(1/α_ℓ) U_{k-1} y_ℓ ; 0  1/α_ℓ ];
14: end for
15: return A = [TRIU(TRUNCATE(U_p, ω))]_{REVERSE(i),REVERSE(i)}.
where the last equality is due to (6). It follows that
y_k = (1/n) U_{k-1}^T X_{:,i_{k-1}}^T X_{:,i_k},   α_k = √( (1/n)‖X_{:,i_k}‖² + λ − ‖y_k‖² ).   (7)
A direct calculation then gives
U_k = [ U_{k-1}^{-1}  y_k ; 0  α_k ]^{-1} = [ U_{k-1}  −(1/α_k) U_{k-1} y_k ; 0  1/α_k ].   (8)
By (8), once i_k is settled, we can obtain the last column of U_k. Our task remains to select i_k from {1, . . . , p} \ {i_1, . . . , i_{k-1}}. There are several ways to accomplish this task. We propose three criteria to select i_k. First, we need to compute α_j and y_j by (7) for all possible j (i_j ∈ {1, . . . , p} \ {i_1, . . . , i_{k-1}}). Then we select i_k according to one of the following criteria:
(V) i_k = argmin_{k≤j≤p} α_j². Under the assumption that the noise variance of the child variable is approximately larger than that of its parents, it is reasonable/natural to select the index that has the lowest estimate of the noise variance. This criterion is guaranteed to find the correct permutation i* with high probability, as shown in Section 2.4.
(S) i_k = argmin_{k≤j≤p} ‖U_{k-1} y_j‖₁. Using (3) and (6), we know that U_p intends to estimate (I − T)Σ̂^{-1}. When the adjacent matrix T is sparse and the noise variables are independent (i.e., Σ̂ is diagonal), we would like to select the index leading to the sparsest column of U_k. This criterion is especially useful when the number of samples is small; see Tables B.1, B.2 and B.3 in the appendix.
(VS) i_k = argmin_{k≤j≤p} ‖U_{k-1} y_j‖₁ · √| α_j² − (1/(k−1)) Σ_{h=1}^{k-1} 1/[U_{k-1}]²_{hh} |. We empirically combine criterion (V) and criterion (S) to take both aspects (variance and sparsity) into account. Numerically, we found that this criterion achieves the best performance on real-world data.
The diagonal augmentation trick in (6) is commonly used to obtain an invertible and well-conditioned estimate of the covariance matrix (see, e.g., Ledoit & Wolf (2004)). Such a trick not only ensures that our algorithm does not break down due to the singularity of the sample covariance matrix, but also stabilizes the Cholesky factorization, especially when the sample size is insufficient. In addition, by setting λ = O(log p / n), the error bound between the population covariance matrix and the augmented sample covariance matrix does not become worse (see Lemma ?? in the appendix). This trick significantly improves the ability to recover the DAG, especially when the samples are insufficient; see Tables B.4, B.5 and B.6 in the appendix.
The detailed algorithm is summarized in Algorithm 1. Some comments and implementation details follow. In line 4, we select the very initial value ℓ = argmin {‖X_{:,i_1}‖, ‖X_{:,i_2}‖, . . . , ‖X_{:,i_p}‖}. In line 5, we exchange i_1 and i_ℓ in i and calculate U_1 = √( n / (‖X_{:,i_ℓ}‖² + λ) ). In lines 6 to 14, we iteratively calculate U_k and update the permutation order i until all the indices are settled. In line 15, we truncate U, take its strict upper triangular part (denoted by "TRIU"), and re-permute the predicted adjacent matrix back to the original order according to the permutation order i. Specifically, the truncation is done column-wise. By (8), the value of [U_p]_{:,k} is inversely proportional to α_k. So, for column k, we set ω_k = ω/α_k and do the truncation: [U_p]_{ik} is set to zero if |[U_p]_{ik}| < ω_k. On output, node i connects to node j in G if |A_{ij}| > 0.
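For illustration, here is a minimal NumPy sketch of Algorithm 1 with criterion (V), the diagonal augmentation, and the column-wise truncation; the default constants are illustrative, and the code favors clarity over the optimizations discussed next.

```python
import numpy as np

def cdcf_v(X, omega=0.3, gamma=2.0):
    """A sketch of CDCF with selection criterion (V); returns a binary adjacency estimate."""
    n, p = X.shape
    lam = gamma * np.log(p) / n * np.max(np.sum(X ** 2, axis=1))  # lambda = gamma*(log p / n)*R
    C = X.T @ X / n + lam * np.eye(p)                             # augmented sample covariance
    order, remaining = [], list(range(p))
    U = np.zeros((p, p))                                          # inverse Cholesky factor, built column by column
    alphas = np.zeros(p)

    for k in range(p):
        rem = np.array(remaining)
        if k == 0:
            Y = np.zeros((0, len(rem)))
        else:
            Y = U[:k, :k].T @ C[np.ix_(order, rem)]               # y_j for every candidate j, cf. (7)
        scores = C[rem, rem] - np.sum(Y ** 2, axis=0)             # alpha_j^2 for every candidate
        best = int(np.argmin(scores))                             # criterion (V): smallest conditional variance
        j, y = remaining.pop(best), Y[:, best]
        a = np.sqrt(max(scores[best], 1e-12))
        if k > 0:
            U[:k, k] = -U[:k, :k] @ y / a                         # last column of U_k, cf. (8)
        U[k, k] = 1.0 / a
        alphas[k] = a
        order.append(j)

    # Column-wise truncation with omega_k = omega / alpha_k, strict upper triangle, re-permutation.
    Utr = np.where(np.abs(U) < omega / alphas[None, :], 0.0, U)
    pos = np.argsort(order)                                       # position of each variable in the recovered order
    A = np.triu(Utr, k=1)[np.ix_(pos, pos)]
    return (np.abs(A) > 0).astype(int)
```

Applied to data from the linear-SEM sketch in Section 2.1, an estimate A = cdcf_v(X) can be compared directly against the support of the true W.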
Time Complexity. Note that we do not have to re-calculate the matrix product X_{:,i_{k-1}}^T X_{:,i_j} in line 8, since we can compute C at a cost of O(p²n) at the beginning. Besides, at step k we have already calculated U_{k-2}^T X_{:,i_{k-1}}^T X_{:,i_j} at the previous step, so we only need to calculate the last entry of y_j, which is the inner product between two k-dimensional vectors, at a cost of O(p) in the worst case. Overall, the time complexity of CDCF is O(p³ + p²n). When n > p, the complexity becomes O(p²n), which is equivalent to the complexity of calculating the covariance matrix. Additionally, the inner loop (lines 7 to 10) of CDCF can be executed in parallel, which makes the algorithm friendly to GPUs and suitable for large-scale computation.
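As a rough usage example (reusing simulate_linear_sem and cdcf_v from the sketches above; timings will vary with hardware and with how aggressively the inner loop is vectorized):

```python
import time
import numpy as np

X, W = simulate_linear_sem(p=300, n=3000, seed=1)   # simulator from the Section 2.1 sketch
t0 = time.time()
A = cdcf_v(X)                                       # CDCF sketch from Section 2.3
print(f"elapsed: {time.time() - t0:.2f}s, entry disagreements: {np.sum(A != (np.abs(W) > 0))}")
```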
2.4 EXACT DAG STRUCTURE RECOVERY
The following theorem tells that our algorithm is able to recover the DAG exactly with high probability under proper assumptions.
Theorem 2.1. Let x ∈ ℝ^p be a zero-mean random vector and C = E(xx^T) ∈ ℝ^{p×p} be the covariance matrix. Let x₁, . . . , x_n be n independent samples and Ĉ = (1/n) Σ_{k=1}^n x_k x_k^T be the sample covariance estimator. Assume ‖C − Ĉ‖ ≤ ε for some ε > 0. Denote Ĉ_λ = Ĉ + λI, where λ = O(ε) ≥ 0 is a parameter. Let the Cholesky factorizations of C = E(xx^T) and Ĉ_λ be C = LL^T and Ĉ_λ = L̂L̂^T, respectively, where L and L̂ are both lower triangular. For the linear SEM model (1), assume (2) and (4), and for k ∈ Pa_G(j), δ = inf_{k∈Pa_G(j)} δ_{jk} > 0, where
δ_{jk} = σ²_{i*_j} + ‖Σ̂_n [(I − T)^{-1}]_{k:j−1,k}‖² − σ²_{i*_k}.
If δ ≥ 4(ε + λ) and ‖L^{-1}‖²(ε + λ) < 3/4, then CDCF-V is able to recover P exactly. In addition, it holds that
‖TRIU(U_p) − T‖_max ≤ 4 ‖Σ̂_*^{-1}(I − T)^T‖²_{2,∞} ‖(I − T) Σ̂_*^{-T}‖_{2,∞} (ε + λ),
where TRIU(U_p) stands for the strictly upper triangular part of U_p, and U_p is the output of the outer loop of Algorithm 1 with criterion (V).
Hence, we know that when T is sparse, we may recover its topology structure by truncating U_p.
Proposition 1. Let the rows N_{i,:} be independent and bounded, sub-Gaussian, or regular polynomial-tail; then for n > N(ε), it holds that ‖Ĉ_xx − C_xx‖ ≤ ε w.h.p. Specifically,
N(ε) ≥ C₁ log p · ( ‖(I − T)^{-1}‖² ‖C_nn‖ / ε )², for the bounded class;
N(ε) ≥ C₂ p · ( ‖(I − T)^{-1}‖² ‖C_nn‖ / ε )², for the sub-Gaussian class;
N(ε) ≥ C₃ p · ( ‖(I − T)^{-1}‖² ‖C_nn‖ / ε )^{2(1+r^{-1})}, for the regular polynomial-tail class.
The proofs are provided in Appendix A. This theorem and proposition also indicate that the sample complexity of our algorithm is O(p). This sample complexity is better than those of previous methods; see Table 2.1 for a detailed comparison.
3 EXPERIMENTS
In this section, we apply our algorithm to synthetic data sets, proteins data set and knowledge base data set, respectively, to illustrate the efficiency and effectiveness of our algorithm.
3.1 LINEAR SEM
We evaluate the proposed methods on simulated graphs from two well-known ensembles of random graph types: Erdös–Rényi (ER) (Gilbert, 1959) and Scale-free (SF) (Barabási & Albert, 1999). The average edge number per node is denoted after the graph type. For example, ER2 represents two edges per node on average. After the graph structure is settled, we assign uniformly random edge weights to obtain a weight matrix W . We generate the observation data X from the linear SEM with three noise distributions: Gaussian, Gumbel, Exponential.
We chose our baseline methods as NOTEARS (Zheng et al., 2018), DAG-GNN (Yu et al., 2019), CORL (Wang et al., 2021), NPVAR (Gao et al., 2020), and EQVAR (Chen et al., 2019). Other methods such as PC algorithm (Spirtes et al., 2000), LiNGAM (Shimizu et al., 2006), FGS (Ramsey et al., 2017), MMHC (Tsamardinos et al., 2006), L1OBS (Schmidt et al., 2007), CAM (Bühlmann et al., 2013), RL-BIC2 (Zhu et al., 2020), A*LASSO (Xiang & Kim, 2013), LISTEN (Ghoshal & Honorio, 2018), US (Park, 2020) perform worse than or approximately equal to the selected baselines, and the results can be found in the corresponding papers.
Table 3.1 presents the structural Hamming distance (SHD) of the baseline methods and our method on 3000 samples (n = 3000). The number of nodes p is noted in the first column; the graph type and edge level are noted in the second column. We only report the SHD of different algorithms due to the page limitation, and we find that other metrics such as the true positive rate (TPR), false discovery rate (FDR), false positive rate (FPR), and F1 score show similar comparative trends to SHD. We also test bottom-up EQVAR, which is equivalent to LISTEN; the result is worse than top-down EQVAR (EV-TD) in this synthetic experiment, so we do not include it in the table. For p = 1000 graphs, we only report the results of EV-TD and CDCF, since the other algorithms require too much time (longer than a week) to recover a DAG. We test our algorithm with different variations according to the criteria (V, S, VS) introduced in Section 2.3, and with the diagonal augmentation trick noted by a "+" postfix. For example, "CDCF-V" means CDCF with the V criterion and λ = 0, and "CDCF-V+" means CDCF with the V criterion and λ = O(log p / n). The implementation details are in Appendix B. We report the result of CDCF-V+ here, and the results of other CDCF variations can be found in Appendix Table B.4. We run our methods on ten randomly generated graphs and report the mean and variance in the table. Figure 3.1 plots the SHD results tested on 100-node graphs recovered from different sample sizes. We choose EV-TD and high-dimension top-down (EV-HTD) as baselines when p > n and p ≤ n, respectively. As we can see from the results, CDCF-V+ achieves significantly better performance compared with previous baselines.
Table 3.2 shows the running time, which is tested on a single 2.3 GHz Intel Core i5 CPU. Besides, parallel calculation of the matrix multiplications on a GPU makes the algorithm even faster: recovering 5000- and 10000-node graphs from 3000 samples on an Nvidia A100 GPU takes approximately 400 and 2400 seconds, respectively. For comparison, EV-TD costs approximately 100 hours to recover a 1000-node DAG from 3000 samples. As illustrated in the table, CDCF is approximately dozens or hundreds of times faster than EV-TD and LISTEN, and tens of thousands of times faster than NOTEARS, as CDCF does not have to update parameters with gradients.
Due to the page limitation, further experiments and discussions of the ablation study (Figures B.3 to B.14, Tables B.1 to B.6), the choice of λ (Tables B.7 to B.10), and the performance under different noise distributions (Figures B.1, B.2) and deviations (Tables B.11, B.12, B.13) are given in Appendix B.
3.2 PROTEINS DATA SET
We consider a bioinformatics data set (Sachs et al., 2005) consisting of continuous measurements of expression levels of proteins and phospholipids in human immune system cells. This is a widely used data set for research on graphical models, with experimental annotations accepted by the biological research community. Following the settings of previous algorithms, we noticed that different papers adopted different observations. To include them all, we considered the observational 853 samples from the "CD3, CD28" simulation tested by Teyssier & Koller (2005); Lachapelle et al. (2020); Zhu et al. (2020), and all 7466 samples from nine different simulations tested by Zheng et al. (2018; 2020); Yu et al. (2019).
We report the experimental results for both settings in Table 3.3. The implementation codes of the baselines are introduced in the appendix, and we use the default hyper-parameter settings provided in their codes. The evaluation metrics are FDR, TPR, FPR, SHD, the predicted nodes number (N), precision (P), and F1 score. As the recall score is equal to TPR, we do not include it in the table. In both settings, CDCF-VS+ achieves state-of-the-art performance. 1 Several reasons make the recovered graph not exactly the same as the expected one. The ground truth graph suggested by the paper mixes direct and indirect edges. Under the settings of the SEM, the node "PKA" is quite similar to a leaf node since most of its edges are indirect, while the ground truth graph marks them as outgoing edges. Non-linearity would not be a major issue here, since NOTEARS and our algorithm both achieve decent results. In the meantime, we do not deny that a further extension of our algorithm to non-linear representations could yield an improvement on this data set.
3.3 KNOWLEDGE BASE DATA SET
We test our algorithm on the FB15K-237 data set (Toutanova et al., 2015), in which the knowledge is organized as {Subject, Predicate, Object} triplets. The data set has 15K triplets and 237 types of predicates. In this experiment, we only consider the single-jump predicates between the entities, which leaves 97 predicates. We want to discover the causal relationships between the predicates. We organize the observation data so that each sample corresponds to an entity, with awareness of its position (Subject or Object), and each variable corresponds to a predicate in this knowledge base.
1 For NOTEARS-MLP, Table 3.3 reports the results reproduced by the code provided in Zheng et al. (2020).
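One plausible way to build this observation matrix is sketched below; the triplets and the exact encoding are illustrative assumptions, not the precise preprocessing used in the paper.

```python
import numpy as np

# Hypothetical triplets; in FB15K-237 these would be read from the released data files.
triplets = [("m.01", "film.film.directed_by", "m.02"),
            ("m.03", "music.artist.genre", "m.04"),
            ("m.01", "film.film.genre", "m.05")]
predicates = sorted({pred for _, pred, _ in triplets})
col = {pred: j for j, pred in enumerate(predicates)}

rows = {}                                   # one observation per (entity, position) pair
for s, pred, o in triplets:
    for key in [(s, "subject"), (o, "object")]:
        rows.setdefault(key, np.zeros(len(predicates)))
        rows[key][col[pred]] += 1.0

X = np.vstack(list(rows.values()))          # samples x predicates, the observation matrix for CDCF
```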
In Figure 3.2, we give the adjacent weighted matrix of the generated graph and several examples with high confidence (larger than 0.5). In the left figure, the axis labels show the first capital letter of the domain of the relations; some of them are replaced with a dot to save space. The exact domain names and the picture with the full predicate names are provided in the appendix. The domain clusters are denoted by black boxes on the diagonal of the adjacent matrix. The red boxes denote cross-domain relations that are worth paying attention to. Consistent with human intuition, the recovered relationships inside a domain are denser than those across domains. Among the cross-domain relations, we found that the predicates in the domain "TV" ("T") have many relations with the domain "Film" ("F"), and the domain "Broadcast" (last row) has many relations with the domain "Music" ("M"). Several cases of the predicted causal relationships are listed on the right side of Figure 3.2; we can see that the discovered indication relations between predicates are quite reasonable.
4 CONCLUSION AND FUTURE WORK
In this paper, we proposed a topology search algorithm for the DAG structure recovery problem. Our algorithm is better than the existing methods in both time and sample complexities. To be specific, the time complexity of our algorithm is O(p²n + p³), while the fastest algorithm before is O(p⁵n) (Park, 2020; Gao et al., 2020), where p and n are the numbers of nodes and samples, respectively. Under different assumptions, our algorithm takes O(log(p)) or O(p) samples to exactly recover the DAG structure. Experimental results on synthetic data sets, proteins data sets, and a knowledge base data set demonstrate the efficiency and effectiveness of our algorithm. For synthetic data sets, compared with previous baselines, our algorithm improves the performance with a significant margin and is at least tens or hundreds of times faster. For the proteins data set, we achieve state-of-the-art performance. For the knowledge base data set, we can observe many reasonable structures of the discovered DAG.
The proposed algorithm is developed under the assumption of a linear SEM. Generalizing CDCF to non-linear SEMs would be a valuable and important research topic. Learning a representation of the observed data for better structure reconstruction via the CDCF algorithm, which requires the algorithm to be differentiable, is also an attractive problem. To deal with extremely large-scale problems, such as millions of nodes, implementing CDCF via sparse matrix storage and computation on the GPU is a promising way to further improve computational performance.
A PROOF OF THEOREM 2.1
In this section, we first give several lemmas, then prove Theorem 2.1.
Lemma A.1. Let x ∈ ℝ^p be a zero-mean random vector and C = E(xx^T) ∈ ℝ^{p×p} be the covariance matrix. Let x₁, . . . , x_n be n independent samples and Ĉ = (1/n) Σ_{k=1}^n x_k x_k^T be the sample covariance estimator. Assume ‖C − Ĉ‖ ≤ ε for some ε > 0. Denote Ĉ_λ = Ĉ + λI, where λ = O(ε) ≥ 0 is a parameter. Let the Cholesky factorizations of C = E(xx^T) and Ĉ_λ be C = LL^T and Ĉ_λ = L̂L̂^T, respectively, where L and L̂ are both lower triangular. If ‖L^{-1}‖²(ε + λ) < 3/4, then
|‖L_{i,:}‖² − ‖L̂_{i,:}‖²| ≤ ε + λ = O(ε), for 1 ≤ i ≤ p;   (9)
|[L^{-1}]_{ij} − [L̂^{-1}]_{ij}| ≤ 4 ‖L^{-1}‖²_{2,∞} ‖L^{-T}‖_{2,∞} (ε + λ) = O(ε), for i > j.   (10)
Proof. For all 1 ≤ i ≤ p, we have
|‖L_{i,:}‖² − ‖L̂_{i,:}‖²| = |C_{ii} − [Ĉ_λ]_{ii}| ≤ ‖C − Ĉ_λ‖ ≤ ‖C − Ĉ‖ + λ ≤ ε + λ,   (11)
which completes the proof for (9).
Next, we show (10). Let
L−1L̂ = I + F , (I + F )(I + F )T = I +E. (12)
We know that
L̂−1 −L−1 = [(I + F )−1 − I]L−1 = −F (I + F )−1L−1, (13)
E = L−1L̂L̂TL−T − I = L−1(Ĉλ −C)L−T. (14)
Then it follows from (13) that for i > j
|[L−1]ij−[L̂−1]ij | ≤ ‖Fi,1:i−1‖‖[(I+F )−1L−1]:,j‖ ≤ ‖Fi,1:i−1‖‖(I+F )−1‖‖L−T‖2,∞. (15)
First, we give an upper bound for ‖(I + F)^{-1}‖. Using (12), we have (I + F)^{-T}(I + F)^{-1} = (I + E)^{-1}. It follows that
‖(I + F)^{-1}‖ = ‖(I + F)^{-T}(I + F)^{-1}‖^{1/2} = ‖(I + E)^{-1}‖^{1/2} ≤ 1/√(1 − ‖E‖) ≤ 1/√(1 − ‖L^{-1}‖²‖Ĉ_λ − C‖),   (16)
where the last inequality uses (14).
Second, we give upper bound for ‖Fi,1:i−1‖. It follows from the second equality of (12) that
(1 + Fii) 2 + ‖Fi,1:i−1‖2 = 1 +Eii. (17)
Therefore,
‖F_{i,1:i−1}‖² ≤ |(1 + F_{ii})² − 1| + E_{ii} ≤(a) (L̂²_{ii} − L²_{ii})/L²_{ii} + E_{ii} ≤(b) (ε + λ)/L²_{ii} + ‖L^{-1}‖²_{2,∞}‖Ĉ_λ − C‖ ≤(c) 2‖L^{-1}‖²_{2,∞}(ε + λ),   (18)
where (a) uses (12), (b) uses (9) and (14), and (c) uses ‖C − Ĉ‖ ≤ ε. Substituting (18) and (16) into (15), we get
|[L^{-1}]_{ij} − [L̂^{-1}]_{ij}| ≤ 2‖L^{-1}‖²_{2,∞}‖L^{-T}‖_{2,∞} (ε + λ)/√(1 − ‖L^{-1}‖²(ε + λ)).   (19)
The conclusion follows since ‖L^{-1}‖²(ε + λ) < 3/4.
Theorem A.2. Let x ∈ ℝ^p be a zero-mean random vector and C = E(xx^T) ∈ ℝ^{p×p} be the covariance matrix. Let x₁, . . . , x_n be n independent samples and Ĉ = (1/n) Σ_{k=1}^n x_k x_k^T be the sample covariance estimator. Assume ‖C − Ĉ‖ ≤ ε for some ε > 0. Denote Ĉ_λ = Ĉ + λI, where λ = O(ε) ≥ 0 is a parameter. Let the Cholesky factorizations of C = E(xx^T) and Ĉ_λ be C = LL^T and Ĉ_λ = L̂L̂^T, respectively, where L and L̂ are both lower triangular. For the linear SEM model (1), assume (2) and (4), and for k ∈ Pa_G(j), δ = inf_{k∈Pa_G(j)} δ_{jk} > 0, where
δ_{jk} = σ²_{i*_j} + ‖Σ̂_n [(I − T)^{-1}]_{k:j−1,k}‖² − σ²_{i*_k}.
If δ ≥ 4(ε + λ) and ‖L^{-1}‖²(ε + λ) < 3/4, then CDCF-V is able to recover P exactly. In addition, it holds that
‖TRIU(U_p) − T‖_max ≤ 4 ‖Σ̂_*^{-1}(I − T)^T‖²_{2,∞} ‖(I − T) Σ̂_*^{-T}‖_{2,∞} (ε + λ),
where TRIU(U_p) stands for the strictly upper triangular part of U_p, and U_p is the output of the outer loop of Algorithm 1 with criterion (V).
Proof. For the SEM model (1), denote Ĉ_* = E((1/n) X̂^T X̂) and Σ̂²_* = E((1/n) N̂^T N̂) = Σ̂_n^T Σ̂_n. We have (5), i.e.,
Ĉ_* = (I − T)^{-T} Σ̂²_* (I − T)^{-1} = (I − T)^{-T} Σ̂_n^T Σ̂_n (I − T)^{-1}.   (20)
When the permutation i* = [i*_1, . . . , i*_p] is exactly recovered, U_p in CDCF-V satisfies
Ĉ_λ = (1/n) X_{:,i*}^T X_{:,i*} + λI = U_p^{-T} U_p^{-1}.   (21)
Denote i*_j = [i*_1, . . . , i*_j] for all j = 1, . . . , p. Consider the kth diagonal entries of (20) and (21). By calculation, we get
[Ĉ_*]_{kk} = [(I − T)^{-1}]_{:,k}^T Σ̂_n^T Σ̂_n [(I − T)^{-1}]_{:,k} = σ²_{i*_k} + ‖u_k‖²,   (22)
[Ĉ_λ]_{kk} = (1/n)‖X_{i*_k}‖² + λ = 1/U²_{kk} + ‖û_k‖²,   (23)
where
u_k = [Σ̂_n]_{1:k−1,1:k−1} (I_{k−1} − T_{1:k−1,1:k−1})^{-1} T_{1:k−1,k},   û_k = (1/n) U_{k−1}^T X_{:,i*_{k−1}}^T X_{:,i*_k}.   (24)
Using ‖C − Ĉ‖ ≤ ε, we have
|[Ĉ_*]_{kk} − [Ĉ_λ]_{kk}| ≤ ‖C − Ĉ_λ‖ ≤ ‖C − Ĉ‖ + λ ≤ ε + λ.   (25)
By Lemma A.1, we have
|‖u_k‖² − ‖û_k‖²| ≤ ε + λ.   (26)
Using (22), (23), (25) and (26), we get
|σ²_{i*_k} − 1/U²_{kk}| ≤ 2(ε + λ).   (27)
Assume that i*_1, . . . , i*_{k−1} (k ≥ 1) are all correctly recovered. Without loss of generality, for k ∈ Pa_G(j), we also assume T_{k:j−1,j} ≠ 0 (otherwise, the jth and kth columns are exchangeable, and i forms another equivalent topological order for the same DAG (Sedgewick & Wayne, 2011)). Then we have, for k ∈ Pa_G(j), that
(1/n)‖X_{i*_j}‖² + λ − ‖[û_j]_{1:k−1}‖²
(a) = [Ĉ_*]_{jj} + [Ĉ_λ]_{jj} − [Ĉ_*]_{jj} − ‖[û_j]_{1:k−1}‖²
(b) ≥ [Ĉ_*]_{jj} − (ε + λ) − ‖[u_j]_{1:k−1}‖² − (ε + λ)
(c) = σ²_{i*_j} + ‖[u_j]_{k:j−1}‖² − 2(ε + λ)
(d) ≥ σ²_{i*_k} + δ − 2(ε + λ)
(e) = [Ĉ_*]_{kk} − ‖u_k‖² + δ − 2(ε + λ)
(f) ≥ [Ĉ_λ]_{kk} − ‖û_k‖² + δ − 4(ε + λ)
(g) = (1/n)‖X_{i*_k}‖² + λ − ‖û_k‖² + δ − 4(ε + λ),
where (a) uses (23), (b) and (f) use (25) and Lemma A.1, (c) uses (22), (d) is due to the assumption σ_{i*_j} ≥ σ_{i*_k} for k ∈ Pa_G(j), (e) uses (22), and (g) uses (23). Therefore, using δ > 4(ε + λ), we have
(1/n)‖X_{i*_j}‖² + λ − ‖[û_j]_{1:k−1}‖² > (1/n)‖X_{i*_k}‖² + λ − ‖û_k‖²,
which implies that i*_k can be correctly recovered. Overall, CDCF-V is able to recover the permutation P.
The upper bound for ‖TRIU(U_p) − T‖_max follows from Lemma A.1. The proof is completed.
Proposition 2 Let the rows Ni,: be independent and bounded, sub-Gaussian,² or regular polynomial-tail³; then for n > N(ε), it holds that ‖Ĉxx − Cxx‖ ≤ ε w.h.p. Specifically,
N(ε) ≥ C1 log p (‖(I − T )−1‖²‖Cnn‖/ε)², for the bounded class;
N(ε) ≥ C2 p (‖(I − T )−1‖²‖Cnn‖/ε)², for the sub-Gaussian class;
N(ε) ≥ C3 p (‖(I − T )−1‖²‖Cnn‖/ε)^{2(1+r⁻¹)}, for the regular polynomial-tail class.
Proof. For the SEM model (1), we have
‖Ĉxx − Cxx‖ ≤ ‖(I − T )−1‖²‖Ĉnn − Cnn‖ ≤ ‖(I − T )−1‖²‖Cnn‖‖Cnn^{−1/2} Ĉnn Cnn^{−1/2} − I‖, (28)
where Cxx = E(xxT) and Cnn = E(nnT) are the covariance matrices of x and n, and Ĉxx, Ĉnn are the corresponding sample covariance matrices. The three bounds listed above follow from Corollary 5.52 and Theorem 5.39 in Vershynin (2010) and Theorem 1.1 in Srivastava & Vershynin (2013), respectively.
²A random vector z is isotropic and sub-Gaussian if E(zzT) = I and there exists a constant C > 0 such that P(|vTz| > t) ≤ exp(−Ct²) for any unit vector v. Here, by “Ni,: is sub-Gaussian” we mean that Cnn^{−1/2} NTi,: is an isotropic sub-Gaussian random vector. ³A random vector z is isotropic and regular polynomial-tail if E(zzT) = I and there exist constants r > 1, C > 0 such that P(‖V z‖² > t) ≤ Ct^{−1−r} for any orthogonal projection V and any t > C · rank(V ). Here, by “Ni,: is regular polynomial-tail” we mean that Cnn^{−1/2} NTi,: is an isotropic regular polynomial-tail random vector.
B ADDITIONAL EXPERIMENTS
Here we provide implementation details and additional experiment results.
Figures B.1 and B.2 provide the results for Gumbel and Exponential noise, respectively. As the results show, our algorithm still outperforms the EQVAR method under these noise types.
Tables B.1, B.2, B.3, B.4, B.5, and B.6 give results on 100 nodes over different sample sizes and variances for our CDCF methods. As noted in Algorithm 1, we have V, S, and VS as different criteria to select the current column, with "+" representing the sample covariance matrix augmented with the scalar matrix (log p / n) I. The truncation threshold on column i is ωi = 3.5/αi, where αi is the diagonal value of the Cholesky factor. According to the results, the variant "V+" achieves the best performance when the sample size is relatively large. When the sample size is small, the sparsity criterion yields a very effective performance improvement. We also test different choices of λ = β log p / n with β ∈ {0.0, 1.0, . . . , 9.0}; the results are given in Tables B.7, B.8, B.9, and B.10. Empirically, β ∈ {1.0, 2.0} achieves better results. In practice, one can sample a relatively small, labeled sub-graph of the DAG to tune the hyper-parameters and then apply them to the large unlabeled DAG.
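For concreteness, the small Python sketch below mirrors the hyper-parameter choices described above (λ = β log p / n and the per-column truncation threshold ωi = 3.5/αi); the function and variable names are ours, not taken from any released code.

```python
import numpy as np

def cdcf_hyperparameters(n, p, beta=1.0, base_threshold=3.5):
    """Sketch of the hyper-parameter choices discussed above."""
    lam = beta * np.log(p) / n                 # diagonal augmentation: lambda = beta * log(p) / n
    def column_thresholds(alpha):
        # alpha: diagonal entries of the estimated Cholesky factor; threshold column i at 3.5 / alpha_i
        return base_threshold / np.asarray(alpha, dtype=float)
    return lam, column_thresholds

lam, thresholds = cdcf_hyperparameters(n=3000, p=100, beta=2.0)
```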
To test the performance limits of our methods, we report SHD over varying sample and node numbers in Figures B.3 to B.14, where the x-axis is the sample number (in thousands), the y-axis is the node number, and the color represents log2(SHD + 1) (the brighter the better). We provide these figures for CDCF-V+, CDCF-S+, and CDCF-VS+ across graph types, noise variances, and noise types. The figures show mean results over ten random seeds. They show that the graph can be exactly recovered on 800 nodes with approximately 6000 samples. Comparing CDCF-V+ with CDCF-S+, we find that criterion (S) hurts performance when the sample number is relatively large, while for sample numbers in {1500, 3000} and node numbers in {400, 800}, CDCF-S+ achieves better performance. This trend can also be seen in Tables B.1, B.2, and B.3. CDCF-VS+ alleviates the poor performance of CDCF-S when the data is sufficient and achieves good performance on the real-world data set.
We also test the performance on linear SEMs with monotonically increasing noise variance. Concretely, assuming the topology order is i = {i1, ..., ip}, we set the noise deviation of node k to σk = 1 + ik/p. We test Gaussian, Gumbel, and Exponential noise with this monotone noise variance; the results are reported in Tables B.11, B.12, and B.13. As the results indicate, even with different noise levels, our algorithms achieve good performance and are able to exactly recover the DAG structure when the data is sufficient.
In the results for the knowledge base data set, the axis labels of Figure 3.2 are ‘Film’, ‘People’, ‘Location’, ‘Music’, ‘Education’, ‘Tv’, ‘Medicine’, ‘Sports’, ‘Olympics’, ‘Award’, ‘Time’, ‘Organization’, ‘Language’, ‘MediaCommon’, ‘Influence’, ‘Dataworld’, ‘Business’, ‘Broadcast’, from left to right on the x-axis and top to bottom on the y-axis, respectively. The adjacency matrix plotted there is re-permuted so that relations in the same domain are close to each other, and we keep the adjacency matrix within each domain upper triangular. Such a topology is equivalent to the generated matrix under the original order.
Baseline Implementations The baselines are implemented using the code provided at the following links:
• NOTEARS, NOTEARS-MLP: https://github.com/xunzheng/notears
• NPVAR: https://github.com/MingGao97/NPVAR
• EQVAR, LISTEN: https://github.com/WY-Chen/EqVarDAG
• CORL: https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle
• DAG-GNN: https://github.com/fishmoon1234/DAG-GNN
| 1. What is the focus and contribution of the paper on learning linear structural equation models (SEMs)?
2. What are the strengths and weaknesses of the proposed algorithm compared to prior works like Ghoshal and Honorio 2018 and Chen et al 2019?
3. How does the reviewer assess the sample complexity of the proposed method, and how does it compare to other algorithms in terms of applicability in high-dimensional settings?
4. What additional experimental comparisons or analyses would the reviewer suggest to improve the understanding of the proposed method's performance? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a new algorithm for learning linear SEMs using Cholesky factorization of the covariance matrix induced by the SEM. While the paper essentially follows the ideas in Ghoshal and Honorio 2018 and Chen et al 2019, the main innovation is combining the order search step followed by parent set recovery into a single step through Cholesky factorization.
Review
The paper is well motivated and its contributions are clear: the authors develop the fastest algorithm for learning linear SEMs. However, given that existing algorithms from Ghoshal and Honorio 2018 and Chen et al 2019 are already polynomial time and are comparable to the proposed algorithm in terms of running time, I feel this is not significant. The paper doesn't contribute anything new towards identifiability of linear SEMs. The experiments are extensive and they clearly demonstrate the effectiveness of the method.
However, my main criticism is the sample complexity. The sample complexity depends on the ℓ2 norm of the p-dimensional random vectors X, which can be O(p) even when the underlying SEM is sparse (has degree d that is constant). This is not the case with the algorithm proposed in Ghoshal and Honorio, whose sample complexity grows as O(poly(d) log p). So the improved computational complexity comes at the cost of increased sample complexity, and the algorithm is not applicable in high-dimensional settings.
In the experiments, the authors compare the performance of their algorithm against LISTEN (Ghoshal and Honorio 2018) only at 3000 samples. I would have liked to see learning curves (SHD vs. a varying number of samples) for both algorithms. It is likely that LISTEN outperforms the proposed algorithm when the number of samples is low. |
ICLR | Title
Causal Discovery via Cholesky Factorization
Abstract
Discovering the causal relationship via recovering the directed acyclic graph (DAG) structure from the observed data is a challenging combinatorial problem. This paper proposes an extremely fast, easy-to-implement, and high-performance DAG structure recovery algorithm. The algorithm is based on the Cholesky factorization of the covariance/precision matrix. The time complexity of the algorithm is O(p²n + p³), where p and n are the numbers of nodes and samples, respectively. Under proper assumptions, we show that our algorithm takes O(log(p)) or O(p) samples to exactly recover the DAG structure. In both time and sample complexity, our algorithm improves on previous algorithms. On synthetic and real-world data sets, our algorithm is significantly faster than previous methods and achieves state-of-the-art performance.
1 INTRODUCTION
As Schelling said: “The whole world is thoroughly caught in reason, but the question is: how did it get caught in the network of reason in the first place?” (Kuhn, 1942; Žižek & von Schelling, 1997). Learning the causal relations between variables is a fundamental problem with many applications in biology, machine learning, medicine, and economics. The problem is usually formulated as finding a directed acyclic graph (DAG) from an observational joint distribution. Unfortunately, learning the DAG structure from observations has been proved to be an NP-hard problem (Chickering, 1995; Chickering et al., 2004).
The problem is generally formulated as the structural equation model (SEM), where the variable of a child node is a function of its parents with additional noises. Depending on the types of functions (linear or non-linear) and noises (Gaussian, Gumbel, etc.), there are several SEM families, e.g., Spirtes et al. (2000); Geiger & Heckerman (1994); Shimizu et al. (2006). In general, the graph can be identified from the joint distribution only up to Markov equivalence classes. Zhang & Hyvarinen (2012); Peters et al. (2014); Peters & Bühlmann (2014); Gao et al. (2020) propose several SEM forms that make the graph fully identifiable from the observed data.
Various algorithms had been proposed to deal with the problem. Search-based algorithms (Chickering, 2002; Friedman & Koller, 2003; Ramsey et al., 2017; Tsamardinos et al., 2006; Aragam & Zhou, 2015; Teyssier & Koller, 2005; Ye et al., 2019; Lv et al., 2021) generally adopt a score (e.g., BIC (Peters et al., 2014) score, Cholesky score (Ye et al., 2019), remove-fill score (Squires et al., 2020)) to measure the fitness of different graphs over data and then search over the legal DAG space to find the structure that achieves the highest score. However, exhaustive search over the legal DAG space is infeasible when p is large (e.g., there are 4.1e18 DAGs for p = 10 (Sloane et al., 2003)). Those algorithms go in quest of a trade-off between the performance and the time complexity.
Since Zheng et al. (2018) proposed an approach that converts the traditional combinatorial optimization problem into a continuous program, many methods (Yu et al., 2019; Lee et al., 2019; Ng et al., 2019a;b; Zheng et al., 2020; Lachapelle et al., 2020; Squires et al., 2020; Zhu et al., 2021) have been proposed. Those algorithms formalize the problem as a data reconstruction task with various differentiable constraints on the DAG adjacent matrix and solve it via the augmented Lagrangian method. These algorithms are able to utilize neural networks to approximate the complicated relations between the features in the observed data and achieve good performances. Recently, reinforcement learning based algorithms (Zhu et al., 2020; Wang et al., 2021) also improved the performance by exploring the possible DAG structure candidates. The algorithms update the parameters of the model
via policy gradient as long as it explored a better DAG structure with a higher reward which measures how well an explored structure meets the requirement of DAG and the observed data.
Topology order search algorithms (TOSA) (Ghoshal & Honorio, 2017; 2018; Chen et al., 2019; Gao et al., 2020; Park, 2020) decompose the DAG learning problem into two phases: (i) Topology order learning via conditional variance of the observed data; (ii) Graph estimation depends on the learned topology order. Those algorithms reduce the computation complexity into polynomial time and are guaranteed to recover the DAG structure under some identifiable assumptions. Our method in this paper is also a topology order search algorithm and it merges the two phases in TOSA into one. In each iteration, it attempts to find a child or a contemporary of the current node. Meanwhile, it also determines the corresponding column vector of the adjacent matrix. The mergence brings three main differences: First, the topology order in TOSA is recovered purely based on the conditional variance of the observed data, whereas our method may also take the sparsity of the adjacent matrix into account; Second, the graph LASSO methods, which are commonly adopted to estimate the graph in the second phase in TOSA, encourage the sparsity of the precision matrix, whereas our method is able to encourage the sparsity of the adjacent matrix; Third, the time complexity is reduced significantly. To be specific, the time complexity of our algorithm is O(p2n + p3), while the fastest algorithm before is O(p5n) (Park, 2020; Gao et al., 2020). Here p and n are the numbers of nodes and samples, respectively. In addition, under proper assumptions, we show that our algorithm takes O(log(p)) or O(p) samples to exactly recover the DAG structure. Compared with previous TOSA algorithms, the sample complexity of our method is much better. Experimental results on synthetic data sets, proteins data sets, and knowledge base data set demonstrate the efficiency and effectiveness of our algorithm. For synthetic data sets, compared with previous baselines, our algorithm improves the performance with a significant margin and at least tens or hundreds of times faster. For the proteins data set, we achieve state-of-the-art performance. For the knowledge base data set, we can observe many reasonable structures of the discovered DAG. Our code is uploaded as supplementary material and will be open-sourced upon the acceptance of this paper.
The rest of this paper is organized as follows. In Section 2, we present our algorithm together with the theoretical analysis. In Section 3, numerical results on synthetic data sets, proteins data set, and knowledge base data set are given. Finally, the paper is concluded in Section 4.
Notations. The symbol ‖ · ‖ stands for the Euclidean norm of a vector or the spectral norm of a matrix. For a vector x = [x1, x2, . . . , xp] ∈ Rp, ‖ · ‖1 stands for the ℓ1-norm, i.e., ‖x‖1 = ∑ᵖᵢ₌₁ |xi|. For a matrix X = [Xij ] ∈ Rm×n, ‖ · ‖2,∞ stands for the two-to-infinity norm, i.e., ‖X‖2,∞ = max1≤i≤m ‖Xi,:‖; ‖ · ‖max stands for the max norm, ‖X‖max = maxi,j |Xij |.
2 CAUSAL DISCOVERY VIA CHOLESKY FACTORIZATION (CDCF)
In this section, we first present some preliminaries on DAGs and then motivate our algorithm. Next, the detailed algorithm and the theoretical guarantees for its exact recovery are given.
2.1 PRELIMINARIES
We assume the observed data is entailed by a DAG G = (p, V, E), where p is the number of nodes, and V = {v1, ..., vp} and E = {(vi, vj) | i, j ∈ {1, ..., p}} are the sets of nodes and edges, respectively. Each node vi corresponds to a random variable Xi. The observed data matrix is X = [x1, ..., xp] ∈ Rn×p, where xi consists of n i.i.d. observations of the random variable Xi. The joint distribution of X is P(X) = ∏ᵖᵢ₌₁ P(Xi | PaG(Xi)), where PaG(Xi) := {Xj | (vi, vj) ∈ E} is the set of parents of node Xi.
Given X , we seek to recover the latent DAG topology structure for the joint probability distribution (Hoyer et al., 2008; Peters et al., 2017). Generally, X is modeled via a structural equation model (SEM) with the form
Xi = fi(PaG(Xi)) +Ni, (i = 1, ..., p),
where fi is an arbitrary function representing the relation between Xi and its parents, Ni is the jointly independent noise variable.
In this paper, we focus on the linear SEM defined by Xi = Xwi +Ni, (i = 1, ..., p),
where wi ∈ Rp is a weighted column vector. Let W = [w1, . . . ,wp] ∈ Rp×p be the weighted adjacency matrix, N = [n1, . . . ,np] ∈ Rn×p be an additive independent noise matrix, where ni is n i.i.d observations following the noise variable Ni. Then the linear SEM model can be formulated as
X = XW +N . (1)
We assume the noise deviation of the child variable is approximately larger than that of its parents (see Theorem 2.1 for details). Following this assumption, a classical identifiable form of SEM is the linear-Gaussian SEM, where all Ni are i.i.d. and homoscedastic (Peters & Bühlmann, 2014).
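As an illustration of the data model (1), the sketch below samples n observations from a small hand-written linear SEM; the edge weights and the Gaussian noise are arbitrary example choices, not the paper's benchmark settings.

```python
import numpy as np

def sample_linear_sem(W, n, noise_scale=1.0, seed=0):
    """Draw n samples from the linear SEM X = XW + N of equation (1).

    W is the weighted adjacency matrix of a DAG, so (I - W) is invertible and
    the model can be solved as X = N (I - W)^{-1}."""
    rng = np.random.default_rng(seed)
    p = W.shape[0]
    N = noise_scale * rng.standard_normal((n, p))   # Gaussian noise; Gumbel/Exponential work analogously
    return N @ np.linalg.inv(np.eye(p) - W)

# toy chain 1 -> 2 -> 3
W = np.array([[0.0, 1.5, 0.0],
              [0.0, 0.0, -0.8],
              [0.0, 0.0, 0.0]])
X = sample_linear_sem(W, n=1000)
print(X.shape)  # (1000, 3)
```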
2.2 ALGORITHM MOTIVATION
As proposed in McKay et al. (2003); Nicholson (1975), a graph is DAG if and only if the corresponding weighted adjacent matrix W can be decomposed into
W = PTPT, (2)
where P is a permutation matrix, T is a strict upper triangular matrix, i.e., Tij = 0 for all i ≤ j.
We denote the scaled permuted data matrix as X̂ = 1√ n XP , the scaled permuted noise matrix as N̂ = 1√ n NP , and the permutation order [i∗1, i ∗ 2 . . . , i ∗ p] = [1, 2, . . . , p]P . We can rewrite (1) as
X̂ = X̂T + N̂ .
Then it follows that X̂ = N̂(I − T )−1. (3)
Let E(N̂TN̂) = Σ̂2∗ = Σ̂TΣ̂, (4)
where Σ̂²∗ is the covariance matrix of the noise variables and Σ̂ is upper triangular – the Cholesky factor of Σ̂²∗. Let the diagonal entries of Σ̂ be σ²i∗1, σ²i∗2, . . . , σ²i∗p. We know that σ²i∗k is the conditional variance of Ni∗k.
Now using (3) and (4), we have the covariance matrix of the permuted data:
Ĉ∗ = E(X̂TX̂) = (I − T )−TE(N̂TN̂)(I − T )−1 = (I − T )−TΣ̂TΣ̂(I − T )−1. (5)
Let L = (I − T )−TΣ̂T, then Ĉ∗ = LLT , which is the Cholesky factorization of the covariance matrix Ĉ∗ since L is lower triangular. Furthermore, we can see that the diagonal entries of L are the same as that of Σ̂, i.e., Lkk = σi∗k , the conditional variances of Xi∗k and Ni∗k are the same.
The task becomes to find the permutation i∗ = [i∗1, i ∗ 2, . . . , i ∗ p] and an upper triangular matrix U such that U−TU−1 is a good approximation of the empirical estimation of the permuted covariance matrix Ĉ = 1nX T :,i∗X:,i∗ , and U satisfies some additional constraints, such as the sparsity, etc.
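The identity behind this reduction can be checked numerically: for a strictly upper triangular T and diagonal noise deviations, the Cholesky factor of the permuted covariance coincides with (I − T )−TΣ̂T, and its diagonal carries the noise deviations. The snippet below is a small sanity check under these simplifying assumptions (diagonal Σ̂), not the recovery algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5
# random strictly upper triangular T and unit noise deviations (homoscedastic case)
T = np.triu(rng.uniform(0.5, 1.5, (p, p)), k=1) * (rng.random((p, p)) < 0.5)
sigma = np.ones(p)

Iinv = np.linalg.inv(np.eye(p) - T)
C_star = Iinv.T @ np.diag(sigma**2) @ Iinv        # population covariance, equation (5)
L = np.linalg.cholesky(C_star)                    # lower triangular Cholesky factor

print(np.allclose(L, Iinv.T @ np.diag(sigma)))    # L = (I - T)^{-T} diag(sigma)
print(np.allclose(np.diag(L), sigma))             # diagonal of L = noise deviations
```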
2.3 ALGORITHM
We iteratively find the permutation i and calculate U via the Cholesky factorization. Assume that ik−1 = [i1, . . . , ik−1] and Uk−1 = U1:k−1,1:k−1 are settled, and we have
C1:k−1,1:k−1 = (1/n) XT:,ik−1 X:,ik−1 + λI = U−Tk−1 U−1k−1, (6)
where λ > 0 is a diagonal augmentation parameter, which we will discuss in detail later. Next, we show how to find ik and the last column of Uk.
For the time being, let us assume ik is known; we show how to compute the last column of Uk. Let U−1k = [ U−1k−1 , yk ; 0 , αk ] (a 2 × 2 block matrix). Then
[ U−1k−1 , yk ; 0 , αk ]T [ U−1k−1 , yk ; 0 , αk ] = [ U−Tk−1U−1k−1 , U−Tk−1yk ; yTkU−1k−1 , α²k + ‖yk‖² ] = (1/n) [ XT:,ik−1X:,ik−1 + λI , XT:,ik−1X:,ik ; XT:,ikX:,ik−1 , ‖X:,ik‖² + λ ],
Algorithm 1 Causal Discovery via Cholesky Factorization (CDCF)
1: input: Data matrix X ∈ Rn×p, truncation threshold ω > 0, and tuning parameter γ.
2: output: Adjacency matrix A.
3: Set i = [1, 2, . . . , p], R = ‖X‖²2,∞ and λ = γ (log p / n) R;
4: Set ` = argmin {‖X:,i1‖, ‖X:,i2‖, . . . , ‖X:,ip‖};
5: Exchange i1 and i` in i; set U1 = √n / √(‖X:,i`‖² + λ);
6: for k = 2, 3, . . . , p do
7:   for j = k, k + 1, . . . , p do
8:     yj = (1/n) UTk−1 XT:,ik−1 X:,ij ;
9:     αj = √((1/n)‖X:,ij‖² + λ − ‖yj‖²);
10:  end for
11:  (V) ` = argmin_{k≤j≤p} α²j ;  (S) ` = argmin_{k≤j≤p} ‖Uk−1yj‖1 ;  (VS) ` = argmin_{k≤j≤p} ‖Uk−1yj‖1 · √|α²j − (1/(k−1)) ∑_{h=1}^{k−1} 1/[Uk−1]²hh| ;
12:  Exchange ik and i` in i;
13:  Set Uk = [ Uk−1 , −(1/α`)Uk−1y` ; 0 , 1/α` ];
14: end for
15: return A = [TRIU(TRUNCATE(Up, ω))]REVERSE(i),REVERSE(i).
where the last equality follows from (6). It follows that
yk = (1/n) UTk−1 XT:,ik−1 X:,ik , αk = √((1/n)‖X:,ik‖² + λ − ‖yk‖²). (7)
A direct calculation then gives
Uk = [ U−1k−1 , yk ; 0 , αk ]−1 = [ Uk−1 , −(1/αk)Uk−1yk ; 0 , 1/αk ]. (8)
By (8), once ik is settled, we can obtain the last column of Uk. Our task remains to select ik from {1, . . . , p} \ {i1, . . . , ik−1}. There are several ways to accomplish this task. We propose three criteria to select ik. First, we need to compute αj and yj by (7) for all possible j (ij ∈ {1, . . . , p} \ {i1, . . . , ik−1}). Then we select ik according to one of the following criteria:
(V) ik = argmink≤j≤p α2j . Under the assumption that the noise variance of the child variable is approximately larger than that of its parents, it is reasonable/natural to select the index that has the lowest estimation of the noise variance. This criterion is guaranteed to find the correct permutation i∗ with high probability, which is shown in Section 2.4.
(S) ik = argmink≤j≤p ‖Uk−1yj‖1. Using (3) and (6), we know that Up intends to estimate (I − T )Σ̂−1. When the adjacent matrix T is sparse and the noise variables are independent (i.e., Σ̂ is diagonal), we would like to select the index that leading to the most sparse column of Uk. This criterion is especially useful when the number of samples is small, see Tables B.1, B.2 and B.3 in appendix.
(VS) ik = argmink≤j≤p ‖Uk−1yj‖1 √∣∣α2j − 1k−1 ∑k−1h=1 1[Uk−1]2hh ∣∣. We empirically combine
criterion (V) and criterion (S) together to take both aspects (variance and sparsity) into account. Numerically, we found that this criterion achieves the best performance in real-world data.
The diagonal augmentation trick in (6) is commonly used for an invertible and good conditioned estimation of the covariance matrix (see e.g., (Ledoit & Wolf, 2004)). Such a trick not only ensures that our algorithm does not break down due to the singularity of the sample covariance matrix, but also stabilizes the Cholesky factorization, especially when the sample is insufficient. In addition, by setting λ = O( log pn ), the error bound between the population covariance matrix and the augmented sample covariance matrix does not become worse (see Lemma ?? in the appendix). This trick
significantly improves the ability to recover the DAG, especially when the samples are insufficient, see Tables B.4, B.5 and B.6 in appendix.
The detailed algorithm is summarized in Algorithm 1. Some comments and implementation details follow. In line 4, we select the initial value ` = argmin {‖X:,i1‖, ‖X:,i2‖, . . . , ‖X:,ip‖}. In line 5, we exchange i1 and i` in i and calculate U1 = √n / √(‖X:,i`‖² + λ). In lines 6 to 14, we iteratively calculate Uk and update the permutation order i until all the indices are settled. In line 15, we truncate U, take its strict upper triangular part (denoted by “TRIU”), and re-permute the predicted adjacency matrix back to the original order according to the permutation order i. Specifically, the truncation is done column-wise. By (8), the value of [Up]:,k is inversely proportional to αk. So, for column k, we set ωk = ω/αk and do the truncation: [Up]ik is set to zero if |[Up]ik| < ωk. On output, node i connects to node j in G if |Aij | > 0.
Time Complexity. Note that we do not have to re-calculate the matrix multiplication XT:,ik−1X:,ij in line 8, since we can compute C once at a cost of O(p²n). Besides, at step k we have already calculated UTk−2XT:,ik−1X:,ij in the previous step, so we only need to compute the last entry of yj, which is an inner product between two k-dimensional vectors, at a cost of O(p) in the worst case. Overall, the time complexity of CDCF is O(p³ + p²n). When n > p, the complexity becomes O(p²n), which matches the cost of computing the covariance matrix. Additionally, the inner loop (lines 7 to 10) of CDCF can be executed in parallel, which makes the algorithm well suited to GPUs and to large-scale problems.
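For readers who prefer code to pseudocode, here is a minimal NumPy sketch of CDCF with criterion (V). It follows Algorithm 1 but is written for clarity rather than speed (it does not exploit the incremental updates described above), and the rescaling of U into an estimate of T as well as the fixed threshold are our own illustrative choices, not the authors' released implementation.

```python
import numpy as np

def cdcf_v(X, gamma=1.0, omega=0.35):
    """Sketch of CDCF with criterion (V); not the authors' implementation."""
    n, p = X.shape
    lam = gamma * np.log(p) / n * np.max(np.sum(X**2, axis=1))   # line 3: lambda = gamma*(log p / n)*R
    C = X.T @ X / n + lam * np.eye(p)                            # augmented sample covariance
    order = list(range(p))
    R = np.zeros((p, p))        # upper-triangular Cholesky factor of the permuted C, i.e. U^{-1}
    for k in range(p):
        best_j, best_a2, best_y = None, np.inf, None
        for j in range(k, p):
            c = np.array([C[order[i], order[j]] for i in range(k)])
            y = np.linalg.solve(R[:k, :k].T, c) if k > 0 else np.zeros(0)
            a2 = C[order[j], order[j]] - y @ y                   # conditional variance estimate alpha_j^2
            if a2 < best_a2:                                     # criterion (V): smallest conditional variance
                best_j, best_a2, best_y = j, a2, y
        order[k], order[best_j] = order[best_j], order[k]
        R[:k, k] = best_y
        R[k, k] = np.sqrt(max(best_a2, 1e-12))
    U = np.linalg.inv(R)                                         # estimates (I - T) * Sigma^{-1}
    T_hat = np.triu(-U / np.diag(U), k=1)                        # rescale each column by its diagonal entry
    T_hat[np.abs(T_hat) < omega] = 0.0                           # truncation, cf. omega_k = omega / alpha_k
    A = np.zeros((p, p))
    A[np.ix_(order, order)] = T_hat                              # re-permute back to the original node order
    return A
```

Applied to data generated as in Section 3.1, the nonzero pattern of the returned matrix can then be compared against the ground-truth graph.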
2.4 EXACT DAG STRUCTURE RECOVERY
The following theorem tells that our algorithm is able to recover the DAG exactly with high probability under proper assumptions.
Theorem 2.1 Let x ∈ Rp be a zero-mean random vector and C = E(xxT) ∈ Rp×p be its covariance matrix. Let x1, . . . , xn be n independent samples and Ĉ = (1/n)∑ⁿk=1 xkxkT be the sample covariance estimator. Assume ‖C − Ĉ‖ ≤ ε for some ε > 0. Denote Ĉλ = Ĉ + λI, where λ = O(ε) ≥ 0 is a parameter. Let the Cholesky factorizations of C = E(xxT) and Ĉλ be C = LLT and Ĉλ = L̂L̂T, respectively, where L and L̂ are both lower triangular. For the linear SEM model (1), assume (2) and (4), and let δ = inf_{k∈PaG(j)} δjk > 0, where
δjk = σ²i∗j + ‖Σ̂n[(I − T )−1]k:j−1,k‖² − σ²i∗k .
If δ ≥ 4(ε + λ) and ‖L−1‖²(ε + λ) < 3/4, then CDCF-V is able to recover P exactly. In addition, it holds that
‖TRIU(Up) − T ‖max ≤ 4‖Σ̂−1∗ (I − T )T‖²2,∞‖(I − T )Σ̂−T∗ ‖2,∞(ε + λ),
where TRIU(Up) stands for the strictly upper triangular part of Up, and Up is the output of the outer loop of Algorithm 1 with criterion (V).
From this error bound, we know that when T is sparse, we may recover its topology structure by truncating Up.
Proposition 1 Let the rows Ni,: be independent and bounded, sub-Gaussian, or regular polynomial-tail; then for n > N(ε), it holds that ‖Ĉxx − Cxx‖ ≤ ε w.h.p. Specifically,
N(ε) ≥ C1 log p (‖(I − T )−1‖²‖Cnn‖/ε)², for the bounded class;
N(ε) ≥ C2 p (‖(I − T )−1‖²‖Cnn‖/ε)², for the sub-Gaussian class;
N(ε) ≥ C3 p (‖(I − T )−1‖²‖Cnn‖/ε)^{2(1+r⁻¹)}, for the regular polynomial-tail class.
The proofs are provided in Appendix A. The theorem and proposition also indicate that the sample complexity of our algorithm is O(p) (or O(log p) in the bounded case). This sample complexity is better than that of previous methods; see Table 2.1 for a detailed comparison.
3 EXPERIMENTS
In this section, we apply our algorithm to synthetic data sets, proteins data set and knowledge base data set, respectively, to illustrate the efficiency and effectiveness of our algorithm.
3.1 LINEAR SEM
We evaluate the proposed methods on simulated graphs from two well-known ensembles of random graph types: Erdös–Rényi (ER) (Gilbert, 1959) and Scale-free (SF) (Barabási & Albert, 1999). The average edge number per node is denoted after the graph type. For example, ER2 represents two edges per node on average. After the graph structure is settled, we assign uniformly random edge weights to obtain a weight matrix W . We generate the observation data X from the linear SEM with three noise distributions: Gaussian, Gumbel, Exponential.
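One way to generate such synthetic problems is sketched below (assuming NumPy and networkx); the edge-weight range [0.5, 2] with random signs is a common convention in this literature but is not stated explicitly in the text, so treat it as an assumption.

```python
import numpy as np
import networkx as nx

def random_er_dag(p, edges_per_node=2, seed=0):
    """ER-style random DAG: sample an undirected G(p, prob) graph and orient each edge
    according to a random permutation (for SF graphs, nx.barabasi_albert_graph can be
    used in the same way)."""
    rng = np.random.default_rng(seed)
    prob = min(1.0, 2.0 * edges_per_node / (p - 1))
    G = nx.gnp_random_graph(p, prob, seed=seed)
    perm = rng.permutation(p)
    W = np.zeros((p, p))
    for u, v in G.edges():
        i, j = sorted((u, v), key=lambda t: perm[t])                 # orient from earlier to later in the order
        W[i, j] = rng.choice([-1.0, 1.0]) * rng.uniform(0.5, 2.0)    # assumed weight range
    return W

p = 20
W = random_er_dag(p, edges_per_node=2)
N = np.random.default_rng(1).standard_normal((3000, p))              # Gaussian noise; Gumbel/Exponential analogous
X = N @ np.linalg.inv(np.eye(p) - W)
```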
We chose our baseline methods as NOTEARS (Zheng et al., 2018), DAG-GNN (Yu et al., 2019), CORL (Wang et al., 2021), NPVAR (Gao et al., 2020), and EQVAR (Chen et al., 2019). Other methods such as PC algorithm (Spirtes et al., 2000), LiNGAM (Shimizu et al., 2006), FGS (Ramsey et al., 2017), MMHC (Tsamardinos et al., 2006), L1OBS (Schmidt et al., 2007), CAM (Bühlmann et al., 2013), RL-BIC2 (Zhu et al., 2020), A*LASSO (Xiang & Kim, 2013), LISTEN (Ghoshal & Honorio, 2018), US (Park, 2020) perform worse than or approximately equal to the selected baselines, and the results can be found in the corresponding papers.
Table 3.1 presents the structural Hamming distance (SHD) of baseline methods and our method on 3000 samples (n = 3000). Nodes number p is noted in the first column. Graph type and edge level are noted in the second column. We only report the SHD of different algorithms due to page limitation, and we find that other metrics such as true positive rate (TPR), false discovery rate (FDR), false positive rate (FPR), and F1 score have the similar comparative performance with SHD. We also test bottom-up EQVAR which is equivalent to LISTEN, the result is worse than top-down EQVAR (EV-TD) in this synthesis experiment, so we do not include the result in the table. For p = 1000 graphs, we only report the result of EV-TD and CDCF since other algorithms spend too much time (longer than a week) to recover a DAG. We test our algorithms with different variations according to criteria (V, S, VS) introduced in Section 2.3, and with diagonal augmentation trick noted by a “+” as postfix. For example, "CDCF-V" means CDCF with V criterion and λ = 0, and "CDCF-V+" means CDCF with V criterion and λ = O( log pn ). The implementation details are in the Appendix B. We report the result of CDCF-V+ here, and the results of other CDCF variations can be found in Appendix Table B.4. We run our methods on ten randomly generated graphs and report the mean and variance in the table. Figure 3.1 plots the SHD results tested on 100 nodes graph recovering from different sample sizes. We choose EV-TD and high dimension top down (EV-HTD) as baselines when p > n and p ≤ n, respectively. We can see from the results, CDCF-V+ achieves significantly better performance comparing with previous baselines.
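For reference, SHD as reported here counts missing edges, extra edges, and reversed edges, each contributing one unit; a small sketch of this metric (assuming 0/1 adjacency matrices as NumPy arrays) follows.

```python
import numpy as np

def shd(A_true, A_pred):
    """Structural Hamming distance between two DAG adjacency matrices,
    counting missing, extra, and reversed edges once each."""
    B_true = (A_true != 0).astype(int)
    B_pred = (A_pred != 0).astype(int)
    diff = np.abs(B_true - B_pred)
    # an edge predicted in the wrong direction should contribute 1, not 2
    reversed_edges = ((B_pred == 1) & (B_true.T == 1) & (B_true == 0)).sum()
    return int(diff.sum() - reversed_edges)
```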
Table 3.2 shows the running time which is tested on a 2.3 GHz single Intel Core i5 CPU. Besides, parallel calculation of the matrix multiplication on GPU makes the algorithm even faster. Recovering 5000 and 10000 nodes graph from 3000 samples on an A100 Nvidia GPU is approximately 400 and 2400 seconds, respectively. For comparison, EV-TD costs approximately 100 hours to recover a 1000 nodes DAG from 3000 samples. As illustrated in the table, CDCF is approximately dozens or hundreds of times faster than EV-TD and LISTEN, and tens of thousands times faster than NOTEARS as CDCF does not have to update the parameters with gradients.
Due to the page limitation, further experiments and discussions of the ablation study (Figures B.3 to B.14, Tables B.1 to B.6), choice of λ (Tables B.7 to B.10), and performances on different noise distribution (Figures B.1, B.2) and deviation (Tables B.11, B.12, B.13) are given in Appendix B.
3.2 PROTEINS DATA SET
We consider a bioinformatics data set (Sachs et al., 2005) consisting of continuous measurements of expression levels of proteins and phospholipids in the human immune system cells. This is a widely used data set for research on graphical models, with experimental annotations accepted by the biological research community. Following the previous algorithms setting, we noticed that different previous papers adopted different observations. To included them all, we considered the observational 853 samples from the "CD3, CD28" simulation tested by Teyssier & Koller (2005); Lachapelle et al. (2020); Zhu et al. (2020) and all 7466 samples from nine different simulations tested by Zheng et al. (2018; 2020); Yu et al. (2019).
We report the experimental results for both settings in Table 3.3. The implementation code of the baselines is referenced in the appendix, and we use the default hyper-parameter settings provided in their code. The evaluation metrics are FDR, TPR, FPR, SHD, the size of the predicted graph (N), precision (P), and F1 score. As the recall score is equal to TPR, we do not include it in the table. In both settings, CDCF-VS+ achieves state-of-the-art performance.¹ Several reasons make the recovered graph not exactly the same as the expected one. The ground-truth graph suggested by the paper mixes directed and undirected edges. Under the SEM setting, the node "PKA" behaves much like a leaf node since most of its edges are undirected, while the ground-truth graph marks them as outgoing edges. Non-linearity does not appear to be the main issue here, since both NOTEARS and our algorithm achieve decent results. At the same time, we do not deny that extending our algorithm to non-linear representations could bring further improvement on this data set.
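A rough sketch of the remaining metrics in Table 3.3 is given below; the exact conventions (for example, how reversed edges are counted) vary between papers, so this is only one reasonable reading, not the authors' evaluation code.

```python
import numpy as np

def structure_metrics(A_true, A_pred):
    """FDR, TPR, FPR, precision and F1 for directed edge recovery
    (reversed edges are simply counted as false positives here)."""
    t = (A_true != 0).astype(int)
    e = (A_pred != 0).astype(int)
    p_nodes = t.shape[0]
    tp = int(((e == 1) & (t == 1)).sum())
    fp = int(((e == 1) & (t == 0)).sum())
    fn = int(((e == 0) & (t == 1)).sum())
    negatives = p_nodes * (p_nodes - 1) - int(t.sum())   # possible directed edges minus true ones
    fdr = fp / max(tp + fp, 1)
    tpr = tp / max(tp + fn, 1)
    fpr = fp / max(negatives, 1)
    precision = tp / max(tp + fp, 1)
    f1 = 2 * precision * tpr / max(precision + tpr, 1e-12)
    return dict(FDR=fdr, TPR=tpr, FPR=fpr, N=int(e.sum()), precision=precision, F1=f1)
```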
3.3 KNOWLEDGE BASE DATA SET
We test our algorithm on FB15K-237 data set (Toutanova et al., 2015) in which the knowledge is organized as {Subject, Predicate,Object} triplets. The data set has 15K triplets and 237 types of predicates. In this experiment, we only consider the single jump predicate between the entities, which
1For NOTEARS-MLP, Table 3.3 reported the results reproduced by the code provided in Zheng et al. (2020).
leaves 97 predicates remaining. We want to discover the causal relationships between the predicates. We organize the observation data so that each sample corresponds to an entity, with awareness of its position (Subject or Object), and each variable corresponds to a predicate in the knowledge base.
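The exact feature construction for this experiment is not fully specified; one plausible reading, sketched below with made-up triplets, is to give each (entity, position) pair one row whose entries count how often the entity occurs with each predicate in that position.

```python
import numpy as np

def kb_observation_matrix(triplets, predicates):
    """One possible observation matrix for the knowledge-base experiment:
    rows are (entity, position) pairs, columns are predicates, entries are counts."""
    pred_idx = {p: j for j, p in enumerate(predicates)}
    rows = {}
    for subj, pred, obj in triplets:
        for entity, position in ((subj, "S"), (obj, "O")):
            key = (entity, position)
            if key not in rows:
                rows[key] = np.zeros(len(predicates))
            rows[key][pred_idx[pred]] += 1.0
    return np.vstack(list(rows.values())), list(rows.keys())

# hypothetical example triplets and predicate names
triplets = [("Paris", "locatedIn", "France"), ("Amelie", "filmedIn", "Paris")]
X, keys = kb_observation_matrix(triplets, ["locatedIn", "filmedIn"])
```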
In Figure 3.2, we show the weighted adjacency matrix of the generated graph and several examples with high confidence (larger than 0.5). In the left figure, the axis labels show the first capital letter of the domain of the relations; some are replaced with a dot to save space. The exact domain names and a picture with the full predicate names are provided in the appendix. The domain clusters are marked by black boxes on the diagonal of the adjacency matrix, and the red boxes mark cross-domain relations that are worth attention. Consistent with human intuition, the recovered relationships inside a domain are denser than those across domains. Among the cross-domain relations, we found that predicates in the domain "TV" ("T") have many relations with the domain "Film" ("F"), and the domain "Broadcast" (last row) has many relations with the domain "Music" ("M"). Several cases of the predicted causal relationships are listed on the right side of Figure 3.2; we can see that the discovered indication relations between predicates are quite reasonable.
4 CONCLUSION AND FUTURE WORK
In this paper, we proposed a topology search algorithm for the DAG structure recovery problem. Our algorithm improves on existing methods in both time and sample complexity. Specifically, the time complexity of our algorithm is O(p²n + p³), while the fastest previous algorithm is O(p⁵n) (Park, 2020; Gao et al., 2020), where p and n are the numbers of nodes and samples, respectively. Under different assumptions, our algorithm takes O(log(p)) or O(p) samples to exactly recover the DAG structure. Experimental results on synthetic data sets, the proteins data set, and the knowledge base data set demonstrate the efficiency and effectiveness of our algorithm. On synthetic data sets, compared with previous baselines, our algorithm improves performance by a significant margin and is at least tens or hundreds of times faster. On the proteins data set, we achieve state-of-the-art performance. On the knowledge base data set, we observe many reasonable structures in the discovered DAG.
The proposed algorithm is under the assumption of linear SEM. Generalization of CDCF to nonlinear SEM would be a valuable and important research topic. Learning the representation of the observed data for better structure reconstruction via the CDCF algorithm, which requires the algorithm differentiable, is also an attractive problem. To deal with the extremely large-scale problems, such as millions of nodes, implementing CDCF via sparse matrix storage and calculation on the GPU is a promising way to further improve computational performance.
A PROOF OF THEOREM 2.1
In this section, we first give several lemmas, then prove Theorem 2.1.
Lemma A.1 Let x ∈ Rp be a zero-mean random vector, C = E(xxT) ∈ Rp×p be the covariance matrix. Let x1, . . . ,xn be n independent samples, Ĉ = 1n ∑n k=1 xkx T k be the sample covariance estimator. Assume ‖C − Ĉ‖ ≤ for some > 0. Denote Ĉλ = Ĉ + λI , where λ = O( ) ≥ 0 is a parameter. Let the Cholesky factorizations of C = ExxT and Ĉλ be C = LLT and Ĉλ = L̂L̂T, respectively, where L and L̂ are both lower triangular. If ‖L−1‖2( + λ) < 34 , then
|‖Li,:‖2 − ‖[L̂λ]i,:‖2| ≤ + λ = O( ), for 1 ≤ i ≤ p; (9)
|[L−1]ij − [L̂−1]ij | ≤ 4‖L−1‖22,∞‖L−T‖2,∞( + λ) = O( ), for i > j. (10)
Proof. For all 1 ≤ i ≤ p, we have
|‖Li,:‖2 − ‖L̂i,:‖2| = |Cii − [Ĉλ]ii| ≤ ‖C − Ĉλ‖ ≤ ‖C − Ĉ‖+ λ ≤ + λ, (11)
which completes the proof for (9).
Next, we show (10). Let
L−1L̂ = I + F , (I + F )(I + F )T = I +E. (12)
We know that
L̂−1 −L−1 = [(I + F )−1 − I]L−1 = −F (I + F )−1L−1, (13)
E = L−1L̂L̂TL−T − I = L−1(Ĉλ −C)L−T. (14)
Then it follows from (13) that for i > j
|[L−1]ij−[L̂−1]ij | ≤ ‖Fi,1:i−1‖‖[(I+F )−1L−1]:,j‖ ≤ ‖Fi,1:i−1‖‖(I+F )−1‖‖L−T‖2,∞. (15)
First, we give an upper bound for ‖(I + F )−1‖. Using (12), we have (I + F )−T(I + F )−1 = (I +E)−1. It follows
‖(I + F )−1‖ = ‖(I + F )−T(I + F )−1‖ 12 = ‖(I +E)−1‖ 12
≤ 1√ 1− ‖E‖ ≤ 1√ 1− ‖L−1‖2‖Ĉλ −C‖ , (16)
where the last inequality uses (14).
Second, we give upper bound for ‖Fi,1:i−1‖. It follows from the second equality of (12) that
(1 + Fii) 2 + ‖Fi,1:i−1‖2 = 1 +Eii. (17)
Therefore,
‖Fi,1:i−1‖2 ≤ |(1 + Fii)2 − 1|+Eii (a) ≤ L̂ 2 ii −L2ii L2ii +Eii
(b) ≤ + λ L2ii + ‖L−1‖22,∞‖Ĉλ −C‖ (c) ≤ 2‖L−1‖22,∞( + λ), (18)
where (a) uses (12), (b) uses (9) and (14), (c) uses ‖C − Ĉ‖ ≤ . Substituting (18) and (16) into (15), we get
|[L−1]ij − [L̂−1]ij | ≤ 2‖L−1‖22,∞‖L−T‖2,∞ + λ√
1− ‖L−1‖2( + λ) . (19)
The conclusion follows since ‖L−1‖2( + λ) < 34 .
Theorem A.2 Let x ∈ Rp be a zero-mean random vector, C = E(xxT) ∈ Rp×p be the covariance matrix. Let x1, . . . ,xn be n independent samples, Ĉ = 1n ∑n k=1 xkx T k be the sample covariance estimator. Assume ‖C − Ĉ‖ ≤ for some > 0. Denote Ĉλ = Ĉ + λI , where λ = O( ) ≥ 0 is a parameter. Let the Cholesky factorizations of C = ExxT and Ĉλ be C = LLT and Ĉλ = L̂L̂T, respectively, where L and L̂ are both lower triangular. For the linear SEM model (1), assume (2) and (4), and for k ∈ PaG(j), δ = infk∈PaG(j) δjk > 0, where
δjk = σ 2 i∗j + ‖Σ̂n[(I − T )−1]k:j−1,k‖2 − σ2i∗k .
If δ ≥ 4( + λ) and ‖L−1‖2( + λ) < 34 , then CDCF-V is able to recover P exactly. In addition, it holds that
‖TRIU(Up)− T ‖max ≤ 4‖Σ̂−1∗ (I − T )T‖22,∞‖(I − T )Σ̂−T∗ ‖2,∞( + λ),
where TRIU(Up) stands for the strictly upper triangular part of Up, Up is the output of outer loop of Algorithm 1 with criterion (V).
Proof. For SEM model (1), denote Ĉ∗ = E( 1nX̂ TX̂), Σ̂2∗ = E( 1nN̂ TN̂) = Σ̂Tn Σ̂n, we have (5), i.e.,
Ĉ∗ = (I − T )−TΣ̂2∗(I − T )−1 = (I − T )−TΣ̂Tn Σ̂n(I − T )−1. (20)
When the permutation i∗ = [i∗1, . . . , i ∗ p] is exactly recovered, then Up in CDCF-V satisfies
Ĉλ = 1
n XT:,i∗X:,i∗ + λI = U −T p U −1 p . (21)
Denote i∗j = [i ∗ 1, . . . , i ∗ j ] for all j = 1, . . . , p. Consider the kth diagonal entries of (20) and (21). By calculations, we get
[Ĉ∗]kk = [(I − T )−1]T:,kΣ̂Tn Σ̂n[(I − T )−1]:,k = σ2i∗k + ‖uk‖ 2, (22) [Ĉλ]kk = 1
n ‖Xi∗k‖
2 + λ = 1
U2kk + ‖ûk‖2, (23)
where
uk = [Σ̂n]1:k−1,1:k−1(Ik−1 − T1:k−1,1:k−1)−1T1:k−1,k, ûk = 1
n UTk−1X T :,i∗k−1 X:,i∗k . (24)
Using ‖C − Ĉ‖ ≤ , we have
|[Ĉ∗]kk − [Ĉλ]kk| ≤ ‖C − Ĉλ‖ ≤ ‖C − Ĉ‖+ λ ≤ + λ. (25)
By Lemma A.1, we have
|‖uk‖2 − ‖ûk‖2| ≤ + λ. (26)
Using (22), (23), (25) and (26), we get
|σ2i∗k − 1
U2kk | ≤ 2( + λ). (27)
Assume that i∗1, . . . , i ∗ k−1 (k ≥ 1) are all correctly recovered. And without loss of generality, for k ∈ PaG(j), we also assume Tk:j−1,j 6= 0 (otherwise, jth and kth columns are exchangeable, and i forms another equivalence topology order to the same DAG (Sedgewick & Wayne, 2011)). Then we
have for k ∈ PaG(j) that 1
n ‖Xi∗j ‖
2 + λ− ‖[ûj ]1:k−1‖2 (a) = [Ĉ∗]jj + [Ĉλ]jj − [Ĉ∗]jj − ‖[ûj ]1:k−1‖2
(b) ≥ [Ĉ∗]jj − ( + λ)− ‖[uj ]1:k−1‖2 − ( + λ) (c) = σi∗j + ‖[uj ]k:j−1‖
2 − 2( + λ) (d) ≥ σi∗k + δ − 2( + λ) (e) = [Ĉ∗]kk − ‖uk‖2 + δ − 2( + λ) (f) ≥ [Ĉλ]kk − ‖ûk‖2 + δ − 4( + λ) (g) = 1
n ‖Xi∗k‖ 2 + λ− ‖ûk‖2 + δ − 4( + λ),
where (a) uses (23), (b) and (f) uses (25) and Lemma A.1, (c) uses (22), (d) dues to the assumption σi∗j ≥ σi∗k for k ∈ PaG(j), (e) uses (22), (g) uses (23). Therefore, using δ > 4( + λ), we have
1 n ‖Xi∗j ‖ 2 + λ− ‖[ûj ]1:k−1‖2 > 1 n ‖Xi∗k‖ 2 + λ− ‖ûk‖2,
which implies that i∗k can be correctly recovered. So, overall speaking, CDCF-V is able to recover the permutation P .
The upper bound for ‖TRIU(Up)− T ‖max follows from Lemma A.1. The proof is completed.
Proposition 2 Let Ni,: be independent bounded, or sub-Gaussian, 2 or regular polynomial-tail, 3 then for n > N( ), it holds ‖Ĉxx −Cxx‖ ≤ , w.h.p. Specifically,
N( ) ≥ C1 log p (‖(I − T )−1‖2‖Cnn‖ )2 , for bounded class;
N( ) ≥ C2 p (‖(I − T )−1‖2‖Cnn‖ )2 , for the sub-Gaussian class;
N( ) ≥ C3 p (‖(I − T )−1‖2‖Cnn‖ )2(1+r−1) , for the regular polynomial tail class.
Proof. For SEM model (1), we have
‖Ĉxx−Cxx‖ ≤ ‖(I−T )−1‖2‖Ĉnn−Cnn‖ ≤ ‖(I−T )−1‖2‖Cnn‖‖C − 12 nn ĈnnC − 12 nn −I‖, (28)
where Cxx = ExxT, Cnn = EnnT are the covariance matrices for x and n, respectively, Ĉxx, Ĉnn are the sample covariance matrices for x and n, respectively. The three results listed above follow from Corollary 5.52, Theorem 5.39 in Vershynin (2010), Theorem 1.1 in Srivastava & Vershynin (2013), respectively.
2A random vector z is isotropic and sub-Gaussian if EzzT = I and and there exists constant C > 0 such that P(|vTz| > t) ≤ exp(−Ct2) for any unit vector v. Here by “Ni,: is sub-Gaussian” we mean that C − 1 2 nn N T i,: is an isotropic and sub-Gaussian random vector. 3A random vector z is isotropic and regular polynomial-tail if EzzT = I and there exist constants r > 1, C > 0 such that P(‖V z‖2 > t) ≤ Ct−1−r for any orthogonal projection V and any t > C · rank(V ). Here by “Ni,: is regular polynomial-tail” we mean that C − 1 2 nn N T i,: is an isotropic and regular polynomial-tail random vector.
B ADDITIONAL EXPERIMENTS
Here we provide implementation details and additional experiment results.
Figures B.1, B.2 provide the results of Gumbel and Exponential noises, respectively. As we can see from the result, our algorithm still performs better than Eqvar method in different noise types.
Tables B.1, B.2 , B.3, B.4, B.5, B.6 give results on 100 nodes over different sample sizes and variances of our CDCF methods. As noted in Algorithm 1, we have V, S, VS as different criteria to select the current column, "+" representing the sample covariance matrix augmented with the scalar matrix log p n I . The truncation threshold on column i is ωi = 3.5/αi, where αi is the diagonal value of the Cholesky factor. According to the results, the algorithm "V+" achieves the best performance as the sample size is relatively large. When the sample size is small, the criterion according to sparsity shows very effective performance improvement. We also test different choices over λ = β log pn , β ∈ {0.0, 1.0..., 9.0}, the result is given in Table B.7, B.8, B.9, B.10. Empirically, β ∈ {1.0, 2.0} achieves better results. In practice, one can sample a relatively small and labeled sub-graph of the DAG to test the hyper-parameter setting then apply to large unlabeled the DAG graph.
To test the performance limitation of our methods, we provide the results of SHD on different sample number and node number in Figures B.3 to B.14 where the x-axis represents the sample number (in thousand), the y-axis denotes the node number, the color represents the value of log2(SHD + 1) (the brighter the better). We provide the figures for CDCF-V+, CDCF-S+, and CDCF-VS+ on variances graph and noise types. The figures are drawn on the mean results over ten random seeds. The figures show that the graph can be exactly recovered on 800 nodes at approximately 6000 samples. Comparing CDCF-V+ with CDCF-S+, we find that criterion (S) damages the performance when the sample number is relatively large. When sample number ∈ {1500, 3000} and node number ∈ {400, 800}, CDCF-S+ achieves better performance. Such trend can also be demonstrated in Tables B.1, B.2, B.3. CDCF-VS+ alleviates the poor performance of CDCF-S when the data is sufficient and achieves good performance on real-world data set.
We also test the performance on linear SEM with monotonously increased noise variance. Concretely, assume the topology order is i = {i1, ..., ip}, we set the noise variance of node k as σk = 1 + ik/p. We test the results on Gaussian, Gumbel, and Exponential noise with monotonous noise variance. The results are reported in Tables B.11, B.12 and B.13. As the results indicated, even with different noise levels, our algorithms achieve good performance and are able to exactly recover the DAG structure when the data is sufficient.
In the result for knowledge base data set, the axis labels of Figure 3.2 are ‘Film’, ‘People’, ‘Location’, ‘Music’, ‘Education’, ‘Tv’, ‘Medicine’, ‘Sports’, ‘Olympics’, ‘Award’, ‘Time’, ‘Organization’, ‘Language’, ‘MediaCommon’, ‘Influence’, ‘Dataworld’, ‘Business’, ‘Broadcast’ from left to right for x-axis and top to bottom for y-axis, respectively. The adjacent matrix plotted here is re-permuted to make the relations in the same domain close to each other. We keep the adjacent matrix inside a domain an upper triangular matrix. Such typology is equivalent to the generated matrix with the original order.
Baseline Implementations The baselines are implemented via the codes provided from the following links:
• NOTEARS, NOTEARS-MLP: https://github.com/xunzheng/notears • NPVAR: https://github.com/MingGao97/NPVAR • EQVAR, LISTEN: https://github.com/WY-Chen/EqVarDAG • CORL: https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle • DAG-GNN: https://github.com/fishmoon1234/DAG-GNN | 1. How does the proposed method differ from existing techniques in linear model causal discovery?
2. How does the required identifiability assumption in the paper's theoretical analysis differ from conditions provided in related works?
3. What are the differences between the proposed algorithm and existing order-search algorithms in terms of finding the topological order, estimating the covariance matrix, using regularization, and pruning the DAG?
4. Why does the paper claim an improved sample complexity compared to existing results, despite assuming a boundedness condition that may not hold in practice?
5. How does the proposed method compare to other methods such as LISTEN and US in terms of performance, especially when using the inverse of the empirical covariance matrix instead of the CLIME estimator? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a method for recovering the causal graph of additive linear models from purely observational data, under an identifiability assumption that seems to be related to the forward stepwise assumption of [1]. Their algorithm is based on iteratively identifying a root of the causal graph based on its conditional variance. Once a topological order is learned, the graph is constructed by thresholding the Cholesky factor of the permuted precision matrix. The proposed algorithm is then tested and compared on both synthetic and real-world data.
Review
The present work uses several techniques from linear model causal discovery. However, the relation to existing work is not made clear enough in my opinion. In particular, how different is the proposed method (at least the ordering part, since the pruning procedure, which is done by thresholding the estimated precision matrix, also appears in [2]) from the ordering estimation using forward stepwise selection ([1], Algorithm 1)?
On the theoretical side, the claimed improved sample complexity compared to existing results seems unfair. Indeed, Theorem 2.1 assumes that the support of the data distribution lies within a sphere of radius sqrt(R), for a constant R independent of the dimension. Table 2.1 compares this result to the one stated in [2], which only assumes that the noise involved in each equation of the SEM is sub-Gaussian, e.g., each noise is a standard Gaussian. In this scenario (which appears to be tried in the experiments), not only the boundedness assumption does not hold, but the expected value of ||x||^2 would scale at least linearly in the dimension, and even quadratically for dense graphs, involving additional dimension dependence in the sample complexity.
The required identifiability assumption (Theorem 2.1) relies on an estimate from the data, not the model parameters themselves, which is strange to me, and differs from the conditions provided in [1,2] as opposed to what is claimed.
On the practical side: What part of the algo is actually improved compared to existing order-search algorithms ? Finding the topological order ? By better estimating the covariance matrix ? Using regularisation ? Pruning the DAG ?
You mention that algorithms such as LISTEN or US perform worse (or equally) compared to the selected baselines. I find this fact very surprising, since the corresponding papers mention much better performance than what is obtained with, e.g., NOTEARS. It would also be interesting to compare your method with LISTEN using the inverse of the empirical covariance matrix (as opposed to the CLIME estimator), so as to match the covariance estimation part more closely with yours.
[1] Park, "Identifiability of Additive Noise Models Using Conditional Variances" [2] Ghoshal et al., "Learning linear structural equation models in polynomial time and sample complexity" |
ICLR | Title
Causal Discovery via Cholesky Factorization
Abstract
Discovering the causal relationship via recovering the directed acyclic graph (DAG) structure from the observed data is a challenging combinatorial problem. This paper proposes an extremely fast, easy-to-implement, and high-performance DAG structure recovery algorithm. The algorithm is based on the Cholesky factorization of the covariance/precision matrix. The time complexity of the algorithm is O(p²n + p³), where p and n are the numbers of nodes and samples, respectively. Under proper assumptions, we show that our algorithm takes O(log(p)) or O(p) samples to exactly recover the DAG structure. In both time and sample complexity, our algorithm improves on previous algorithms. On synthetic and real-world data sets, our algorithm is significantly faster than previous methods and achieves state-of-the-art performance.
1 INTRODUCTION
As Schelling had said: “The whole world is thoroughly to caught in reason, but the question is: how did it get caught in the network of reason in the first place?” (Kuhn, 1942; Žižek & von Schelling, 1997), people found that learning the causal inferences between the variables is a fundamental problem and has many applications in biology, machine learning, medicine, and economics. The problem usually is considered as finding a directed acyclic graph (DAG) from an observational joint distribution. Unfortunately, learning the DAG structure from the observations is proved to be an NP-hard problem (Chickering, 1995; Chickering et al., 2004).
The problem is generally formulated as the structural equation model (SEM), where the variable of a child node is a function of its parents with additional noises. Depending on the types of functions (linear or non-linear) and noises (Gaussian, Gumbel, etc.), there are several SEM families, e.g., Spirtes et al. (2000); Geiger & Heckerman (1994); Shimizu et al. (2006). In general, the graph can be identified from the joint distribution only up to Markov equivalence classes. Zhang & Hyvarinen (2012); Peters et al. (2014); Peters & Bühlmann (2014); Gao et al. (2020) propose several SEM forms that make the graph fully identifiable from the observed data.
Various algorithms had been proposed to deal with the problem. Search-based algorithms (Chickering, 2002; Friedman & Koller, 2003; Ramsey et al., 2017; Tsamardinos et al., 2006; Aragam & Zhou, 2015; Teyssier & Koller, 2005; Ye et al., 2019; Lv et al., 2021) generally adopt a score (e.g., BIC (Peters et al., 2014) score, Cholesky score (Ye et al., 2019), remove-fill score (Squires et al., 2020)) to measure the fitness of different graphs over data and then search over the legal DAG space to find the structure that achieves the highest score. However, exhaustive search over the legal DAG space is infeasible when p is large (e.g., there are 4.1e18 DAGs for p = 10 (Sloane et al., 2003)). Those algorithms go in quest of a trade-off between the performance and the time complexity.
Since Zheng et al. (2018) proposed an approach that converts the traditional combinatorial optimization problem into a continuous program, many methods (Yu et al., 2019; Lee et al., 2019; Ng et al., 2019a;b; Zheng et al., 2020; Lachapelle et al., 2020; Squires et al., 2020; Zhu et al., 2021) have been proposed. Those algorithms formalize the problem as a data reconstruction task with various differentiable constraints on the DAG adjacent matrix and solve it via the augmented Lagrangian method. These algorithms are able to utilize neural networks to approximate the complicated relations between the features in the observed data and achieve good performances. Recently, reinforcement learning based algorithms (Zhu et al., 2020; Wang et al., 2021) also improved the performance by exploring the possible DAG structure candidates. The algorithms update the parameters of the model
via policy gradient as long as it explored a better DAG structure with a higher reward which measures how well an explored structure meets the requirement of DAG and the observed data.
Topology order search algorithms (TOSA) (Ghoshal & Honorio, 2017; 2018; Chen et al., 2019; Gao et al., 2020; Park, 2020) decompose the DAG learning problem into two phases: (i) Topology order learning via conditional variance of the observed data; (ii) Graph estimation depends on the learned topology order. Those algorithms reduce the computation complexity into polynomial time and are guaranteed to recover the DAG structure under some identifiable assumptions. Our method in this paper is also a topology order search algorithm and it merges the two phases in TOSA into one. In each iteration, it attempts to find a child or a contemporary of the current node. Meanwhile, it also determines the corresponding column vector of the adjacent matrix. The mergence brings three main differences: First, the topology order in TOSA is recovered purely based on the conditional variance of the observed data, whereas our method may also take the sparsity of the adjacent matrix into account; Second, the graph LASSO methods, which are commonly adopted to estimate the graph in the second phase in TOSA, encourage the sparsity of the precision matrix, whereas our method is able to encourage the sparsity of the adjacent matrix; Third, the time complexity is reduced significantly. To be specific, the time complexity of our algorithm is O(p2n + p3), while the fastest algorithm before is O(p5n) (Park, 2020; Gao et al., 2020). Here p and n are the numbers of nodes and samples, respectively. In addition, under proper assumptions, we show that our algorithm takes O(log(p)) or O(p) samples to exactly recover the DAG structure. Compared with previous TOSA algorithms, the sample complexity of our method is much better. Experimental results on synthetic data sets, proteins data sets, and knowledge base data set demonstrate the efficiency and effectiveness of our algorithm. For synthetic data sets, compared with previous baselines, our algorithm improves the performance with a significant margin and at least tens or hundreds of times faster. For the proteins data set, we achieve state-of-the-art performance. For the knowledge base data set, we can observe many reasonable structures of the discovered DAG. Our code is uploaded as supplementary material and will be open-sourced upon the acceptance of this paper.
The rest of this paper is organized as follows. In Section 2, we present our algorithm together with the theoretical analysis. In Section 3, numerical results on synthetic data sets, proteins data set, and knowledge base data set are given. Finally, the paper is concluded in Section 4.
Notations. The symbol ‖ · ‖ stands for the Euclid norm of a vector or the spectral norm of a matrix. For a vector x = [x1,x2, . . . ,xp] ∈ Rp, ‖ · ‖1 stands for the `1-norm, i.e., ‖x1‖ = ∑p i=1 |xi|. For a matrix X = [Xij ] ∈ Rm×n, ‖ · ‖2,∞ stands for the two-to-infinity norm, i.e., ‖X‖2,∞ = max1≤i≤m ‖Xi,:‖; ‖ · ‖max stands for the max norm, ‖X‖max = maxi,j |Xij |.
2 CAUSAL DISCOVERY VIA CHOLESKY FACTORIZATION (CDCF)
In this section, we first present some preliminaries on DAG, then motivating our algorithm. Next, the detailed algorithm and theoretical guarantees for the exact recovery of the algorithm are given.
2.1 PRELIMINARIES
We assume the observed data is entailed by a DAG G = (p, V,E), where p is the number of nodes, V = {v1, ..., vp} and E = {(vi, vj)|i, j ∈ {1, ...p}} represent the set of nodes and edges, respectively. Each node vi is corresponding to a random variable Xi. The observed data matrix X = [x1, ...,xp] ∈ Rn×p where xi is consisting of n i.i.d observations of the random variable Xi. The joint distribution of X is P (X) = ∏p i=1 P (Xi|PaG(Xi)), where PaG(Xi) := {Xj |(vi, vj) ∈ E} is the parents of node Xi.
Given X , we seek to recover the latent DAG topology structure for the joint probability distribution (Hoyer et al., 2008; Peters et al., 2017). Generally, X is modeled via a structural equation model (SEM) with the form
Xi = fi(PaG(Xi)) +Ni, (i = 1, ..., p),
where fi is an arbitrary function representing the relation between Xi and its parents, Ni is the jointly independent noise variable.
In this paper, we focus on the linear SEM defined by $X_i = Xw_i + N_i$ $(i = 1,\ldots,p)$,
where $w_i \in \mathbb{R}^p$ is a weight (column) vector. Let $W = [w_1, \ldots, w_p] \in \mathbb{R}^{p\times p}$ be the weighted adjacency matrix and $N = [n_1, \ldots, n_p] \in \mathbb{R}^{n\times p}$ be an additive independent noise matrix, where $n_i$ consists of $n$ i.i.d. observations of the noise variable $N_i$. Then the linear SEM model can be formulated as
$$X = XW + N. \qquad (1)$$
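For concreteness, the following NumPy sketch samples from the linear SEM (1); the toy weight matrix and Gaussian noise are illustrative choices and not fixed by the paper.

import numpy as np

def sample_linear_sem(W, n, noise_std=1.0, rng=None):
    # X = XW + N  =>  X = N (I - W)^{-1}; I - W is invertible because W is
    # the weighted adjacency matrix of a DAG (nilpotent up to permutation).
    rng = np.random.default_rng(rng)
    p = W.shape[0]
    N = noise_std * rng.standard_normal((n, p))  # independent noise observations
    return N @ np.linalg.inv(np.eye(p) - W)

# toy 3-node chain 1 -> 2 -> 3
W = np.array([[0.0, 1.5, 0.0],
              [0.0, 0.0, -0.8],
              [0.0, 0.0, 0.0]])
X = sample_linear_sem(W, n=3000, rng=0)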
We assume the noise deviation of the child variable is approximately larger than that of its parents (see Theorem 2.1 for details). Following this assumption, a classical identifiable form of SEM is the linear-Gaussian SEM, where all Ni are i.i.d. and homoscedastic (Peters & Bühlmann, 2014).
2.2 ALGORITHM MOTIVATION
As proposed in McKay et al. (2003); Nicholson (1975), a graph is a DAG if and only if the corresponding weighted adjacency matrix $W$ can be decomposed into
$$W = PTP^{\mathsf T}, \qquad (2)$$
where $P$ is a permutation matrix and $T$ is a strictly upper triangular matrix, i.e., $T_{ij} = 0$ for all $i \ge j$.
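The decomposition (2) simply says that some simultaneous reordering of the rows and columns of W is strictly upper triangular, i.e., the graph admits a topological order. A small sketch of this check, based on Kahn's algorithm and reusing the toy W above, is:

import numpy as np

def topological_order(W, tol=1e-12):
    # Edge j -> i iff W[j, i] != 0. Returns a topological order, or None if cyclic.
    p = W.shape[0]
    A = np.abs(W) > tol
    indeg = A.sum(axis=0)                    # number of parents per node
    order, ready = [], [i for i in range(p) if indeg[i] == 0]
    while ready:
        j = ready.pop()
        order.append(j)
        for i in np.nonzero(A[j])[0]:        # children of j
            indeg[i] -= 1
            if indeg[i] == 0:
                ready.append(i)
    return order if len(order) == p else None

order = topological_order(W)
T = W[np.ix_(order, order)]                  # W permuted along the topological order
assert np.allclose(np.tril(T), 0.0)          # strictly upper triangular, as in (2)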
We denote the scaled permuted data matrix by $\hat X = \frac{1}{\sqrt n} XP$, the scaled permuted noise matrix by $\hat N = \frac{1}{\sqrt n} NP$, and the permutation order by $[i_1^*, i_2^*, \ldots, i_p^*] = [1, 2, \ldots, p]P$. We can rewrite (1) as
$$\hat X = \hat X T + \hat N.$$
Then it follows that
$$\hat X = \hat N (I - T)^{-1}. \qquad (3)$$
Let
$$\mathbb{E}(\hat N^{\mathsf T}\hat N) = \hat\Sigma_*^2 = \hat\Sigma^{\mathsf T}\hat\Sigma, \qquad (4)$$
where $\hat\Sigma_*^2$ is the covariance matrix of the noise variables and $\hat\Sigma$ is upper triangular – the Cholesky factor of $\hat\Sigma_*^2$. Let the diagonal entries of $\hat\Sigma$ be $\sigma_{i_1^*}, \sigma_{i_2^*}, \ldots, \sigma_{i_p^*}$; then $\sigma_{i_k^*}^2$ is the conditional variance of $N_{i_k^*}$.
Now using (3) and (4), we have the covariance matrix of the permuted data:
$$\hat C_* = \mathbb{E}(\hat X^{\mathsf T}\hat X) = (I-T)^{-\mathsf T}\mathbb{E}(\hat N^{\mathsf T}\hat N)(I-T)^{-1} = (I-T)^{-\mathsf T}\hat\Sigma^{\mathsf T}\hat\Sigma(I-T)^{-1}. \qquad (5)$$
Let $L = (I-T)^{-\mathsf T}\hat\Sigma^{\mathsf T}$; then $\hat C_* = LL^{\mathsf T}$, which is the Cholesky factorization of the covariance matrix $\hat C_*$ since $L$ is lower triangular. Furthermore, the diagonal entries of $L$ are the same as those of $\hat\Sigma$, i.e., $L_{kk} = \sigma_{i_k^*}$, so the conditional variances of $X_{i_k^*}$ and $N_{i_k^*}$ coincide.
The task thus becomes finding the permutation $i^* = [i_1^*, i_2^*, \ldots, i_p^*]$ and an upper triangular matrix $U$ such that $U^{-\mathsf T}U^{-1}$ is a good approximation of the empirical estimate of the permuted covariance matrix $\hat C = \frac{1}{n}X_{:,i^*}^{\mathsf T}X_{:,i^*}$, while $U$ satisfies additional constraints such as sparsity.
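This observation can be verified numerically. With independent noise, the lower Cholesky factor of the permuted population covariance carries the noise standard deviations on its diagonal, and its inverse transpose recovers T after a column rescaling and a sign flip. A minimal sketch, reusing W and topological_order from the earlier sketches and assuming arbitrary per-node noise scales:

import numpy as np

p = W.shape[0]
order = topological_order(W)
T = W[np.ix_(order, order)]                       # strictly upper triangular
sigma = np.array([1.0, 1.2, 0.9])                 # assumed noise standard deviations
M = np.linalg.inv(np.eye(p) - T)
C_star = M.T @ np.diag(sigma ** 2) @ M            # eq. (5) with diagonal noise

L = np.linalg.cholesky(C_star)                    # lower-triangular Cholesky factor
print(np.allclose(np.diag(L), sigma))             # True: diag(L) gives the noise stds
U = np.linalg.inv(L).T                            # estimates (I - T) diag(1/sigma)
T_rec = -np.triu(U / np.diag(U), k=1)             # rescale columns and flip the sign
print(np.allclose(T_rec, T))                      # True: the adjacency is recovered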
2.3 ALGORITHM
We iteratively find the permutation $i$ and calculate $U$ via the Cholesky factorization. Assume that $i_{k-1} = [i_1, \ldots, i_{k-1}]$ and $U_{k-1} = U_{1:k-1,1:k-1}$ are settled, and we have
$$C_{1:k-1,1:k-1} = \frac{1}{n}X_{:,i_{k-1}}^{\mathsf T}X_{:,i_{k-1}} + \lambda I = U_{k-1}^{-\mathsf T}U_{k-1}^{-1}, \qquad (6)$$
where $\lambda > 0$ is a diagonal augmentation parameter that we discuss in detail later. Next, we show how to find $i_k$ and the last column of $U_k$.
For the time being, let us assume $i_k$ is known; we show how to compute the last column of $U_k$. Let
$$U_k^{-1} = \begin{bmatrix} U_{k-1}^{-1} & y_k \\ 0 & \alpha_k \end{bmatrix},$$
then
$$\begin{bmatrix} U_{k-1}^{-1} & y_k \\ 0 & \alpha_k \end{bmatrix}^{\mathsf T}\begin{bmatrix} U_{k-1}^{-1} & y_k \\ 0 & \alpha_k \end{bmatrix}
= \begin{bmatrix} U_{k-1}^{-\mathsf T}U_{k-1}^{-1} & U_{k-1}^{-\mathsf T}y_k \\ y_k^{\mathsf T}U_{k-1}^{-1} & \alpha_k^2 + \|y_k\|^2 \end{bmatrix}
= \begin{bmatrix} \frac{1}{n}X_{:,i_{k-1}}^{\mathsf T}X_{:,i_{k-1}} + \lambda I & \frac{1}{n}X_{:,i_{k-1}}^{\mathsf T}X_{:,i_k} \\ \frac{1}{n}X_{:,i_k}^{\mathsf T}X_{:,i_{k-1}} & \frac{1}{n}\|X_{:,i_k}\|^2 + \lambda \end{bmatrix},$$
Algorithm 1 Causal Discovery via Cholesky Factorization (CDCF)
1: input: data matrix $X \in \mathbb{R}^{n\times p}$, truncation threshold $\omega > 0$, and tuning parameter $\gamma$.
2: output: adjacency matrix $A$.
3: Set $i = [1, 2, \ldots, p]$, $R = \|X\|_{2,\infty}^2$ and $\lambda = \gamma \frac{\log p}{n} R$;
4: Set $\ell = \arg\min \{\|X_{:,i_1}\|, \|X_{:,i_2}\|, \ldots, \|X_{:,i_p}\|\}$;
5: Exchange $i_1$ and $i_\ell$ in $i$; set $U_1 = \big(\frac{1}{n}\|X_{:,i_\ell}\|^2 + \lambda\big)^{-1/2}$;
6: for $k = 2, 3, \ldots, p$ do
7:   for $j = k, k+1, \ldots, p$ do
8:     $y_j = \frac{1}{n} U_{k-1}^{\mathsf T} X_{:,i_{k-1}}^{\mathsf T} X_{:,i_j}$;
9:     $\alpha_j = \sqrt{\frac{1}{n}\|X_{:,i_j}\|^2 + \lambda - \|y_j\|^2}$;
10:  end for
11:  (V) $\ell = \arg\min_{k\le j\le p} \alpha_j^2$;
     (S) $\ell = \arg\min_{k\le j\le p} \|U_{k-1}y_j\|_1$;
     (VS) $\ell = \arg\min_{k\le j\le p} \|U_{k-1}y_j\|_1 \sqrt{\big|\alpha_j^2 - \frac{1}{k-1}\sum_{h=1}^{k-1}\frac{1}{[U_{k-1}]_{hh}^2}\big|}$;
12:  Exchange $i_k$ and $i_\ell$ in $i$;
13:  Set $U_k = \begin{bmatrix} U_{k-1} & -\frac{1}{\alpha_\ell} U_{k-1} y_\ell \\ 0 & \frac{1}{\alpha_\ell} \end{bmatrix}$;
14: end for
15: return $A = [\mathrm{TRIU}(\mathrm{TRUNCATE}(U_p, \omega))]_{\mathrm{REVERSE}(i),\mathrm{REVERSE}(i)}$.
where the last equality is due to (6). It follows that
$$y_k = \frac{1}{n} U_{k-1}^{\mathsf T} X_{:,i_{k-1}}^{\mathsf T} X_{:,i_k}, \qquad \alpha_k = \sqrt{\frac{1}{n}\|X_{:,i_k}\|^2 + \lambda - \|y_k\|^2}. \qquad (7)$$
A direct calculation then gives
$$U_k = \begin{bmatrix} U_{k-1}^{-1} & y_k \\ 0 & \alpha_k \end{bmatrix}^{-1} = \begin{bmatrix} U_{k-1} & -\frac{1}{\alpha_k} U_{k-1} y_k \\ 0 & \frac{1}{\alpha_k} \end{bmatrix}. \qquad (8)$$
By (8), once ik is settled, we can obtain the last column of Uk. Our task remains to select ik from {1, . . . , p} \ {i1, . . . , ik−1}. There are several ways to accomplish this task. We propose three criteria to select ik. First, we need to compute αj and yj by (7) for all possible j (ij ∈ {1, . . . , p} \ {i1, . . . , ik−1}). Then we select ik according to one of the following criteria:
(V) $i_k = \arg\min_{k\le j\le p} \alpha_j^2$. Under the assumption that the noise variance of a child variable is approximately larger than that of its parents, it is natural to select the index with the lowest estimated noise variance. This criterion is guaranteed to find the correct permutation $i^*$ with high probability, as shown in Section 2.4.

(S) $i_k = \arg\min_{k\le j\le p} \|U_{k-1}y_j\|_1$. Using (3) and (6), we know that $U_p$ is intended to estimate $(I-T)\hat\Sigma^{-1}$. When the adjacency matrix $T$ is sparse and the noise variables are independent (i.e., $\hat\Sigma$ is diagonal), we would like to select the index leading to the sparsest column of $U_k$. This criterion is especially useful when the number of samples is small; see Tables B.1, B.2 and B.3 in the appendix.

(VS) $i_k = \arg\min_{k\le j\le p} \|U_{k-1}y_j\|_1 \sqrt{\big|\alpha_j^2 - \frac{1}{k-1}\sum_{h=1}^{k-1}\frac{1}{[U_{k-1}]_{hh}^2}\big|}$. We empirically combine criterion (V) and criterion (S) to take both aspects (variance and sparsity) into account. Numerically, we found that this criterion achieves the best performance on real-world data.
The diagonal augmentation trick in (6) is commonly used to obtain an invertible and well-conditioned estimate of the covariance matrix (see, e.g., Ledoit & Wolf (2004)). This trick not only ensures that our algorithm does not break down due to the singularity of the sample covariance matrix, but also stabilizes the Cholesky factorization, especially when the samples are insufficient. In addition, by setting $\lambda = O(\frac{\log p}{n})$, the error bound between the population covariance matrix and the augmented sample covariance matrix does not become worse (see the appendix). This trick significantly improves the ability to recover the DAG, especially when the samples are insufficient; see Tables B.4, B.5 and B.6 in the appendix.
The detailed algorithm is summarized in Algorithm 1. Some comments and implementation details follow. In line 4, we select the initial index $\ell = \arg\min \{\|X_{:,i_1}\|, \|X_{:,i_2}\|, \ldots, \|X_{:,i_p}\|\}$. In line 5, we exchange $i_1$ and $i_\ell$ in $i$ and calculate $U_1 = \big(\frac{1}{n}\|X_{:,i_\ell}\|^2 + \lambda\big)^{-1/2}$. In lines 6 to 14, we iteratively calculate $U_k$ and update the permutation order $i$ until all indices are settled. In line 15, we truncate $U$, take its strictly upper triangular part (denoted by "TRIU"), and re-permute the predicted adjacency matrix back to the original order according to the permutation order $i$. Specifically, the truncation is done column-wise: by (8), the magnitude of $[U_p]_{:,k}$ is inversely proportional to $\alpha_k$, so for column $k$ we set $\omega_k = \frac{\omega}{\alpha_k}$ and set $[U_p]_{ik}$ to zero if $|[U_p]_{ik}| < \omega_k$. On output, node $i$ connects to node $j$ in $G$ if $|A_{ij}| > 0$.
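Putting the pieces together, the following is a compact NumPy sketch of Algorithm 1. It is intended as a readable reference rather than the authors' released implementation; the default omega and gamma values and the helper structure are our own choices.

import numpy as np

def cdcf(X, omega=0.3, gamma=2.0, criterion="V"):
    # Sketch of Algorithm 1. X is (n, p); returns a (p, p) adjacency estimate A,
    # read as an edge i -> j whenever A[i, j] != 0.
    n, p = X.shape
    R = np.max(np.sum(X ** 2, axis=1))            # ||X||_{2,inf}^2 (line 3)
    lam = gamma * np.log(p) / n * R
    C = X.T @ X / n + lam * np.eye(p)             # augmented covariance, eq. (6)

    idx = list(range(p))
    first = int(np.argmin(np.diag(C)))            # smallest-norm column (lines 4-5)
    idx[0], idx[first] = idx[first], idx[0]
    U = np.array([[1.0 / np.sqrt(C[idx[0], idx[0]])]])
    alphas = [float(np.sqrt(C[idx[0], idx[0]]))]

    for k in range(1, p):
        settled, rest = idx[:k], idx[k:]
        Y = U.T @ C[np.ix_(settled, rest)]                    # all y_j at once (line 8)
        a2 = np.diag(C)[rest] - np.sum(Y ** 2, axis=0)        # alpha_j^2 (line 9)
        if criterion == "V":
            score = a2
        elif criterion == "S":
            score = np.sum(np.abs(U @ Y), axis=0)             # ||U_{k-1} y_j||_1
        else:  # "VS"
            target = np.mean(1.0 / np.diag(U) ** 2)
            score = np.sum(np.abs(U @ Y), axis=0) * np.sqrt(np.abs(a2 - target))
        best = int(np.argmin(score))                          # line 11
        idx[k], idx[k + best] = idx[k + best], idx[k]         # line 12
        y = Y[:, [best]]
        alpha = float(np.sqrt(max(a2[best], 1e-12)))          # guard against round-off
        alphas.append(alpha)
        U = np.block([[U, -(U @ y) / alpha],                  # line 13, eq. (8)
                      [np.zeros((1, k)), np.array([[1.0 / alpha]])]])

    A_perm = np.triu(U, k=1)                                  # strict upper part (line 15)
    for k in range(p):                                        # column-wise truncation
        A_perm[np.abs(A_perm[:, k]) < omega / alphas[k], k] = 0.0
    inv = np.argsort(idx)                                     # undo the permutation
    return A_perm[np.ix_(inv, inv)]

On data generated from a small toy DAG as in the earlier sketches, cdcf(X, criterion="V") should recover the support of W up to the truncation threshold.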
Time Complexity Note that we do not have to re-calculate the matrix products $X_{:,i_{k-1}}^{\mathsf T}X_{:,i_j}$ in line 8, since we can compute $C$ once at a cost of $O(p^2 n)$. Besides, at step $k$ we have already calculated $U_{k-2}^{\mathsf T}X_{:,i_{k-2}}^{\mathsf T}X_{:,i_j}$ at the previous step, so we only need to compute the last entry of $y_j$, which is an inner product between two $k$-dimensional vectors, at a cost of $O(p)$ in the worst case. Overall, the time complexity of CDCF is $O(p^3 + p^2 n)$. When $n > p$, the complexity becomes $O(p^2 n)$, which is the cost of computing the covariance matrix itself. Additionally, the inner loop (lines 7 to 10) of CDCF can be executed in parallel, which makes the algorithm friendly to GPUs and suitable for large-scale computations.
2.4 EXACT DAG STRUCTURE RECOVERY
The following theorem shows that our algorithm recovers the DAG exactly with high probability under proper assumptions.
Theorem 2.1 Let $x \in \mathbb{R}^p$ be a zero-mean random vector and $C = \mathbb{E}(xx^{\mathsf T}) \in \mathbb{R}^{p\times p}$ be its covariance matrix. Let $x_1,\ldots,x_n$ be $n$ independent samples and $\hat C = \frac{1}{n}\sum_{k=1}^{n} x_k x_k^{\mathsf T}$ be the sample covariance estimator. Assume $\|C - \hat C\| \le \epsilon$ for some $\epsilon > 0$. Denote $\hat C_\lambda = \hat C + \lambda I$, where $\lambda = O(\epsilon) \ge 0$ is a parameter. Let the Cholesky factorizations of $C$ and $\hat C_\lambda$ be $C = LL^{\mathsf T}$ and $\hat C_\lambda = \hat L\hat L^{\mathsf T}$, respectively, where $L$ and $\hat L$ are both lower triangular. For the linear SEM model (1), assume (2) and (4), and for $k \in \mathrm{Pa}_G(j)$, $\delta = \inf_{k\in \mathrm{Pa}_G(j)} \delta_{jk} > 0$, where
$$\delta_{jk} = \sigma_{i_j^*}^2 + \|\hat\Sigma_n[(I-T)^{-1}]_{k:j-1,k}\|^2 - \sigma_{i_k^*}^2.$$
If $\delta \ge 4(\epsilon + \lambda)$ and $\|L^{-1}\|^2(\epsilon + \lambda) < \frac{3}{4}$, then CDCF-V is able to recover $P$ exactly. In addition, it holds that
$$\|\mathrm{TRIU}(U_p) - T\|_{\max} \le 4\|\hat\Sigma_*^{-1}(I-T)^{\mathsf T}\|_{2,\infty}^2\,\|(I-T)\hat\Sigma_*^{-\mathsf T}\|_{2,\infty}(\epsilon + \lambda),$$
where $\mathrm{TRIU}(U_p)$ stands for the strictly upper triangular part of $U_p$, and $U_p$ is the output of the outer loop of Algorithm 1 with criterion (V).
From this bound, we know that when $T$ is sparse, we may recover its topology structure by truncating $U_p$.
Proposition 1 Let the rows $N_{i,:}$ be independent and bounded, sub-Gaussian, or regular polynomial-tail; then for $n > N(\epsilon)$, it holds that $\|\hat C_{xx} - C_{xx}\| \le \epsilon$ w.h.p. Specifically,
$$N(\epsilon) \ge C_1 \log p \left(\frac{\|(I-T)^{-1}\|^2\|C_{nn}\|}{\epsilon}\right)^2 \quad \text{for the bounded class};$$
$$N(\epsilon) \ge C_2\, p \left(\frac{\|(I-T)^{-1}\|^2\|C_{nn}\|}{\epsilon}\right)^2 \quad \text{for the sub-Gaussian class};$$
$$N(\epsilon) \ge C_3\, p \left(\frac{\|(I-T)^{-1}\|^2\|C_{nn}\|}{\epsilon}\right)^{2(1+r^{-1})} \quad \text{for the regular polynomial-tail class}.$$
The proofs are provided in Appendix A. The theorem and proposition together indicate that the sample complexity of our algorithm is $O(\log p)$ for the bounded class and $O(p)$ otherwise, which is better than the sample complexities of previous methods; see Table 2.1 for a detailed comparison.
3 EXPERIMENTS
In this section, we apply our algorithm to synthetic data sets, the proteins data set, and the knowledge base data set to illustrate its efficiency and effectiveness.
3.1 LINEAR SEM
We evaluate the proposed methods on simulated graphs from two well-known ensembles of random graph types: Erdös–Rényi (ER) (Gilbert, 1959) and Scale-free (SF) (Barabási & Albert, 1999). The average edge number per node is denoted after the graph type. For example, ER2 represents two edges per node on average. After the graph structure is settled, we assign uniformly random edge weights to obtain a weight matrix W . We generate the observation data X from the linear SEM with three noise distributions: Gaussian, Gumbel, Exponential.
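A sketch of this data-generating pipeline, using networkx for the random graphs (the weight range [0.5, 2.0] with random signs is an illustrative choice, and sample_linear_sem refers to the earlier SEM sketch), is:

import numpy as np
import networkx as nx

def random_dag_weights(p, edges_per_node=2, graph_type="ER", rng=None):
    rng = np.random.default_rng(rng)
    seed = int(rng.integers(1 << 30))
    if graph_type == "ER":
        G = nx.gnm_random_graph(p, edges_per_node * p, seed=seed)
    else:  # "SF": scale-free (preferential attachment)
        G = nx.barabasi_albert_graph(p, edges_per_node, seed=seed)
    rank = np.argsort(rng.permutation(p))        # random topological order of the nodes
    W = np.zeros((p, p))
    for u, v in G.edges():
        a, b = (u, v) if rank[u] < rank[v] else (v, u)   # orient edges along the order
        W[a, b] = rng.uniform(0.5, 2.0) * rng.choice([-1.0, 1.0])
    return W

W = random_dag_weights(100, edges_per_node=2, graph_type="ER", rng=0)
X = sample_linear_sem(W, n=3000, rng=1)          # Gaussian noise; Gumbel/Exponential analogous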
We choose NOTEARS (Zheng et al., 2018), DAG-GNN (Yu et al., 2019), CORL (Wang et al., 2021), NPVAR (Gao et al., 2020), and EQVAR (Chen et al., 2019) as our baseline methods. Other methods, such as the PC algorithm (Spirtes et al., 2000), LiNGAM (Shimizu et al., 2006), FGS (Ramsey et al., 2017), MMHC (Tsamardinos et al., 2006), L1OBS (Schmidt et al., 2007), CAM (Bühlmann et al., 2013), RL-BIC2 (Zhu et al., 2020), A*LASSO (Xiang & Kim, 2013), LISTEN (Ghoshal & Honorio, 2018), and US (Park, 2020), perform worse than or comparably to the selected baselines; their results can be found in the corresponding papers.
Table 3.1 presents the structural Hamming distance (SHD) of the baseline methods and our method on 3000 samples ($n = 3000$). The number of nodes $p$ is given in the first column; the graph type and edge level are given in the second column. We only report the SHD of the different algorithms due to the page limit; other metrics, such as true positive rate (TPR), false discovery rate (FDR), false positive rate (FPR), and F1 score, show similar comparative behavior to SHD. We also test bottom-up EQVAR, which is equivalent to LISTEN; its results are worse than top-down EQVAR (EV-TD) in these synthetic experiments, so we do not include them in the table. For $p = 1000$ graphs, we only report the results of EV-TD and CDCF, since the other algorithms take too long (more than a week) to recover a DAG. We test our algorithm with the different variations of the criteria (V, S, VS) introduced in Section 2.3, with the diagonal augmentation trick denoted by a "+" suffix. For example, "CDCF-V" means CDCF with criterion V and $\lambda = 0$, and "CDCF-V+" means CDCF with criterion V and $\lambda = O(\frac{\log p}{n})$. The implementation details are in Appendix B. We report the results of CDCF-V+ here; the results of the other CDCF variations can be found in Appendix Table B.4. We run our methods on ten randomly generated graphs and report the mean and variance in the table. Figure 3.1 plots the SHD results on 100-node graphs recovered from different sample sizes; we choose EV-TD and high-dimensional top-down EQVAR (EV-HTD) as baselines when $p > n$ and $p \le n$, respectively. As the results show, CDCF-V+ achieves significantly better performance than the previous baselines.
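For reference, a minimal implementation of the SHD metric (one common variant, counting extra, missing, and reversed edges; conventions differ slightly across papers) is:

import numpy as np

def shd(A_est, A_true, tol=1e-8):
    # Structural Hamming distance between two (weighted) adjacency matrices.
    E, T = np.abs(A_est) > tol, np.abs(A_true) > tol
    reversed_edges = np.sum(E & T.T & ~T)        # predicted i->j where the truth has j->i
    extra = np.sum(E & ~T) - reversed_edges      # predicted edges absent from the truth
    missing = np.sum(T & ~E & ~E.T)              # true edges missed in both directions
    return int(extra + missing + reversed_edges)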
Table 3.2 shows the running time, measured on a single 2.3 GHz Intel Core i5 CPU. Parallel computation of the matrix products on a GPU makes the algorithm even faster: recovering 5000- and 10000-node graphs from 3000 samples on an Nvidia A100 GPU takes approximately 400 and 2400 seconds, respectively. For comparison, EV-TD takes approximately 100 hours to recover a 1000-node DAG from 3000 samples. As the table illustrates, CDCF is roughly dozens to hundreds of times faster than EV-TD and LISTEN, and tens of thousands of times faster than NOTEARS, since CDCF does not need to update parameters with gradients.
Due to the page limit, further experiments and discussions of the ablation study (Figures B.3 to B.14, Tables B.1 to B.6), the choice of $\lambda$ (Tables B.7 to B.10), and the performance under different noise distributions (Figures B.1, B.2) and deviations (Tables B.11, B.12, B.13) are given in Appendix B.
3.2 PROTEINS DATA SET
We consider a bioinformatics data set (Sachs et al., 2005) consisting of continuous measurements of expression levels of proteins and phospholipids in human immune system cells. This is a widely used data set for research on graphical models, with experimental annotations accepted by the biological research community. Following the settings of previous algorithms, we noticed that different papers adopted different sets of observations. To include them all, we consider both the 853 observational samples from the "CD3, CD28" simulation tested by Teyssier & Koller (2005); Lachapelle et al. (2020); Zhu et al. (2020) and all 7466 samples from nine different simulations tested by Zheng et al. (2018; 2020); Yu et al. (2019).
We report the experimental results for both settings in Table 3.3. The implementation code of the baselines is introduced in the appendix, and we use the default hyper-parameter settings provided with their code. The evaluation metrics are FDR, TPR, FPR, SHD, predicted nodes number (N), precision (P), and F1 score; as the recall equals TPR, we do not include it in the table. In both settings, CDCF-VS+ achieves state-of-the-art performance.1 Several reasons make the recovered graph not exactly the same as the expected one. The ground truth graph suggested by the original paper mixes directed and indirect edges. Under the SEM setting, the node "PKA" is quite similar to a leaf node, since most of its edges are indirect, while the ground truth graph marks them as outgoing edges. Non-linearity does not appear to be the main issue here, since NOTEARS and our algorithm both achieve decent results. That said, we do not deny that extending our algorithm to non-linear representations could bring further improvement on this data set.
3.3 KNOWLEDGE BASE DATA SET
We test our algorithm on the FB15K-237 data set (Toutanova et al., 2015), in which knowledge is organized as {Subject, Predicate, Object} triplets. The data set has 15K entities and 237 types of predicates. In this experiment, we only consider single-hop predicates between entities, which
1For NOTEARS-MLP, Table 3.3 reported the results reproduced by the code provided in Zheng et al. (2020).
leaves 97 predicates. We want to discover the causal relationships between the predicates. We organize the observation data so that each sample corresponds to an entity (with awareness of its position as Subject or Object) and each variable corresponds to a predicate in the knowledge base.
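Our reading of this construction is sketched below: each row corresponds to an (entity, position) pair and each column to one of the remaining predicates, with a 1 whenever the entity participates in at least one triplet with that predicate in that position. The exact encoding used in the paper may differ; the helper below is an assumption for illustration.

import numpy as np

def build_observations(triplets, predicates):
    # triplets: iterable of (subject, predicate, object) strings.
    # Returns X (one row per (entity, position) pair, one column per predicate)
    # together with the row index mapping.
    col = {r: j for j, r in enumerate(predicates)}
    rows, entries = {}, []
    for s, r, o in triplets:
        if r not in col:
            continue
        for ent, pos in ((s, "subject"), (o, "object")):
            i = rows.setdefault((ent, pos), len(rows))
            entries.append((i, col[r]))
    X = np.zeros((len(rows), len(col)))
    for i, j in entries:
        X[i, j] = 1.0
    return X, rows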
In Figure 3.2, we give the weighted adjacency matrix of the generated graph and several examples with high confidence (larger than 0.5). In the left figure, the axis labels show the first capital letter of the domain of each relation; some are replaced with a dot to save space, and the exact domain names and the figure with the full predicate names are provided in the appendix. The domain clusters are marked by black boxes on the diagonal of the adjacency matrix, and the red boxes mark cross-domain relations that are worth attention. Consistent with human intuition, the recovered relationships within a domain are denser than those across domains. Among the cross-domain relations, we find that predicates in the domain "TV" ("T") have many relations with the domain "Film" ("F"), and the domain "Broadcast" (last row) has many relations with the domain "Music" ("M"). Several examples of the predicted causal relationships are listed on the right side of Figure 3.2; the discovered indication relations between predicates are quite reasonable.
4 CONCLUSION AND FUTURE WORK
In this paper, we proposed a topology order search algorithm for the DAG structure recovery problem. Our algorithm is better than existing methods in both time and sample complexity. To be specific, the time complexity of our algorithm is $O(p^2 n + p^3)$, while the fastest previous algorithm is $O(p^5 n)$ (Park, 2020; Gao et al., 2020), where $p$ and $n$ are the numbers of nodes and samples, respectively. Under different assumptions, our algorithm takes $O(\log p)$ or $O(p)$ samples to exactly recover the DAG structure. Experimental results on synthetic data sets, the proteins data set, and the knowledge base data set demonstrate the efficiency and effectiveness of our algorithm. On synthetic data sets, compared with previous baselines, our algorithm improves performance by a significant margin and is at least tens or hundreds of times faster. On the proteins data set, we achieve state-of-the-art performance. On the knowledge base data set, we observe many reasonable structures in the discovered DAG.
The proposed algorithm relies on the assumption of a linear SEM. Generalizing CDCF to non-linear SEMs would be a valuable and important research topic. Learning representations of the observed data for better structure reconstruction via the CDCF algorithm, which requires the algorithm to be differentiable, is also an attractive problem. To deal with extremely large-scale problems, such as millions of nodes, implementing CDCF with sparse matrix storage and computation on GPUs is a promising way to further improve computational performance.
A PROOF OF THEOREM 2.1
In this section, we first give several lemmas, then prove Theorem 2.1.
Lemma A.1 Let $x \in \mathbb{R}^p$ be a zero-mean random vector and $C = \mathbb{E}(xx^{\mathsf T}) \in \mathbb{R}^{p\times p}$ be its covariance matrix. Let $x_1,\ldots,x_n$ be $n$ independent samples and $\hat C = \frac{1}{n}\sum_{k=1}^{n} x_k x_k^{\mathsf T}$ be the sample covariance estimator. Assume $\|C - \hat C\| \le \epsilon$ for some $\epsilon > 0$. Denote $\hat C_\lambda = \hat C + \lambda I$, where $\lambda = O(\epsilon) \ge 0$ is a parameter. Let the Cholesky factorizations of $C$ and $\hat C_\lambda$ be $C = LL^{\mathsf T}$ and $\hat C_\lambda = \hat L\hat L^{\mathsf T}$, respectively, where $L$ and $\hat L$ are both lower triangular. If $\|L^{-1}\|^2(\epsilon + \lambda) < \frac{3}{4}$, then
$$\big|\|L_{i,:}\|^2 - \|\hat L_{i,:}\|^2\big| \le \epsilon + \lambda = O(\epsilon), \quad \text{for } 1 \le i \le p; \qquad (9)$$
$$\big|[L^{-1}]_{ij} - [\hat L^{-1}]_{ij}\big| \le 4\|L^{-1}\|_{2,\infty}^2\|L^{-\mathsf T}\|_{2,\infty}(\epsilon + \lambda) = O(\epsilon), \quad \text{for } i > j. \qquad (10)$$
Proof. For all $1 \le i \le p$, we have
$$\big|\|L_{i,:}\|^2 - \|\hat L_{i,:}\|^2\big| = \big|C_{ii} - [\hat C_\lambda]_{ii}\big| \le \|C - \hat C_\lambda\| \le \|C - \hat C\| + \lambda \le \epsilon + \lambda, \qquad (11)$$
which completes the proof of (9).

Next, we show (10). Let
$$L^{-1}\hat L = I + F, \qquad (I+F)(I+F)^{\mathsf T} = I + E. \qquad (12)$$
We know that
$$\hat L^{-1} - L^{-1} = [(I+F)^{-1} - I]L^{-1} = -F(I+F)^{-1}L^{-1}, \qquad (13)$$
$$E = L^{-1}\hat L\hat L^{\mathsf T}L^{-\mathsf T} - I = L^{-1}(\hat C_\lambda - C)L^{-\mathsf T}. \qquad (14)$$
Then it follows from (13) that for $i > j$
$$\big|[L^{-1}]_{ij} - [\hat L^{-1}]_{ij}\big| \le \|F_{i,1:i-1}\|\,\big\|[(I+F)^{-1}L^{-1}]_{:,j}\big\| \le \|F_{i,1:i-1}\|\,\|(I+F)^{-1}\|\,\|L^{-\mathsf T}\|_{2,\infty}. \qquad (15)$$
First, we give an upper bound for $\|(I+F)^{-1}\|$. Using (12), we have $(I+F)^{-\mathsf T}(I+F)^{-1} = (I+E)^{-1}$. It follows that
$$\|(I+F)^{-1}\| = \|(I+F)^{-\mathsf T}(I+F)^{-1}\|^{\frac{1}{2}} = \|(I+E)^{-1}\|^{\frac{1}{2}} \le \frac{1}{\sqrt{1-\|E\|}} \le \frac{1}{\sqrt{1-\|L^{-1}\|^2\|\hat C_\lambda - C\|}}, \qquad (16)$$
where the last inequality uses (14).
Second, we give an upper bound for $\|F_{i,1:i-1}\|$. It follows from the second equality of (12) that
$$(1+F_{ii})^2 + \|F_{i,1:i-1}\|^2 = 1 + E_{ii}. \qquad (17)$$
Therefore,
$$\|F_{i,1:i-1}\|^2 \le |(1+F_{ii})^2 - 1| + E_{ii} \overset{(a)}{\le} \frac{\hat L_{ii}^2 - L_{ii}^2}{L_{ii}^2} + E_{ii} \overset{(b)}{\le} \frac{\epsilon+\lambda}{L_{ii}^2} + \|L^{-1}\|_{2,\infty}^2\|\hat C_\lambda - C\| \overset{(c)}{\le} 2\|L^{-1}\|_{2,\infty}^2(\epsilon+\lambda), \qquad (18)$$
where (a) uses (12), (b) uses (9) and (14), and (c) uses $\|C - \hat C\| \le \epsilon$. Substituting (18) and (16) into (15), we get
$$\big|[L^{-1}]_{ij} - [\hat L^{-1}]_{ij}\big| \le 2\|L^{-1}\|_{2,\infty}^2\|L^{-\mathsf T}\|_{2,\infty}\,\frac{\epsilon+\lambda}{\sqrt{1-\|L^{-1}\|^2(\epsilon+\lambda)}}. \qquad (19)$$
The conclusion follows since $\|L^{-1}\|^2(\epsilon+\lambda) < \frac{3}{4}$.
Theorem A.2 Let $x \in \mathbb{R}^p$ be a zero-mean random vector and $C = \mathbb{E}(xx^{\mathsf T}) \in \mathbb{R}^{p\times p}$ be its covariance matrix. Let $x_1,\ldots,x_n$ be $n$ independent samples and $\hat C = \frac{1}{n}\sum_{k=1}^{n} x_k x_k^{\mathsf T}$ be the sample covariance estimator. Assume $\|C - \hat C\| \le \epsilon$ for some $\epsilon > 0$. Denote $\hat C_\lambda = \hat C + \lambda I$, where $\lambda = O(\epsilon) \ge 0$ is a parameter. Let the Cholesky factorizations of $C$ and $\hat C_\lambda$ be $C = LL^{\mathsf T}$ and $\hat C_\lambda = \hat L\hat L^{\mathsf T}$, respectively, where $L$ and $\hat L$ are both lower triangular. For the linear SEM model (1), assume (2) and (4), and for $k \in \mathrm{Pa}_G(j)$, $\delta = \inf_{k\in \mathrm{Pa}_G(j)} \delta_{jk} > 0$, where
$$\delta_{jk} = \sigma_{i_j^*}^2 + \|\hat\Sigma_n[(I-T)^{-1}]_{k:j-1,k}\|^2 - \sigma_{i_k^*}^2.$$
If $\delta \ge 4(\epsilon + \lambda)$ and $\|L^{-1}\|^2(\epsilon + \lambda) < \frac{3}{4}$, then CDCF-V is able to recover $P$ exactly. In addition, it holds that
$$\|\mathrm{TRIU}(U_p) - T\|_{\max} \le 4\|\hat\Sigma_*^{-1}(I-T)^{\mathsf T}\|_{2,\infty}^2\,\|(I-T)\hat\Sigma_*^{-\mathsf T}\|_{2,\infty}(\epsilon + \lambda),$$
where $\mathrm{TRIU}(U_p)$ stands for the strictly upper triangular part of $U_p$, and $U_p$ is the output of the outer loop of Algorithm 1 with criterion (V).
Proof. For the SEM model (1), denote $\hat C_* = \mathbb{E}(\frac{1}{n}\hat X^{\mathsf T}\hat X)$ and $\hat\Sigma_*^2 = \mathbb{E}(\frac{1}{n}\hat N^{\mathsf T}\hat N) = \hat\Sigma_n^{\mathsf T}\hat\Sigma_n$; then we have (5), i.e.,
$$\hat C_* = (I-T)^{-\mathsf T}\hat\Sigma_*^2(I-T)^{-1} = (I-T)^{-\mathsf T}\hat\Sigma_n^{\mathsf T}\hat\Sigma_n(I-T)^{-1}. \qquad (20)$$
When the permutation $i^* = [i_1^*,\ldots,i_p^*]$ is exactly recovered, $U_p$ in CDCF-V satisfies
$$\hat C_\lambda = \frac{1}{n}X_{:,i^*}^{\mathsf T}X_{:,i^*} + \lambda I = U_p^{-\mathsf T}U_p^{-1}. \qquad (21)$$
Denote $i_j^* = [i_1^*,\ldots,i_j^*]$ for all $j = 1,\ldots,p$. Considering the $k$th diagonal entries of (20) and (21), a direct calculation gives
$$[\hat C_*]_{kk} = [(I-T)^{-1}]_{:,k}^{\mathsf T}\hat\Sigma_n^{\mathsf T}\hat\Sigma_n[(I-T)^{-1}]_{:,k} = \sigma_{i_k^*}^2 + \|u_k\|^2, \qquad (22)$$
$$[\hat C_\lambda]_{kk} = \frac{1}{n}\|X_{i_k^*}\|^2 + \lambda = \frac{1}{U_{kk}^2} + \|\hat u_k\|^2, \qquad (23)$$
where
$$u_k = [\hat\Sigma_n]_{1:k-1,1:k-1}(I_{k-1} - T_{1:k-1,1:k-1})^{-1}T_{1:k-1,k}, \qquad \hat u_k = \frac{1}{n}U_{k-1}^{\mathsf T}X_{:,i_{k-1}^*}^{\mathsf T}X_{:,i_k^*}. \qquad (24)$$
Using $\|C - \hat C\| \le \epsilon$, we have
$$\big|[\hat C_*]_{kk} - [\hat C_\lambda]_{kk}\big| \le \|C - \hat C_\lambda\| \le \|C - \hat C\| + \lambda \le \epsilon + \lambda. \qquad (25)$$
By Lemma A.1, we have
$$\big|\|u_k\|^2 - \|\hat u_k\|^2\big| \le \epsilon + \lambda. \qquad (26)$$
Using (22), (23), (25) and (26), we get
$$\Big|\sigma_{i_k^*}^2 - \frac{1}{U_{kk}^2}\Big| \le 2(\epsilon + \lambda). \qquad (27)$$
Assume that $i_1^*,\ldots,i_{k-1}^*$ ($k \ge 1$) have all been correctly recovered. Without loss of generality, for $k \in \mathrm{Pa}_G(j)$ we also assume $T_{k:j-1,j} \ne 0$ (otherwise the $j$th and $k$th columns are exchangeable, and $i$ forms another topology order equivalent for the same DAG (Sedgewick & Wayne, 2011)). Then we have for $k \in \mathrm{Pa}_G(j)$ that
$$\begin{aligned}
\frac{1}{n}\|X_{i_j^*}\|^2 + \lambda - \|[\hat u_j]_{1:k-1}\|^2
&\overset{(a)}{=} [\hat C_*]_{jj} + [\hat C_\lambda]_{jj} - [\hat C_*]_{jj} - \|[\hat u_j]_{1:k-1}\|^2 \\
&\overset{(b)}{\ge} [\hat C_*]_{jj} - (\epsilon+\lambda) - \|[u_j]_{1:k-1}\|^2 - (\epsilon+\lambda) \\
&\overset{(c)}{=} \sigma_{i_j^*}^2 + \|[u_j]_{k:j-1}\|^2 - 2(\epsilon+\lambda) \\
&\overset{(d)}{\ge} \sigma_{i_k^*}^2 + \delta - 2(\epsilon+\lambda) \\
&\overset{(e)}{=} [\hat C_*]_{kk} - \|u_k\|^2 + \delta - 2(\epsilon+\lambda) \\
&\overset{(f)}{\ge} [\hat C_\lambda]_{kk} - \|\hat u_k\|^2 + \delta - 4(\epsilon+\lambda) \\
&\overset{(g)}{=} \frac{1}{n}\|X_{i_k^*}\|^2 + \lambda - \|\hat u_k\|^2 + \delta - 4(\epsilon+\lambda),
\end{aligned}$$
where (a) uses (23), (b) and (f) use (25) and Lemma A.1, (c) uses (22), (d) is due to the assumption $\sigma_{i_j^*} \ge \sigma_{i_k^*}$ for $k \in \mathrm{Pa}_G(j)$ together with the definition of $\delta$, (e) uses (22), and (g) uses (23). Therefore, using $\delta > 4(\epsilon+\lambda)$, we have
$$\frac{1}{n}\|X_{i_j^*}\|^2 + \lambda - \|[\hat u_j]_{1:k-1}\|^2 > \frac{1}{n}\|X_{i_k^*}\|^2 + \lambda - \|\hat u_k\|^2,$$
which implies that $i_k^*$ is correctly recovered. So, overall, CDCF-V is able to recover the permutation $P$.
The upper bound for $\|\mathrm{TRIU}(U_p) - T\|_{\max}$ follows from Lemma A.1. The proof is complete.
Proposition 2 Let the rows $N_{i,:}$ be independent and bounded, sub-Gaussian,2 or regular polynomial-tail,3 then for $n > N(\epsilon)$ it holds that $\|\hat C_{xx} - C_{xx}\| \le \epsilon$ w.h.p. Specifically,
$$N(\epsilon) \ge C_1 \log p \left(\frac{\|(I-T)^{-1}\|^2\|C_{nn}\|}{\epsilon}\right)^2 \quad \text{for the bounded class};$$
$$N(\epsilon) \ge C_2\, p \left(\frac{\|(I-T)^{-1}\|^2\|C_{nn}\|}{\epsilon}\right)^2 \quad \text{for the sub-Gaussian class};$$
$$N(\epsilon) \ge C_3\, p \left(\frac{\|(I-T)^{-1}\|^2\|C_{nn}\|}{\epsilon}\right)^{2(1+r^{-1})} \quad \text{for the regular polynomial-tail class}.$$
Proof. For the SEM model (1), we have
$$\|\hat C_{xx} - C_{xx}\| \le \|(I-T)^{-1}\|^2\|\hat C_{nn} - C_{nn}\| \le \|(I-T)^{-1}\|^2\|C_{nn}\|\,\big\|C_{nn}^{-\frac{1}{2}}\hat C_{nn}C_{nn}^{-\frac{1}{2}} - I\big\|, \qquad (28)$$
where $C_{xx} = \mathbb{E}xx^{\mathsf T}$ and $C_{nn} = \mathbb{E}nn^{\mathsf T}$ are the covariance matrices of $x$ and $n$, respectively, and $\hat C_{xx}$, $\hat C_{nn}$ are the corresponding sample covariance matrices. The three results listed above then follow from Corollary 5.52 and Theorem 5.39 in Vershynin (2010) and Theorem 1.1 in Srivastava & Vershynin (2013), respectively.
2A random vector $z$ is isotropic and sub-Gaussian if $\mathbb{E}zz^{\mathsf T} = I$ and there exists a constant $C > 0$ such that $\mathbb{P}(|v^{\mathsf T}z| > t) \le \exp(-Ct^2)$ for any unit vector $v$. Here by "$N_{i,:}$ is sub-Gaussian" we mean that $C_{nn}^{-1/2}N_{i,:}^{\mathsf T}$ is an isotropic and sub-Gaussian random vector. 3A random vector $z$ is isotropic and regular polynomial-tail if $\mathbb{E}zz^{\mathsf T} = I$ and there exist constants $r > 1$, $C > 0$ such that $\mathbb{P}(\|Vz\|^2 > t) \le Ct^{-1-r}$ for any orthogonal projection $V$ and any $t > C \cdot \mathrm{rank}(V)$. Here by "$N_{i,:}$ is regular polynomial-tail" we mean that $C_{nn}^{-1/2}N_{i,:}^{\mathsf T}$ is an isotropic and regular polynomial-tail random vector.
B ADDITIONAL EXPERIMENTS
Here we provide implementation details and additional experiment results.
Figures B.1 and B.2 provide the results for Gumbel and Exponential noise, respectively. As the results show, our algorithm still performs better than the EQVAR method under these noise types.
Tables B.1, B.2, B.3, B.4, B.5 and B.6 give results on 100-node graphs over different sample sizes and variations of our CDCF methods. As noted in Algorithm 1, V, S and VS are the different criteria for selecting the current column, and "+" denotes the sample covariance matrix augmented with the scalar matrix $\frac{\log p}{n}I$. The truncation threshold on column $i$ is $\omega_i = 3.5/\alpha_i$, where $\alpha_i$ is the corresponding diagonal value of the Cholesky factor. According to the results, the "V+" variant achieves the best performance when the sample size is relatively large; when the sample size is small, the sparsity-based criterion brings a very effective improvement. We also test different choices of $\lambda = \beta\frac{\log p}{n}$, $\beta \in \{0.0, 1.0, \ldots, 9.0\}$; the results are given in Tables B.7, B.8, B.9 and B.10. Empirically, $\beta \in \{1.0, 2.0\}$ achieves the best results. In practice, one can sample a relatively small labeled sub-graph of the DAG to tune the hyper-parameters and then apply the setting to the large unlabeled DAG.
To probe the performance limits of our methods, we report SHD over different sample and node numbers in Figures B.3 to B.14, where the x-axis represents the sample number (in thousands), the y-axis the node number, and the color the value of $\log_2(\mathrm{SHD} + 1)$ (the brighter the better). We provide these figures for CDCF-V+, CDCF-S+, and CDCF-VS+ over various graph and noise types; they are drawn from the mean results over ten random seeds. The figures show that an 800-node graph can be exactly recovered from approximately 6000 samples. Comparing CDCF-V+ with CDCF-S+, we find that criterion (S) hurts performance when the sample number is relatively large, while for sample numbers in {1500, 3000} and node numbers in {400, 800}, CDCF-S+ achieves better performance; this trend is also visible in Tables B.1, B.2, B.3. CDCF-VS+ alleviates the weaker performance of CDCF-S when the data is sufficient and achieves good performance on the real-world data set.
We also test the performance on linear SEMs with monotonically increasing noise variance. Concretely, assuming the topology order is $i = \{i_1,\ldots,i_p\}$, we set the noise scale of node $k$ to $\sigma_k = 1 + i_k/p$. We test Gaussian, Gumbel, and Exponential noise with this monotonically increasing variance; the results are reported in Tables B.11, B.12 and B.13. As the results indicate, even with different noise levels, our algorithms achieve good performance and are able to exactly recover the DAG structure when the data is sufficient.
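The only change relative to the earlier SEM sketch is the per-node noise scale; a minimal version, assuming the scale grows with the position in the topological order as described above, is:

import numpy as np

def sample_sem_monotone_noise(W, n, order, rng=None):
    # Linear SEM sample with noise scale sigma_k = 1 + (position of k in the
    # topological order) / p, so later nodes are noisier than earlier ones.
    rng = np.random.default_rng(rng)
    p = W.shape[0]
    rank = np.argsort(order)                     # rank[node] = position in the order
    sigma = 1.0 + (rank + 1) / p
    N = rng.standard_normal((n, p)) * sigma      # column-wise noise scaling
    return N @ np.linalg.inv(np.eye(p) - W)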
For the knowledge base data set, the axis labels in Figure 3.2 are 'Film', 'People', 'Location', 'Music', 'Education', 'Tv', 'Medicine', 'Sports', 'Olympics', 'Award', 'Time', 'Organization', 'Language', 'MediaCommon', 'Influence', 'Dataworld', 'Business', 'Broadcast', from left to right on the x-axis and top to bottom on the y-axis. The adjacency matrix plotted there is re-permuted so that relations in the same domain are close to each other, while the adjacency matrix within each domain is kept upper triangular; this topology is equivalent to the generated matrix in the original order.
Baseline Implementations The baselines are implemented using the code provided at the following links:
• NOTEARS, NOTEARS-MLP: https://github.com/xunzheng/notears
• NPVAR: https://github.com/MingGao97/NPVAR
• EQVAR, LISTEN: https://github.com/WY-Chen/EqVarDAG
• CORL: https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle
• DAG-GNN: https://github.com/fishmoon1234/DAG-GNN

1. What is the focus of the paper in terms of causal discovery?
2. What is the proposed method's basis in the paper, and how does it differ from other approaches in terms of efficiency and time complexity?
3. What kind of theoretical analysis does the paper offer for the resulting graph?
4. How do the experiments support the paper's claims?
5. What limitations does the reviewer see in terms of the applicability of the method to nonlinear cases or more general scenarios?

Summary Of The Paper
The paper works on causal discovery in the linear Gaussian case, for which identifiability is based on (Peters & Bühlmann, 2014). The proposed method is based on Cholesky factorization and has better efficiency/time-complexity than the related state-of-the-art methods. Moreover, it also provides a theoretical analysis of the resulting graph, which is appreciated. The experiments support the claims.
Review
The paper is well written and the claims are well supported by the experiments and the theoretical analysis of the method. The significance of the work lies on the time-complexity side, which is better than existing related works. The correctness and soundness of the method are shown by the theoretical analysis and the experimental results. My only concern is that the work is based on a quite restrictive class of SEMs, for which the extension to nonlinear cases or more general scenarios is not so clear.
Topology order search algorithms (TOSA) (Ghoshal & Honorio, 2017; 2018; Chen et al., 2019; Gao et al., 2020; Park, 2020) decompose the DAG learning problem into two phases: (i) Topology order learning via conditional variance of the observed data; (ii) Graph estimation depends on the learned topology order. Those algorithms reduce the computation complexity into polynomial time and are guaranteed to recover the DAG structure under some identifiable assumptions. Our method in this paper is also a topology order search algorithm and it merges the two phases in TOSA into one. In each iteration, it attempts to find a child or a contemporary of the current node. Meanwhile, it also determines the corresponding column vector of the adjacent matrix. The mergence brings three main differences: First, the topology order in TOSA is recovered purely based on the conditional variance of the observed data, whereas our method may also take the sparsity of the adjacent matrix into account; Second, the graph LASSO methods, which are commonly adopted to estimate the graph in the second phase in TOSA, encourage the sparsity of the precision matrix, whereas our method is able to encourage the sparsity of the adjacent matrix; Third, the time complexity is reduced significantly. To be specific, the time complexity of our algorithm is O(p2n + p3), while the fastest algorithm before is O(p5n) (Park, 2020; Gao et al., 2020). Here p and n are the numbers of nodes and samples, respectively. In addition, under proper assumptions, we show that our algorithm takes O(log(p)) or O(p) samples to exactly recover the DAG structure. Compared with previous TOSA algorithms, the sample complexity of our method is much better. Experimental results on synthetic data sets, proteins data sets, and knowledge base data set demonstrate the efficiency and effectiveness of our algorithm. For synthetic data sets, compared with previous baselines, our algorithm improves the performance with a significant margin and at least tens or hundreds of times faster. For the proteins data set, we achieve state-of-the-art performance. For the knowledge base data set, we can observe many reasonable structures of the discovered DAG. Our code is uploaded as supplementary material and will be open-sourced upon the acceptance of this paper.
The rest of this paper is organized as follows. In Section 2, we present our algorithm together with the theoretical analysis. In Section 3, numerical results on synthetic data sets, proteins data set, and knowledge base data set are given. Finally, the paper is concluded in Section 4.
Notations. The symbol ‖ · ‖ stands for the Euclid norm of a vector or the spectral norm of a matrix. For a vector x = [x1,x2, . . . ,xp] ∈ Rp, ‖ · ‖1 stands for the `1-norm, i.e., ‖x1‖ = ∑p i=1 |xi|. For a matrix X = [Xij ] ∈ Rm×n, ‖ · ‖2,∞ stands for the two-to-infinity norm, i.e., ‖X‖2,∞ = max1≤i≤m ‖Xi,:‖; ‖ · ‖max stands for the max norm, ‖X‖max = maxi,j |Xij |.
2 CAUSAL DISCOVERY VIA CHOLESKY FACTORIZATION (CDCF)
In this section, we first present some preliminaries on DAG, then motivating our algorithm. Next, the detailed algorithm and theoretical guarantees for the exact recovery of the algorithm are given.
2.1 PRELIMINARIES
We assume the observed data is entailed by a DAG G = (p, V,E), where p is the number of nodes, V = {v1, ..., vp} and E = {(vi, vj)|i, j ∈ {1, ...p}} represent the set of nodes and edges, respectively. Each node vi is corresponding to a random variable Xi. The observed data matrix X = [x1, ...,xp] ∈ Rn×p where xi is consisting of n i.i.d observations of the random variable Xi. The joint distribution of X is P (X) = ∏p i=1 P (Xi|PaG(Xi)), where PaG(Xi) := {Xj |(vi, vj) ∈ E} is the parents of node Xi.
Given X , we seek to recover the latent DAG topology structure for the joint probability distribution (Hoyer et al., 2008; Peters et al., 2017). Generally, X is modeled via a structural equation model (SEM) with the form
Xi = fi(PaG(Xi)) +Ni, (i = 1, ..., p),
where fi is an arbitrary function representing the relation between Xi and its parents, Ni is the jointly independent noise variable.
In this paper, we focus on the linear SEM defined by Xi = Xwi +Ni, (i = 1, ..., p),
where wi ∈ Rp is a weighted column vector. Let W = [w1, . . . ,wp] ∈ Rp×p be the weighted adjacency matrix, N = [n1, . . . ,np] ∈ Rn×p be an additive independent noise matrix, where ni is n i.i.d observations following the noise variable Ni. Then the linear SEM model can be formulated as
X = XW +N . (1)
We assume the noise deviation of the child variable is approximately larger than that of its parents (see Theorem 2.1 for details). Following this assumption, a classical identifiable form of SEM is the linear-Gaussian SEM, where all Ni are i.i.d. and homoscedastic (Peters & Bühlmann, 2014).
2.2 ALGORITHM MOTIVATION
As proposed in McKay et al. (2003); Nicholson (1975), a graph is DAG if and only if the corresponding weighted adjacent matrix W can be decomposed into
W = PTPT, (2)
where P is a permutation matrix, T is a strict upper triangular matrix, i.e., Tij = 0 for all i ≤ j.
We denote the scaled permuted data matrix as X̂ = 1√ n XP , the scaled permuted noise matrix as N̂ = 1√ n NP , and the permutation order [i∗1, i ∗ 2 . . . , i ∗ p] = [1, 2, . . . , p]P . We can rewrite (1) as
X̂ = X̂T + N̂ .
Then it follows that X̂ = N̂(I − T )−1. (3)
Let E(N̂TN̂) = Σ̂2∗ = Σ̂TΣ̂, (4)
where Σ̂2∗ is the covariance matrix of the noise variables, Σ̂ is upper triangular – the Cholesky factor of Σ̂2∗. Let the diagonal entries of Σ̂ be σ
2 i∗1 , σ2i∗2 , . . . , σ 2 i∗p . We know that σ2i∗k is the conditional variance of Ni∗k .
Now using (3) and (4), we have the covariance matrix of the permuted data:
Ĉ∗ = E(X̂TX̂) = (I − T )−TE(N̂TN̂)(I − T )−1 = (I − T )−TΣ̂TΣ̂(I − T )−1. (5)
Let L = (I − T )−TΣ̂T, then Ĉ∗ = LLT , which is the Cholesky factorization of the covariance matrix Ĉ∗ since L is lower triangular. Furthermore, we can see that the diagonal entries of L are the same as that of Σ̂, i.e., Lkk = σi∗k , the conditional variances of Xi∗k and Ni∗k are the same.
The task becomes to find the permutation i∗ = [i∗1, i ∗ 2, . . . , i ∗ p] and an upper triangular matrix U such that U−TU−1 is a good approximation of the empirical estimation of the permuted covariance matrix Ĉ = 1nX T :,i∗X:,i∗ , and U satisfies some additional constraints, such as the sparsity, etc.
2.3 ALGORITHM
We iteratively find the permutation i and calculate U via the Cholesky factorization. Assume that ik−1 = [i1, . . . , ik−1] and Uk−1 = U1:k−1,1:k−1 are settled, and we have
C1:k−1,1:k−1 = 1
n XT:,ik−1X:,ik−1 + λI = U −T k−1U −1 k−1, (6)
where λ > 0 is a diagonal augmentation parameter which we will give detailed discussion latter. Next, we show how to find ik and the last column of Uk.
For the time being, let us assume ik is known, we show how to compute the last column of Uk. Let U−1k =
[ U−1k−1 yk
0 αk ] , then[
U−1k−1 yk 0 αk
]T [ U−1k−1 yk
0 αk
] = [ U−Tk−1U −1 k−1 U −T k−1yk
yTkU −1 k−1 α 2 k+‖yk‖ 2
] = 1
n
[ XT:,ik−1 X:,ik−1+λI X T :,ik−1 X:,ik
XT:,ik X:,ik−1 ‖X:,ik‖ 2+λ
] ,
Algorithm 1 Causal Discovery via Cholesky Factorization (CDCF) 1: input: Data matrix X ∈ Rn×p, Truncate Threshold ω > 0, and tuning parameter γ. 2: output: Adjacent Matrix A. 3: Set i = [1, 2, . . . , p], R = ‖X‖22,∞ and λ = γ log p n R;
4: Set ` = argmin {‖X:,i1‖, ‖X:,i2‖, . . . , ‖X:,ip‖}; 5: Exchange i1 and i` in i; Set U1 =
√ n
‖X:,i`‖2+λ ;
6: for k = 2, 3, . . . , p do 7: for j = k, k + 1, . . . , p do 8: yj = 1 nU T k−1X T :,ik−1 X:,ij ;
9: αj = √ 1 n‖X:,ij‖2 + λ− ‖yj‖2;
10: end for 11: (V) ` = argmink≤j≤p α2j ;
(S) ` = argmink≤j≤p ‖Uk−1yj‖1; (VS) ` = argmink≤j≤p ‖Uk−1yj‖1 √∣∣α2j − 1k−1 ∑k−1h=1 1[Uk−1]2hh ∣∣; 12: Exchange ik and i` in i;
13: Set Uk = [ Uk−1 − 1α`Uk−1y`
0 1α`
] ;
14: end for 15: return A = [TRIU(TRUNCATE(Up, ω))]REVERSE(i),REVERSE(i).
where the last equality dues to (6). It follows that
yk = 1
n UTk−1X T :,ik−1 X:,ik , αk =
√ 1
n ‖X:,ik‖2 + λ− ‖yk‖2. (7)
And direct calculation gives rise to Uk = [ U−1k−1 yk
0 αk
]−1 = [ Uk−1 − 1αkUk−1yk
0 1αk
] . (8)
By (8), once ik is settled, we can obtain the last column of Uk. Our task remains to select ik from {1, . . . , p} \ {i1, . . . , ik−1}. There are several ways to accomplish this task. We propose three criteria to select ik. First, we need to compute αj and yj by (7) for all possible j (ij ∈ {1, . . . , p} \ {i1, . . . , ik−1}). Then we select ik according to one of the following criteria:
(V) ik = argmink≤j≤p α2j . Under the assumption that the noise variance of the child variable is approximately larger than that of its parents, it is reasonable/natural to select the index that has the lowest estimation of the noise variance. This criterion is guaranteed to find the correct permutation i∗ with high probability, which is shown in Section 2.4.
(S) ik = argmink≤j≤p ‖Uk−1yj‖1. Using (3) and (6), we know that Up intends to estimate (I − T )Σ̂−1. When the adjacent matrix T is sparse and the noise variables are independent (i.e., Σ̂ is diagonal), we would like to select the index that leading to the most sparse column of Uk. This criterion is especially useful when the number of samples is small, see Tables B.1, B.2 and B.3 in appendix.
(VS) ik = argmink≤j≤p ‖Uk−1yj‖1 √∣∣α2j − 1k−1 ∑k−1h=1 1[Uk−1]2hh ∣∣. We empirically combine
criterion (V) and criterion (S) together to take both aspects (variance and sparsity) into account. Numerically, we found that this criterion achieves the best performance in real-world data.
The diagonal augmentation trick in (6) is commonly used for an invertible and good conditioned estimation of the covariance matrix (see e.g., (Ledoit & Wolf, 2004)). Such a trick not only ensures that our algorithm does not break down due to the singularity of the sample covariance matrix, but also stabilizes the Cholesky factorization, especially when the sample is insufficient. In addition, by setting λ = O( log pn ), the error bound between the population covariance matrix and the augmented sample covariance matrix does not become worse (see Lemma ?? in the appendix). This trick
significantly improves the ability to recover the DAG, especially when the samples are insufficient, see Tables B.4, B.5 and B.6 in appendix.
The detailed algorithm is summarized in Algorithm 1. Some comments and implementation details follow. Line 4, we select the very initial value ` = argmin {‖X:,i1‖, ‖X:,i2‖, . . . , ‖X:,ip‖}. Line 5, we exchange i1 and i` in i and calculate U1 = √ n
‖X:,i`‖2+λ . Lines 6 to 14, we iteratively calculate
Uk and update permutation order i until all the indices are settled. Line 15, we truncate U , take its strict upper triangular part (denoted by “TRIU”) and re-permute the predicted adjacent matrix back to the original order according to the permutation order i. Specifically, the truncation is done column-wisely. By (8), the value of [Up]:,k is inversely proportional to αk. So, for column k, we set ωk =
ω αk , and do the truncation: [Up]ik is set to zero if |[Up]ik| < ωk. On output, node i connects to node j in G if |Aij | > 0.
Time Complexity Note that we do not have to re-calculate the matrix multiplication of XT:,ik−1X:,ij in line 8 since we can calculate C at the cost of O(p
2n) at first. Besides, at step k, we have already calculate UTk−2X T :,ik−1
X:,ij at previous step, we only need to calculate the last entry of yj , which is the inner product between two k dimensional vectors, at the cost ofO(p) in worst case. Overall, the time complexity of CDCF is O(p3 + p2n). When n > p, the complexity becomes O(p2n), which is equivalent to the complexity of calculating the covariance matrix. Additionally, the inner loop (lines 7 to 10) of CDCF can be done in parallel, which makes the algorithm friendly to run on GPU and suitable for large scale calculations.
2.4 EXACT DAG STRUCTURE RECOVERY
The following theorem tells that our algorithm is able to recover the DAG exactly with high probability under proper assumptions.
Theorem 2.1 Let x ∈ Rp be a zero-mean random vector, C = E(xxT) ∈ Rp×p be the covariance matrix. Let x1, . . . ,xn be n independent samples, Ĉ = 1n ∑n k=1 xkx T k be the sample covariance estimator. Assume ‖C − Ĉ‖ ≤ for some > 0. Denote Ĉλ = Ĉ + λI , where λ = O( ) ≥ 0 is a parameter. Let the Cholesky factorizations of C = ExxT and Ĉλ be C = LLT and Ĉλ = L̂L̂T, respectively, where L and L̂ are both lower triangular. For the linear SEM model (1), assume (2) and (4), and for k ∈ PaG(j), δ = infk∈PaG(j) δjk > 0, where
δjk = σ 2 i∗j + ‖Σ̂n[(I − T )−1]k:j−1,k‖2 − σ2i∗k .
If δ ≥ 4( + λ) and ‖L−1‖2( + λ) < 34 , then CDCF-V is able to recover P exactly. In addition, it holds that
‖TRIU(Up)− T ‖max ≤ 4‖Σ̂−1∗ (I − T )T‖22,∞‖(I − T )Σ̂−T∗ ‖2,∞( + λ), where TRIU(Up) stands for the strictly upper triangular part of Up, Up is the output of outer loop of Algorithm 1 with criterion (V).
we know that when T is sparse, we may recover its topology structure by truncating Up.
Proposition 1 Let Ni,: be independent bounded, or sub-Gaussian, or regular polynomial-tail, then for n > N( ), it holds ‖Ĉxx −Cxx‖ ≤ , w.h.p. Specifically,
N( ) ≥ C1 log p (‖(I − T )−1‖2‖Cnn‖ )2 , for bounded class;
N( ) ≥ C2 p (‖(I − T )−1‖2‖Cnn‖ )2 , for the sub-Gaussian class;
N( ) ≥ C3 p (‖(I − T )−1‖2‖Cnn‖ )2(1+r−1) , for the regular polynomial tail class.
The proofs of are provided in the Appendix A. This theorem and proposition also indicates the sample complexity of our algorithm is O(p). This sample complexity is better than the sample complexities of previous methods, see Table 2.1 for a detailed comparison.
3 EXPERIMENTS
In this section, we apply our algorithm to synthetic data sets, proteins data set and knowledge base data set, respectively, to illustrate the efficiency and effectiveness of our algorithm.
3.1 LINEAR SEM
We evaluate the proposed methods on simulated graphs from two well-known ensembles of random graph types: Erdös–Rényi (ER) (Gilbert, 1959) and Scale-free (SF) (Barabási & Albert, 1999). The average edge number per node is denoted after the graph type. For example, ER2 represents two edges per node on average. After the graph structure is settled, we assign uniformly random edge weights to obtain a weight matrix W . We generate the observation data X from the linear SEM with three noise distributions: Gaussian, Gumbel, Exponential.
We chose our baseline methods as NOTEARS (Zheng et al., 2018), DAG-GNN (Yu et al., 2019), CORL (Wang et al., 2021), NPVAR (Gao et al., 2020), and EQVAR (Chen et al., 2019). Other methods such as PC algorithm (Spirtes et al., 2000), LiNGAM (Shimizu et al., 2006), FGS (Ramsey et al., 2017), MMHC (Tsamardinos et al., 2006), L1OBS (Schmidt et al., 2007), CAM (Bühlmann et al., 2013), RL-BIC2 (Zhu et al., 2020), A*LASSO (Xiang & Kim, 2013), LISTEN (Ghoshal & Honorio, 2018), US (Park, 2020) perform worse than or approximately equal to the selected baselines, and the results can be found in the corresponding papers.
Table 3.1 presents the structural Hamming distance (SHD) of baseline methods and our method on 3000 samples (n = 3000). Nodes number p is noted in the first column. Graph type and edge level are noted in the second column. We only report the SHD of different algorithms due to page limitation, and we find that other metrics such as true positive rate (TPR), false discovery rate (FDR), false positive rate (FPR), and F1 score have the similar comparative performance with SHD. We also test bottom-up EQVAR which is equivalent to LISTEN, the result is worse than top-down EQVAR (EV-TD) in this synthesis experiment, so we do not include the result in the table. For p = 1000 graphs, we only report the result of EV-TD and CDCF since other algorithms spend too much time (longer than a week) to recover a DAG. We test our algorithms with different variations according to criteria (V, S, VS) introduced in Section 2.3, and with diagonal augmentation trick noted by a “+” as postfix. For example, "CDCF-V" means CDCF with V criterion and λ = 0, and "CDCF-V+" means CDCF with V criterion and λ = O( log pn ). The implementation details are in the Appendix B. We report the result of CDCF-V+ here, and the results of other CDCF variations can be found in Appendix Table B.4. We run our methods on ten randomly generated graphs and report the mean and variance in the table. Figure 3.1 plots the SHD results tested on 100 nodes graph recovering from different sample sizes. We choose EV-TD and high dimension top down (EV-HTD) as baselines when p > n and p ≤ n, respectively. We can see from the results, CDCF-V+ achieves significantly better performance comparing with previous baselines.
Table 3.2 shows the running time which is tested on a 2.3 GHz single Intel Core i5 CPU. Besides, parallel calculation of the matrix multiplication on GPU makes the algorithm even faster. Recovering 5000 and 10000 nodes graph from 3000 samples on an A100 Nvidia GPU is approximately 400 and 2400 seconds, respectively. For comparison, EV-TD costs approximately 100 hours to recover a 1000 nodes DAG from 3000 samples. As illustrated in the table, CDCF is approximately dozens or hundreds of times faster than EV-TD and LISTEN, and tens of thousands times faster than NOTEARS as CDCF does not have to update the parameters with gradients.
Due to the page limitation, further experiments and discussions of the ablation study (Figures B.3 to B.14, Tables B.1 to B.6), choice of λ (Tables B.7 to B.10), and performances on different noise distribution (Figures B.1, B.2) and deviation (Tables B.11, B.12, B.13) are given in Appendix B.
3.2 PROTEINS DATA SET
We consider a bioinformatics data set (Sachs et al., 2005) consisting of continuous measurements of expression levels of proteins and phospholipids in the human immune system cells. This is a widely used data set for research on graphical models, with experimental annotations accepted by the biological research community. Following the previous algorithms setting, we noticed that different previous papers adopted different observations. To included them all, we considered the observational 853 samples from the "CD3, CD28" simulation tested by Teyssier & Koller (2005); Lachapelle et al. (2020); Zhu et al. (2020) and all 7466 samples from nine different simulations tested by Zheng et al. (2018; 2020); Yu et al. (2019).
We report the experimental results on both settings in Table 3.3. The implementation codes of the baselines are introduced in the appendix, and we use the default settings of the hyper-parameters provided in their codes. The evaluate metric is FDR, TPR, FPR, SHD, predicted nodes number (N), precision (P), F1 score. As the recall score is equal to TPR, we do not include it in the table. In both settings, CDCF-VS+ achieves state-of-the-art performance. 1 Several reasons make the recovered graph not exactly the same as the expected one. The ground truth graph suggested by the paper is mixed with directed and indirect edges. Under the settings of SEM, the node "PKA" is quite similar to the leaf nodes since most of its edges are indirect while the ground truth graph notes it as the out edges. Non-linear would not be an impact issue here since NOTEARS and our algorithm both achieve decent results. In the meantime, we do not deny that further extension of our algorithm to non-linear representation would witness an improvement on this data set.
3.3 KNOWLEDGE BASE DATA SET
We test our algorithm on FB15K-237 data set (Toutanova et al., 2015) in which the knowledge is organized as {Subject, Predicate,Object} triplets. The data set has 15K triplets and 237 types of predicates. In this experiment, we only consider the single jump predicate between the entities, which
1For NOTEARS-MLP, Table 3.3 reported the results reproduced by the code provided in Zheng et al. (2020).
have 97 predicates remained. We want to discover the causal relationships between the predicates. We organize the observation data as each sample corresponds to an entity with awareness of the position (Subject or Object), and each variable corresponds to a predicate in this knowledge base.
In Figure 3.2, we give the adjacent weighted matrix of the generated graph and several examples with high confidence (larger than 0.5). In the left figure, the label of the axis notes the first capital letter of the domain of the relations. Some of them are replaced with a dot to save space. The exact domain name and the picture with the full predicate name are provided in the appendix. The domain clusters are denoted in black boxes at the diagonal of the adjacent matrix. The red boxes denoted the cross-domain relations that are worth paying attention to. Consistent with the innateness of human sense, the recovered relationships inside a domain are denser than those across domains. In the cross-domain relations, we found that the predicate in domain "TV" ("T") has many relations with the domain "Film" ("F"), the domain "Broadcast" (last row) have many relations with the domain "Music" ("M"). Several cases of the predicted causal relationships are listed on the right side of Figure 3.2, we can see that the discovered indication relations between predicates are quite reasonable.
4 CONCLUSION AND FUTURE WORK
In this paper, we proposed a topology search algorithm for the DAG structure recovery problem. Our algorithm is better than the existing methods in both time and sample complexities. To be specific, the time complexity of our algorithm isO(p2n+ p3), while the fastest algorithm before isO(p5n) (Park, 2020; Gao et al., 2020), where p and n are the numbers of nodes and samples, respectively. Under different assumptions, our algorithm takes O(log(p)) or O(p) samples to exactly recover the DAG structure. Experimental results on synthetic data sets, proteins data sets, and knowledge base data set demonstrate the efficiency and effectiveness of our algorithm. For synthetic data sets, compared with previous baselines, our algorithm improves the performance with a significant margin and at least tens or hundreds of times faster. For the proteins data set, we achieve state-of-the-art performance. For the knowledge base data set, we can observe many reasonable structures of the discovered DAG.
The proposed algorithm is under the assumption of linear SEM. Generalization of CDCF to nonlinear SEM would be a valuable and important research topic. Learning the representation of the observed data for better structure reconstruction via the CDCF algorithm, which requires the algorithm differentiable, is also an attractive problem. To deal with the extremely large-scale problems, such as millions of nodes, implementing CDCF via sparse matrix storage and calculation on the GPU is a promising way to further improve computational performance.
A PROOF OF THEOREM 2.1
In this section, we first give several lemmas, then prove Theorem 2.1.
Lemma A.1 Let x ∈ R^p be a zero-mean random vector and C = E(xx^T) ∈ R^{p×p} be its covariance matrix. Let x_1, . . . , x_n be n independent samples and Ĉ = (1/n) ∑_{k=1}^n x_k x_k^T be the sample covariance estimator. Assume ‖C − Ĉ‖ ≤ ε for some ε > 0. Denote Ĉ_λ = Ĉ + λI, where λ = O(ε) ≥ 0 is a parameter. Let the Cholesky factorizations of C and Ĉ_λ be C = LL^T and Ĉ_λ = L̂L̂^T, respectively, where L and L̂ are both lower triangular. If ‖L^{-1}‖^2(ε + λ) < 3/4, then
|‖L_{i,:}‖^2 − ‖L̂_{i,:}‖^2| ≤ ε + λ = O(ε), for 1 ≤ i ≤ p; (9)
|[L^{-1}]_{ij} − [L̂^{-1}]_{ij}| ≤ 4‖L^{-1}‖_{2,∞}^2 ‖L^{-T}‖_{2,∞} (ε + λ) = O(ε), for i > j. (10)
Proof. For all 1 ≤ i ≤ p, we have
|‖L_{i,:}‖^2 − ‖L̂_{i,:}‖^2| = |C_{ii} − [Ĉ_λ]_{ii}| ≤ ‖C − Ĉ_λ‖ ≤ ‖C − Ĉ‖ + λ ≤ ε + λ, (11)
which completes the proof of (9).
Next, we show (10). Let
L^{-1}L̂ = I + F, (I + F)(I + F)^T = I + E. (12)
We know that
L̂^{-1} − L^{-1} = [(I + F)^{-1} − I]L^{-1} = −F(I + F)^{-1}L^{-1}, (13)
E = L^{-1}L̂L̂^T L^{-T} − I = L^{-1}(Ĉ_λ − C)L^{-T}. (14)
Then it follows from (13) that, for i > j,
|[L^{-1}]_{ij} − [L̂^{-1}]_{ij}| ≤ ‖F_{i,1:i−1}‖ ‖[(I + F)^{-1}L^{-1}]_{:,j}‖ ≤ ‖F_{i,1:i−1}‖ ‖(I + F)^{-1}‖ ‖L^{-T}‖_{2,∞}. (15)
First, we give an upper bound for ‖(I + F)^{-1}‖. Using (12), we have (I + F)^{-T}(I + F)^{-1} = (I + E)^{-1}. It follows that
‖(I + F)^{-1}‖ = ‖(I + F)^{-T}(I + F)^{-1}‖^{1/2} = ‖(I + E)^{-1}‖^{1/2} ≤ 1/√(1 − ‖E‖) ≤ 1/√(1 − ‖L^{-1}‖^2 ‖Ĉ_λ − C‖), (16)
where the last inequality uses (14).
Second, we give an upper bound for ‖F_{i,1:i−1}‖. It follows from the second equality of (12) that
(1 + F_{ii})^2 + ‖F_{i,1:i−1}‖^2 = 1 + E_{ii}. (17)
Therefore,
‖F_{i,1:i−1}‖^2 ≤ |(1 + F_{ii})^2 − 1| + E_{ii} ≤(a) (L̂_{ii}^2 − L_{ii}^2)/L_{ii}^2 + E_{ii} ≤(b) (ε + λ)/L_{ii}^2 + ‖L^{-1}‖_{2,∞}^2 ‖Ĉ_λ − C‖ ≤(c) 2‖L^{-1}‖_{2,∞}^2 (ε + λ), (18)
where (a) uses (12), (b) uses (9) and (14), and (c) uses ‖C − Ĉ‖ ≤ ε. Substituting (18) and (16) into (15), we get
|[L^{-1}]_{ij} − [L̂^{-1}]_{ij}| ≤ 2‖L^{-1}‖_{2,∞}^2 ‖L^{-T}‖_{2,∞} (ε + λ)/√(1 − ‖L^{-1}‖^2(ε + λ)). (19)
The conclusion follows since ‖L^{-1}‖^2(ε + λ) < 3/4.
Theorem A.2 Let x ∈ R^p be a zero-mean random vector and C = E(xx^T) ∈ R^{p×p} be its covariance matrix. Let x_1, . . . , x_n be n independent samples and Ĉ = (1/n) ∑_{k=1}^n x_k x_k^T be the sample covariance estimator. Assume ‖C − Ĉ‖ ≤ ε for some ε > 0. Denote Ĉ_λ = Ĉ + λI, where λ = O(ε) ≥ 0 is a parameter. Let the Cholesky factorizations of C and Ĉ_λ be C = LL^T and Ĉ_λ = L̂L̂^T, respectively, where L and L̂ are both lower triangular. For the linear SEM model (1), assume (2) and (4), and for k ∈ Pa_G(j), δ = inf_{k∈Pa_G(j)} δ_{jk} > 0, where
δ_{jk} = σ_{i*_j}^2 + ‖Σ̂_n [(I − T)^{-1}]_{k:j−1,k}‖^2 − σ_{i*_k}^2.
If δ ≥ 4(ε + λ) and ‖L^{-1}‖^2(ε + λ) < 3/4, then CDCF-V is able to recover P exactly. In addition, it holds that
‖TRIU(U_p) − T‖_max ≤ 4‖Σ̂_*^{-1}(I − T)^T‖_{2,∞}^2 ‖(I − T)Σ̂_*^{-T}‖_{2,∞} (ε + λ),
where TRIU(U_p) stands for the strictly upper triangular part of U_p, and U_p is the output of the outer loop of Algorithm 1 with criterion (V).
Proof. For the SEM model (1), denote Ĉ_* = E((1/n)X̂^T X̂) and Σ̂_*^2 = E((1/n)N̂^T N̂) = Σ̂_n^T Σ̂_n. Then we have (5), i.e.,
Ĉ_* = (I − T)^{-T} Σ̂_*^2 (I − T)^{-1} = (I − T)^{-T} Σ̂_n^T Σ̂_n (I − T)^{-1}. (20)
When the permutation i* = [i*_1, . . . , i*_p] is exactly recovered, U_p in CDCF-V satisfies
Ĉ_λ = (1/n) X_{:,i*}^T X_{:,i*} + λI = U_p^{-T} U_p^{-1}. (21)
Denote i*_j = [i*_1, . . . , i*_j] for all j = 1, . . . , p. Consider the kth diagonal entries of (20) and (21). By direct calculation, we get
[Ĉ_*]_{kk} = [(I − T)^{-1}]_{:,k}^T Σ̂_n^T Σ̂_n [(I − T)^{-1}]_{:,k} = σ_{i*_k}^2 + ‖u_k‖^2, (22)
[Ĉ_λ]_{kk} = (1/n)‖X_{i*_k}‖^2 + λ = 1/U_{kk}^2 + ‖û_k‖^2, (23)
where
u_k = [Σ̂_n]_{1:k−1,1:k−1} (I_{k−1} − T_{1:k−1,1:k−1})^{-1} T_{1:k−1,k},  û_k = (1/n) U_{k−1}^T X_{:,i*_{k−1}}^T X_{:,i*_k}. (24)
Using ‖C − Ĉ‖ ≤ ε, we have
|[Ĉ_*]_{kk} − [Ĉ_λ]_{kk}| ≤ ‖C − Ĉ_λ‖ ≤ ‖C − Ĉ‖ + λ ≤ ε + λ. (25)
By Lemma A.1, we have
|‖u_k‖^2 − ‖û_k‖^2| ≤ ε + λ. (26)
Using (22), (23), (25) and (26), we get
|σ_{i*_k}^2 − 1/U_{kk}^2| ≤ 2(ε + λ). (27)
Assume that i*_1, . . . , i*_{k−1} (k ≥ 1) are all correctly recovered. Without loss of generality, for k ∈ Pa_G(j), we also assume T_{k:j−1,j} ≠ 0 (otherwise, the jth and kth columns are exchangeable, and i forms another topology order equivalent to the same DAG (Sedgewick & Wayne, 2011)). Then we have for k ∈ Pa_G(j) that
(1/n)‖X_{i*_j}‖^2 + λ − ‖[û_j]_{1:k−1}‖^2
 (a)= [Ĉ_*]_{jj} + [Ĉ_λ]_{jj} − [Ĉ_*]_{jj} − ‖[û_j]_{1:k−1}‖^2
 (b)≥ [Ĉ_*]_{jj} − (ε + λ) − ‖[u_j]_{1:k−1}‖^2 − (ε + λ)
 (c)= σ_{i*_j}^2 + ‖[u_j]_{k:j−1}‖^2 − 2(ε + λ)
 (d)≥ σ_{i*_k}^2 + δ − 2(ε + λ)
 (e)= [Ĉ_*]_{kk} − ‖u_k‖^2 + δ − 2(ε + λ)
 (f)≥ [Ĉ_λ]_{kk} − ‖û_k‖^2 + δ − 4(ε + λ)
 (g)= (1/n)‖X_{i*_k}‖^2 + λ − ‖û_k‖^2 + δ − 4(ε + λ),
where (a) uses (23), (b) and (f) use (25) and Lemma A.1, (c) uses (22), (d) is due to the assumption σ_{i*_j} ≥ σ_{i*_k} for k ∈ Pa_G(j), (e) uses (22), and (g) uses (23). Therefore, using δ > 4(ε + λ), we have
(1/n)‖X_{i*_j}‖^2 + λ − ‖[û_j]_{1:k−1}‖^2 > (1/n)‖X_{i*_k}‖^2 + λ − ‖û_k‖^2,
which implies that i*_k can be correctly recovered. Overall, CDCF-V is able to recover the permutation P.
The upper bound for ‖TRIU(U_p) − T‖_max follows from Lemma A.1. The proof is completed.
Proposition 2 Let N_{i,:} be independent bounded, or sub-Gaussian,² or regular polynomial-tail;³ then for n > N(ε), it holds that ‖Ĉ_xx − C_xx‖ ≤ ε w.h.p. Specifically,
N(ε) ≥ C_1 log p (‖(I − T)^{-1}‖^2 ‖C_nn‖ / ε)^2, for the bounded class;
N(ε) ≥ C_2 p (‖(I − T)^{-1}‖^2 ‖C_nn‖ / ε)^2, for the sub-Gaussian class;
N(ε) ≥ C_3 p (‖(I − T)^{-1}‖^2 ‖C_nn‖ / ε)^{2(1+r^{-1})}, for the regular polynomial-tail class.
Proof. For the SEM model (1), we have
‖Ĉ_xx − C_xx‖ ≤ ‖(I − T)^{-1}‖^2 ‖Ĉ_nn − C_nn‖ ≤ ‖(I − T)^{-1}‖^2 ‖C_nn‖ ‖C_nn^{-1/2} Ĉ_nn C_nn^{-1/2} − I‖, (28)
where C_xx = E(xx^T) and C_nn = E(nn^T) are the covariance matrices of x and n, respectively, and Ĉ_xx, Ĉ_nn are the corresponding sample covariance matrices. The three results listed above follow from Corollary 5.52 and Theorem 5.39 in Vershynin (2010) and Theorem 1.1 in Srivastava & Vershynin (2013), respectively.
²A random vector z is isotropic and sub-Gaussian if E(zz^T) = I and there exists a constant C > 0 such that P(|v^T z| > t) ≤ exp(−Ct^2) for any unit vector v. Here, by "N_{i,:} is sub-Gaussian" we mean that C_nn^{-1/2} N_{i,:}^T is an isotropic and sub-Gaussian random vector.
³A random vector z is isotropic and regular polynomial-tail if E(zz^T) = I and there exist constants r > 1, C > 0 such that P(‖Vz‖^2 > t) ≤ Ct^{−1−r} for any orthogonal projection V and any t > C · rank(V). Here, by "N_{i,:} is regular polynomial-tail" we mean that C_nn^{-1/2} N_{i,:}^T is an isotropic and regular polynomial-tail random vector.
B ADDITIONAL EXPERIMENTS
Here we provide implementation details and additional experiment results.
Figures B.1 and B.2 provide the results for Gumbel and Exponential noise, respectively. As the results show, our algorithm still performs better than the EqVar method under different noise types.
Tables B.1, B.2, B.3, B.4, B.5, and B.6 give results of our CDCF variants on 100-node graphs over different sample sizes and variances. As noted in Algorithm 1, V, S, and VS are the different criteria for selecting the current column, and "+" indicates that the sample covariance matrix is augmented with the scalar matrix (log p / n) I. The truncation threshold on column i is ω_i = 3.5/α_i, where α_i is the diagonal value of the Cholesky factor. According to the results, the "V+" variant achieves the best performance when the sample size is relatively large. When the sample size is small, the sparsity-based criterion brings a very effective performance improvement. We also test different choices of λ = β log p / n with β ∈ {0.0, 1.0, ..., 9.0}; the results are given in Tables B.7, B.8, B.9, and B.10. Empirically, β ∈ {1.0, 2.0} achieves better results. In practice, one can sample a relatively small labeled sub-graph of the DAG to tune the hyper-parameter and then apply the chosen setting to the large unlabeled DAG.
To test the performance limits of our methods, we report SHD over different sample and node numbers in Figures B.3 to B.14, where the x-axis represents the sample number (in thousands), the y-axis denotes the node number, and the color represents the value of log2(SHD + 1) (the brighter the better). We provide the figures for CDCF-V+, CDCF-S+, and CDCF-VS+ on various graph and noise types. The figures show mean results over ten random seeds. They show that the graph can be exactly recovered for 800 nodes with approximately 6000 samples. Comparing CDCF-V+ with CDCF-S+, we find that criterion (S) hurts performance when the sample number is relatively large. When the sample number is in {1500, 3000} and the node number is in {400, 800}, CDCF-S+ achieves better performance. The same trend can also be seen in Tables B.1, B.2, and B.3. CDCF-VS+ alleviates the poor performance of CDCF-S when the data is sufficient and achieves good performance on the real-world data set.
We also test the performance on linear SEMs with monotonically increasing noise variance. Concretely, assuming the topology order is i = {i_1, ..., i_p}, we set the noise variance of node k to σ_k = 1 + i_k/p. We test Gaussian, Gumbel, and Exponential noise with this monotonically increasing noise variance. The results are reported in Tables B.11, B.12, and B.13. As the results indicate, even with different noise levels, our algorithms achieve good performance and are able to exactly recover the DAG structure when the data is sufficient.
For the knowledge base data set, the axis labels of Figure 3.2 are ‘Film’, ‘People’, ‘Location’, ‘Music’, ‘Education’, ‘Tv’, ‘Medicine’, ‘Sports’, ‘Olympics’, ‘Award’, ‘Time’, ‘Organization’, ‘Language’, ‘MediaCommon’, ‘Influence’, ‘Dataworld’, ‘Business’, ‘Broadcast’, from left to right on the x-axis and top to bottom on the y-axis, respectively. The adjacency matrix plotted here is re-permuted so that relations in the same domain are close to each other, and the block of the adjacency matrix inside each domain is kept upper triangular. Such a topology is equivalent to the generated matrix in the original order.
Baseline Implementations The baselines are implemented via the codes provided from the following links:
• NOTEARS, NOTEARS-MLP: https://github.com/xunzheng/notears
• NPVAR: https://github.com/MingGao97/NPVAR
• EQVAR, LISTEN: https://github.com/WY-Chen/EqVarDAG
• CORL: https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle
• DAG-GNN: https://github.com/fishmoon1234/DAG-GNN
2. What are the strengths and weaknesses of the proposed algorithm in terms of identifiability conditions and theoretical assumptions?
3. Do you have any concerns about the numerical experiments supporting the theoretical findings, specifically regarding sample complexity and sparsity levels?
4. How would you improve the paper by addressing the concerns mentioned in the review? | Summary Of The Paper
Review | Summary Of The Paper
This paper develops a new algorithm for learning linear structural equation models using cholesky factorization. This paper explains that the proposed algorithm is consistent in high dimensional settings and computational feasible. This paper thoroughly discusses the recent studies of learning linear SEMs and provide a clear motivation. Furthermore, this paper provides a lot of numerical experiments to support its theoretical findings.
Review
Strength: The paper provides a thorough discussion of previous studies.
Weaknesses: Some points of the paper are not clear.
(1) The new identifiability condition (V) on page 4 requires a clear justification. For example, suppose that the true graph is V = (1, 2, 3) and E = { (2, 3) }. If node 1 is chosen as i_1, then U_{k-1} y_j = 0 for j = 2, 3. Hence, I think the conditions (V) and (VS) require a mathematical proof.
(2) The required conditions for Theorem 2.1 appear to be unrealistic. The first condition is a bounded random vector, |x|^2 \leq R. This is definitely not satisfied when the error variables are Gaussian or Exponential, as used in the numerical experiments. Furthermore, as the number of nodes increases, R increases. In addition, R clearly depends on the sparsity level. For instance, let X1 ~ N(0, \sigma^2), X2 = X1 + N(0, \sigma^2), X3 = X1 + X2 + N(0, \sigma^2). Then E( X1^2 + X2^2 + X3^2 ) = \sigma^2 + 2\sigma^2 + 6\sigma^2 = 9\sigma^2. However, for a sparser case, X1 ~ N(0, \sigma^2), X2 = X1 + N(0, \sigma^2), X3 = X1 + N(0, \sigma^2), we have E( X1^2 + X2^2 + X3^2 ) = \sigma^2 + 2\sigma^2 + 2\sigma^2 = 5\sigma^2.
(3) The numerical experiments do not support the theoretical findings of the paper. According to Theorem 2.1, the sample complexity does not depend on the sparsity level of the graph. Hence, the difference in performance of the proposed method between ER2 and ER5 cannot be explained. In addition, it is unclear that the SHD converges to zero. Hence, it would be better to provide the empirical probability P(TRIU(U_p) = T).
(4) As explained, the considered distributions (Gaussian, Exponential) do not satisfy the bounded random variable assumption.
ICLR | Title
Causal Discovery via Cholesky Factorization
Abstract
Discovering causal relationships by recovering the directed acyclic graph (DAG) structure from observed data is a challenging combinatorial problem. This paper proposes an extremely fast, easy-to-implement, and high-performance DAG structure recovery algorithm. The algorithm is based on the Cholesky factorization of the covariance/precision matrix. The time complexity of the algorithm is O(p^2 n + p^3), where p and n are the numbers of nodes and samples, respectively. Under proper assumptions, we show that our algorithm takes O(log(p)) or O(p) samples to exactly recover the DAG structure. In both time and sample complexity, our algorithm improves on previous algorithms. On synthetic and real-world data sets, our algorithm is significantly faster than previous methods and achieves state-of-the-art performance.
1 INTRODUCTION
As Schelling said: “The whole world is thoroughly caught in reason, but the question is: how did it get caught in the network of reason in the first place?” (Kuhn, 1942; Žižek & von Schelling, 1997). Learning the causal relationships between variables is a fundamental problem with many applications in biology, machine learning, medicine, and economics. The problem is usually formulated as finding a directed acyclic graph (DAG) from an observational joint distribution. Unfortunately, learning the DAG structure from observations has been proved to be an NP-hard problem (Chickering, 1995; Chickering et al., 2004).
The problem is generally formulated as the structural equation model (SEM), where the variable of a child node is a function of its parents with additional noises. Depending on the types of functions (linear or non-linear) and noises (Gaussian, Gumbel, etc.), there are several SEM families, e.g., Spirtes et al. (2000); Geiger & Heckerman (1994); Shimizu et al. (2006). In general, the graph can be identified from the joint distribution only up to Markov equivalence classes. Zhang & Hyvarinen (2012); Peters et al. (2014); Peters & Bühlmann (2014); Gao et al. (2020) propose several SEM forms that make the graph fully identifiable from the observed data.
Various algorithms had been proposed to deal with the problem. Search-based algorithms (Chickering, 2002; Friedman & Koller, 2003; Ramsey et al., 2017; Tsamardinos et al., 2006; Aragam & Zhou, 2015; Teyssier & Koller, 2005; Ye et al., 2019; Lv et al., 2021) generally adopt a score (e.g., BIC (Peters et al., 2014) score, Cholesky score (Ye et al., 2019), remove-fill score (Squires et al., 2020)) to measure the fitness of different graphs over data and then search over the legal DAG space to find the structure that achieves the highest score. However, exhaustive search over the legal DAG space is infeasible when p is large (e.g., there are 4.1e18 DAGs for p = 10 (Sloane et al., 2003)). Those algorithms go in quest of a trade-off between the performance and the time complexity.
Since Zheng et al. (2018) proposed an approach that converts the traditional combinatorial optimization problem into a continuous program, many methods (Yu et al., 2019; Lee et al., 2019; Ng et al., 2019a;b; Zheng et al., 2020; Lachapelle et al., 2020; Squires et al., 2020; Zhu et al., 2021) have been proposed. Those algorithms formalize the problem as a data reconstruction task with various differentiable constraints on the DAG adjacent matrix and solve it via the augmented Lagrangian method. These algorithms are able to utilize neural networks to approximate the complicated relations between the features in the observed data and achieve good performances. Recently, reinforcement learning based algorithms (Zhu et al., 2020; Wang et al., 2021) also improved the performance by exploring the possible DAG structure candidates. The algorithms update the parameters of the model
via policy gradient whenever a better DAG structure is explored, i.e., one with a higher reward, where the reward measures how well an explored structure satisfies the DAG requirement and fits the observed data.
Topology order search algorithms (TOSA) (Ghoshal & Honorio, 2017; 2018; Chen et al., 2019; Gao et al., 2020; Park, 2020) decompose the DAG learning problem into two phases: (i) topology order learning via the conditional variance of the observed data; (ii) graph estimation based on the learned topology order. These algorithms reduce the computational complexity to polynomial time and are guaranteed to recover the DAG structure under certain identifiability assumptions. Our method in this paper is also a topology order search algorithm, and it merges the two phases of TOSA into one. In each iteration, it attempts to find a child or a contemporary of the current node, and at the same time determines the corresponding column vector of the adjacency matrix. This merging brings three main differences. First, the topology order in TOSA is recovered purely based on the conditional variance of the observed data, whereas our method may also take the sparsity of the adjacency matrix into account. Second, the graph LASSO methods, which are commonly adopted to estimate the graph in the second phase of TOSA, encourage sparsity of the precision matrix, whereas our method is able to encourage sparsity of the adjacency matrix. Third, the time complexity is reduced significantly. To be specific, the time complexity of our algorithm is O(p^2 n + p^3), while the fastest previous algorithm is O(p^5 n) (Park, 2020; Gao et al., 2020). Here p and n are the numbers of nodes and samples, respectively. In addition, under proper assumptions, we show that our algorithm takes O(log(p)) or O(p) samples to exactly recover the DAG structure. Compared with previous TOSA algorithms, the sample complexity of our method is much better. Experimental results on synthetic data sets, a proteins data set, and a knowledge base data set demonstrate the efficiency and effectiveness of our algorithm. For the synthetic data sets, compared with previous baselines, our algorithm improves the performance by a significant margin and is at least tens or hundreds of times faster. For the proteins data set, we achieve state-of-the-art performance. For the knowledge base data set, we can observe many reasonable structures in the discovered DAG. Our code is uploaded as supplementary material and will be open-sourced upon the acceptance of this paper.
The rest of this paper is organized as follows. In Section 2, we present our algorithm together with the theoretical analysis. In Section 3, numerical results on synthetic data sets, proteins data set, and knowledge base data set are given. Finally, the paper is concluded in Section 4.
Notations. The symbol ‖·‖ stands for the Euclidean norm of a vector or the spectral norm of a matrix. For a vector x = [x_1, x_2, . . . , x_p] ∈ R^p, ‖·‖_1 stands for the ℓ1-norm, i.e., ‖x‖_1 = ∑_{i=1}^p |x_i|. For a matrix X = [X_{ij}] ∈ R^{m×n}, ‖·‖_{2,∞} stands for the two-to-infinity norm, i.e., ‖X‖_{2,∞} = max_{1≤i≤m} ‖X_{i,:}‖; ‖·‖_max stands for the max norm, ‖X‖_max = max_{i,j} |X_{ij}|.
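For readers who prefer code, a minimal NumPy sketch of the two matrix norms defined above (helper names are ours):

```python
import numpy as np

def two_to_inf_norm(X):
    # ||X||_{2,inf} = max_i ||X_{i,:}||_2 : the largest Euclidean norm over rows
    return np.max(np.linalg.norm(X, axis=1))

def max_norm(X):
    # ||X||_max = max_{i,j} |X_ij|
    return np.max(np.abs(X))

X = np.array([[3.0, 4.0], [1.0, 1.0]])
print(two_to_inf_norm(X))  # 5.0 (from the row [3, 4])
print(max_norm(X))         # 4.0
```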
2 CAUSAL DISCOVERY VIA CHOLESKY FACTORIZATION (CDCF)
In this section, we first present some preliminaries on DAG, then motivating our algorithm. Next, the detailed algorithm and theoretical guarantees for the exact recovery of the algorithm are given.
2.1 PRELIMINARIES
We assume the observed data is entailed by a DAG G = (p, V, E), where p is the number of nodes, and V = {v_1, ..., v_p} and E = {(v_i, v_j) | i, j ∈ {1, ..., p}} represent the sets of nodes and edges, respectively. Each node v_i corresponds to a random variable X_i. The observed data matrix is X = [x_1, ..., x_p] ∈ R^{n×p}, where x_i consists of n i.i.d. observations of the random variable X_i. The joint distribution of X is P(X) = ∏_{i=1}^p P(X_i | Pa_G(X_i)), where Pa_G(X_i) := {X_j | (v_i, v_j) ∈ E} is the set of parents of node X_i.
Given X , we seek to recover the latent DAG topology structure for the joint probability distribution (Hoyer et al., 2008; Peters et al., 2017). Generally, X is modeled via a structural equation model (SEM) with the form
Xi = fi(PaG(Xi)) +Ni, (i = 1, ..., p),
where fi is an arbitrary function representing the relation between Xi and its parents, Ni is the jointly independent noise variable.
In this paper, we focus on the linear SEM defined by Xi = Xwi +Ni, (i = 1, ..., p),
where w_i ∈ R^p is a weighted column vector. Let W = [w_1, . . . , w_p] ∈ R^{p×p} be the weighted adjacency matrix and N = [n_1, . . . , n_p] ∈ R^{n×p} be an additive independent noise matrix, where n_i consists of n i.i.d. observations of the noise variable N_i. Then the linear SEM model can be formulated as
X = XW +N . (1)
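As a minimal sketch of how observations can be drawn from model (1): since X = XW + N, we have X = N(I − W)^{-1} whenever the graph is a DAG. The function below is illustrative (the noise distribution and weights are placeholders, not the paper's exact experimental settings).

```python
import numpy as np

def simulate_linear_sem(W, n, noise_std=1.0, rng=None):
    """Sample n rows of X satisfying X = X W + N, i.e., X = N (I - W)^{-1}.

    W must correspond to a DAG so that (I - W) is invertible with a
    permuted-triangular structure. Gaussian noise is used for illustration.
    """
    rng = np.random.default_rng(rng)
    p = W.shape[0]
    N = noise_std * rng.standard_normal((n, p))
    return N @ np.linalg.inv(np.eye(p) - W)
```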
We assume the noise deviation of the child variable is approximately larger than that of its parents (see Theorem 2.1 for details). Following this assumption, a classical identifiable form of SEM is the linear-Gaussian SEM, where all Ni are i.i.d. and homoscedastic (Peters & Bühlmann, 2014).
2.2 ALGORITHM MOTIVATION
As shown in McKay et al. (2003); Nicholson (1975), a graph is a DAG if and only if the corresponding weighted adjacency matrix W can be decomposed as
W = P T P^T, (2)
where P is a permutation matrix and T is a strictly upper triangular matrix, i.e., T_{ij} = 0 for all i ≥ j.
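A small numerical illustration of decomposition (2): any matrix built as W = P T P^T with strictly upper-triangular T is nilpotent, hence the weighted graph it defines has no cycles. The numbers here are arbitrary, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 6
T = np.triu(rng.uniform(0.5, 2.0, size=(p, p)), k=1)   # strictly upper triangular
perm = rng.permutation(p)
P = np.eye(p)[:, perm]                                  # permutation matrix
W = P @ T @ P.T

# W is similar to T, hence nilpotent (W^p = 0): the graph defined by W is acyclic.
print(np.allclose(np.linalg.matrix_power(W, p), 0))     # True
```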
We denote the scaled permuted data matrix as X̂ = (1/√n) XP, the scaled permuted noise matrix as N̂ = (1/√n) NP, and the permutation order as [i*_1, i*_2, . . . , i*_p] = [1, 2, . . . , p] P. We can rewrite (1) as
X̂ = X̂ T + N̂.
Then it follows that X̂ = N̂(I − T )−1. (3)
Let
E(N̂^T N̂) = Σ̂_*^2 = Σ̂^T Σ̂, (4)
where Σ̂_*^2 is the covariance matrix of the noise variables and Σ̂ is upper triangular – the Cholesky factor of Σ̂_*^2. Let the diagonal entries of Σ̂ be σ_{i*_1}, σ_{i*_2}, . . . , σ_{i*_p}. We know that σ_{i*_k}^2 is the conditional variance of N_{i*_k}.
Now using (3) and (4), we have the covariance matrix of the permuted data:
Ĉ∗ = E(X̂TX̂) = (I − T )−TE(N̂TN̂)(I − T )−1 = (I − T )−TΣ̂TΣ̂(I − T )−1. (5)
Let L = (I − T)^{-T} Σ̂^T; then Ĉ_* = LL^T, which is the Cholesky factorization of the covariance matrix Ĉ_* since L is lower triangular. Furthermore, the diagonal entries of L are the same as those of Σ̂, i.e., L_{kk} = σ_{i*_k}, so the conditional variances of X_{i*_k} and N_{i*_k} are the same.
The task thus becomes to find the permutation i* = [i*_1, i*_2, . . . , i*_p] and an upper triangular matrix U such that U^{-T} U^{-1} is a good approximation of the empirical estimate of the permuted covariance matrix Ĉ = (1/n) X_{:,i*}^T X_{:,i*}, while U satisfies some additional constraints, such as sparsity.
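The identity (5) can be checked numerically. The following sketch (assuming diagonal noise covariance and variables already in a valid topological order, with arbitrary illustrative numbers) verifies that the Cholesky factor of the population covariance is exactly (I − T)^{-T} Σ̂^T:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 5
T = np.triu(rng.uniform(0.5, 1.5, size=(p, p)), k=1)   # strictly upper-triangular adjacency
sigma = rng.uniform(1.0, 2.0, size=p)                   # independent noise standard deviations
Sigma = np.diag(sigma)                                  # upper-triangular Cholesky factor of the noise covariance

L = np.linalg.inv(np.eye(p) - T).T @ Sigma.T            # (I - T)^{-T} Sigma^T, lower triangular
C = L @ L.T                                             # covariance of the permuted variables

print(np.allclose(np.linalg.cholesky(C), L))            # True: Cholesky factor matches (5)
print(np.allclose(np.diag(L), sigma))                   # diagonal entries are the noise std deviations
```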
2.3 ALGORITHM
We iteratively find the permutation i and calculate U via the Cholesky factorization. Assume that ik−1 = [i1, . . . , ik−1] and Uk−1 = U1:k−1,1:k−1 are settled, and we have
C_{1:k−1,1:k−1} = (1/n) X_{:,i_{k−1}}^T X_{:,i_{k−1}} + λI = U_{k−1}^{-T} U_{k−1}^{-1}, (6)
where λ > 0 is a diagonal augmentation parameter that we will discuss in detail later. Next, we show how to find i_k and the last column of U_k.
For the time being, let us assume i_k is known; we show how to compute the last column of U_k. Let
U_k^{-1} = [ U_{k−1}^{-1}  y_k ; 0  α_k ],
then
[ U_{k−1}^{-1}  y_k ; 0  α_k ]^T [ U_{k−1}^{-1}  y_k ; 0  α_k ] = [ U_{k−1}^{-T} U_{k−1}^{-1}   U_{k−1}^{-T} y_k ; y_k^T U_{k−1}^{-1}   α_k^2 + ‖y_k‖^2 ] = [ (1/n) X_{:,i_{k−1}}^T X_{:,i_{k−1}} + λI   (1/n) X_{:,i_{k−1}}^T X_{:,i_k} ; (1/n) X_{:,i_k}^T X_{:,i_{k−1}}   (1/n) ‖X_{:,i_k}‖^2 + λ ],
Algorithm 1 Causal Discovery via Cholesky Factorization (CDCF)
1: input: data matrix X ∈ R^{n×p}, truncation threshold ω > 0, and tuning parameter γ.
2: output: adjacency matrix A.
3: Set i = [1, 2, . . . , p], R = ‖X‖_{2,∞}^2 and λ = γ (log p / n) R;
4: Set ℓ = argmin {‖X_{:,i_1}‖, ‖X_{:,i_2}‖, . . . , ‖X_{:,i_p}‖};
5: Exchange i_1 and i_ℓ in i; set U_1 = √(n / (‖X_{:,i_ℓ}‖^2 + λ));
6: for k = 2, 3, . . . , p do
7:   for j = k, k + 1, . . . , p do
8:     y_j = (1/n) U_{k−1}^T X_{:,i_{k−1}}^T X_{:,i_j};
9:     α_j = √((1/n) ‖X_{:,i_j}‖^2 + λ − ‖y_j‖^2);
10:  end for
11:  (V) ℓ = argmin_{k≤j≤p} α_j^2;
     (S) ℓ = argmin_{k≤j≤p} ‖U_{k−1} y_j‖_1;
     (VS) ℓ = argmin_{k≤j≤p} ‖U_{k−1} y_j‖_1 √|α_j^2 − (1/(k−1)) ∑_{h=1}^{k−1} 1/[U_{k−1}]_{hh}^2|;
12:  Exchange i_k and i_ℓ in i;
13:  Set U_k = [ U_{k−1}  −(1/α_ℓ) U_{k−1} y_ℓ ; 0  1/α_ℓ ];
14: end for
15: return A = [TRIU(TRUNCATE(U_p, ω))]_{REVERSE(i),REVERSE(i)}.
where the last equality is due to (6). It follows that
y_k = (1/n) U_{k−1}^T X_{:,i_{k−1}}^T X_{:,i_k},  α_k = √((1/n) ‖X_{:,i_k}‖^2 + λ − ‖y_k‖^2). (7)
Direct calculation then gives
U_k = [ U_{k−1}^{-1}  y_k ; 0  α_k ]^{-1} = [ U_{k−1}  −(1/α_k) U_{k−1} y_k ; 0  1/α_k ]. (8)
By (8), once i_k is settled, we can obtain the last column of U_k. It remains to select i_k from {1, . . . , p} \ {i_1, . . . , i_{k−1}}. There are several ways to accomplish this task; we propose three criteria for selecting i_k. First, we compute α_j and y_j by (7) for all possible j (i_j ∈ {1, . . . , p} \ {i_1, . . . , i_{k−1}}). Then we select i_k according to one of the following criteria (a code sketch of this selection step is given after the three criteria):
(V) i_k = argmin_{k≤j≤p} α_j^2. Under the assumption that the noise variance of the child variable is approximately larger than that of its parents, it is natural to select the index with the lowest estimate of the noise variance. This criterion is guaranteed to find the correct permutation i* with high probability, as shown in Section 2.4.
(S) i_k = argmin_{k≤j≤p} ‖U_{k−1} y_j‖_1. Using (3) and (6), we know that U_p is intended to estimate (I − T) Σ̂^{-1}. When the adjacency matrix T is sparse and the noise variables are independent (i.e., Σ̂ is diagonal), we would like to select the index leading to the sparsest column of U_k. This criterion is especially useful when the number of samples is small; see Tables B.1, B.2 and B.3 in the appendix.
(VS) i_k = argmin_{k≤j≤p} ‖U_{k−1} y_j‖_1 √|α_j^2 − (1/(k−1)) ∑_{h=1}^{k−1} 1/[U_{k−1}]_{hh}^2|. We empirically combine criterion (V) and criterion (S) to take both aspects (variance and sparsity) into account. Numerically, we found that this criterion achieves the best performance on real-world data.
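A minimal sketch of one selection step, computing y_j and α_j^2 from (7) for every remaining candidate and scoring it under the three criteria. The helper name, the use of a precomputed (augmented) sample covariance C, and the variable names are ours, for illustration only.

```python
import numpy as np

def score_candidates(C, order, U_prev, candidates):
    """Evaluate y_j, alpha_j^2 (eq. (7)) and the criteria (V), (S), (VS) at one step.

    C is the lambda-augmented sample covariance (1/n) X^T X + lambda*I,
    order = [i_1, ..., i_{k-1}] (already selected), U_prev = U_{k-1}.
    """
    m = len(order)                                            # m equals k-1 in the paper's notation
    scores = {}
    for j in candidates:
        y = U_prev.T @ C[order, j]                            # y_j from equation (7)
        alpha2 = C[j, j] - y @ y                              # alpha_j^2 from equation (7)
        v = alpha2                                            # criterion (V)
        s = np.sum(np.abs(U_prev @ y))                        # criterion (S): ||U_{k-1} y_j||_1
        if m > 0:
            mean_inv_diag = np.mean(1.0 / np.diag(U_prev) ** 2)
            vs = s * np.sqrt(abs(alpha2 - mean_inv_diag))     # criterion (VS)
        else:
            vs = s
        scores[j] = (v, s, vs)
    return scores

# e.g., the next index under criterion (V): min(scores, key=lambda j: scores[j][0])
```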
The diagonal augmentation trick in (6) is commonly used to obtain an invertible and well-conditioned estimate of the covariance matrix (see, e.g., Ledoit & Wolf (2004)). This trick not only ensures that our algorithm does not break down due to singularity of the sample covariance matrix, but also stabilizes the Cholesky factorization, especially when the samples are insufficient. In addition, by setting λ = O(log p / n), the error bound between the population covariance matrix and the augmented sample covariance matrix does not become worse (see Lemma ?? in the appendix). This trick significantly improves the ability to recover the DAG, especially when the samples are insufficient; see Tables B.4, B.5 and B.6 in the appendix.
The detailed algorithm is summarized in Algorithm 1. Some comments and implementation details follow. In line 4, we select the initial index ℓ = argmin {‖X_{:,i_1}‖, ‖X_{:,i_2}‖, . . . , ‖X_{:,i_p}‖}. In line 5, we exchange i_1 and i_ℓ in i and calculate U_1 = √(n / (‖X_{:,i_ℓ}‖^2 + λ)). In lines 6 to 14, we iteratively calculate U_k and update the permutation order i until all indices are settled. In line 15, we truncate U, take its strict upper triangular part (denoted by "TRIU"), and re-permute the predicted adjacency matrix back to the original order according to the permutation order i. Specifically, the truncation is done column-wise. By (8), the value of [U_p]_{:,k} is inversely proportional to α_k. So, for column k, we set ω_k = ω/α_k and do the truncation: [U_p]_{ik} is set to zero if |[U_p]_{ik}| < ω_k. On output, node i connects to node j in G if |A_{ij}| > 0.
Time Complexity. Note that we do not have to re-compute the matrix product X_{:,i_{k−1}}^T X_{:,i_j} in line 8, since we can compute C once at the beginning at a cost of O(p^2 n). Besides, at step k we have already calculated U_{k−2}^T X_{:,i_{k−2}}^T X_{:,i_j} at the previous step, so we only need to compute the last entry of y_j, which is an inner product between two k-dimensional vectors, at a cost of O(p) in the worst case. Overall, the time complexity of CDCF is O(p^3 + p^2 n). When n > p, the complexity becomes O(p^2 n), which is the same as the cost of computing the covariance matrix. Additionally, the inner loop (lines 7 to 10) of CDCF can be executed in parallel, which makes the algorithm friendly to GPUs and suitable for large-scale calculations.
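To make the procedure concrete, here is a compact NumPy sketch of CDCF with criterion (V). This is our own illustrative re-implementation of Algorithm 1, not the authors' released code: for clarity it omits the incremental caching of cross products (so it does not attain the stated O(p^2 n + p^3) cost), and the re-indexing at the end is one straightforward way to map the ordered-space estimate back to the original variable indices.

```python
import numpy as np

def cdcf_v(X, omega=3.5, gamma=1.0):
    """Illustrative sketch of Algorithm 1 with criterion (V); |A[i, j]| > 0 reads as an edge i -> j."""
    n, p = X.shape
    lam = gamma * (np.log(p) / n) * np.max(np.sum(X**2, axis=1))  # gamma * (log p / n) * ||X||_{2,inf}^2
    C = X.T @ X / n + lam * np.eye(p)                             # augmented sample covariance, eq. (6)

    remaining, order = list(range(p)), []
    U = np.zeros((p, p))                                          # upper-triangular factor U_p, built column by column
    alphas = np.zeros(p)
    for k in range(p):
        best_j, best_a2, best_y = None, np.inf, None
        for j in remaining:
            y = U[:k, :k].T @ C[order, j]                         # equation (7)
            a2 = C[j, j] - y @ y
            if a2 < best_a2:                                      # criterion (V): smallest conditional variance
                best_j, best_a2, best_y = j, a2, y
        alphas[k] = np.sqrt(max(best_a2, 1e-12))
        U[:k, k] = -U[:k, :k] @ best_y / alphas[k]                # equation (8)
        U[k, k] = 1.0 / alphas[k]
        order.append(best_j)
        remaining.remove(best_j)

    M = U.copy()                                                  # column-wise truncation with threshold omega / alpha_k
    for k in range(p):
        M[np.abs(M[:, k]) < omega / alphas[k], k] = 0.0
    M = np.triu(M, 1)                                             # strict upper-triangular part

    A = np.zeros((p, p))                                          # back to original indices:
    A[np.ix_(order, order)] = M                                   # entry (a, b) maps to variables (order[a], order[b])
    return A, order
```

For example, `A, _ = cdcf_v(X)` on data simulated as in Section 3.1, with the support of A compared against the ground-truth graph.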
2.4 EXACT DAG STRUCTURE RECOVERY
The following theorem shows that our algorithm is able to recover the DAG exactly with high probability under proper assumptions.
Theorem 2.1 Let x ∈ R^p be a zero-mean random vector and C = E(xx^T) ∈ R^{p×p} be its covariance matrix. Let x_1, . . . , x_n be n independent samples and Ĉ = (1/n) ∑_{k=1}^n x_k x_k^T be the sample covariance estimator. Assume ‖C − Ĉ‖ ≤ ε for some ε > 0. Denote Ĉ_λ = Ĉ + λI, where λ = O(ε) ≥ 0 is a parameter. Let the Cholesky factorizations of C and Ĉ_λ be C = LL^T and Ĉ_λ = L̂L̂^T, respectively, where L and L̂ are both lower triangular. For the linear SEM model (1), assume (2) and (4), and for k ∈ Pa_G(j), δ = inf_{k∈Pa_G(j)} δ_{jk} > 0, where
δ_{jk} = σ_{i*_j}^2 + ‖Σ̂_n [(I − T)^{-1}]_{k:j−1,k}‖^2 − σ_{i*_k}^2.
If δ ≥ 4(ε + λ) and ‖L^{-1}‖^2(ε + λ) < 3/4, then CDCF-V is able to recover P exactly. In addition, it holds that
‖TRIU(U_p) − T‖_max ≤ 4‖Σ̂_*^{-1}(I − T)^T‖_{2,∞}^2 ‖(I − T)Σ̂_*^{-T}‖_{2,∞} (ε + λ), where TRIU(U_p) stands for the strictly upper triangular part of U_p, and U_p is the output of the outer loop of Algorithm 1 with criterion (V).
From Theorem 2.1, we know that when T is sparse, we may recover its topology structure by truncating U_p.
Proposition 1 Let N_{i,:} be independent bounded, or sub-Gaussian, or regular polynomial-tail; then for n > N(ε), it holds that ‖Ĉ_xx − C_xx‖ ≤ ε w.h.p. Specifically,
N(ε) ≥ C_1 log p (‖(I − T)^{-1}‖^2 ‖C_nn‖ / ε)^2, for the bounded class;
N(ε) ≥ C_2 p (‖(I − T)^{-1}‖^2 ‖C_nn‖ / ε)^2, for the sub-Gaussian class;
N(ε) ≥ C_3 p (‖(I − T)^{-1}‖^2 ‖C_nn‖ / ε)^{2(1+r^{-1})}, for the regular polynomial-tail class.
The proofs are provided in Appendix A. The theorem and proposition also indicate that the sample complexity of our algorithm is O(p). This sample complexity is better than those of previous methods; see Table 2.1 for a detailed comparison.
3 EXPERIMENTS
In this section, we apply our algorithm to synthetic data sets, proteins data set and knowledge base data set, respectively, to illustrate the efficiency and effectiveness of our algorithm.
3.1 LINEAR SEM
We evaluate the proposed methods on simulated graphs from two well-known ensembles of random graph types: Erdös–Rényi (ER) (Gilbert, 1959) and Scale-free (SF) (Barabási & Albert, 1999). The average edge number per node is denoted after the graph type. For example, ER2 represents two edges per node on average. After the graph structure is settled, we assign uniformly random edge weights to obtain a weight matrix W . We generate the observation data X from the linear SEM with three noise distributions: Gaussian, Gumbel, Exponential.
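A sketch of the graph-generation step described above (our reading of "ERk" as roughly k·p expected edges; the exact convention, weight range, and sign assignment may differ from the paper's):

```python
import numpy as np

def random_er_dag(p, edges_per_node=2, w_range=(0.5, 2.0), rng=None):
    """Random ER DAG with uniformly weighted edges (e.g., ER2 for edges_per_node=2)."""
    rng = np.random.default_rng(rng)
    prob = min(1.0, 2.0 * edges_per_node / (p - 1))            # edge probability over the p(p-1)/2 pairs
    mask = np.triu(rng.random((p, p)) < prob, k=1)             # acyclic skeleton in a fixed order
    weights = rng.uniform(*w_range, size=(p, p)) * rng.choice([-1.0, 1.0], size=(p, p))
    perm = rng.permutation(p)
    P = np.eye(p)[:, perm]                                     # hide the generating order
    return P @ (mask * weights) @ P.T
```

Observations can then be drawn with a linear-SEM sampler such as the `simulate_linear_sem` sketch in Section 2.1 (replacing the Gaussian noise with Gumbel or Exponential noise as needed).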
We chose our baseline methods as NOTEARS (Zheng et al., 2018), DAG-GNN (Yu et al., 2019), CORL (Wang et al., 2021), NPVAR (Gao et al., 2020), and EQVAR (Chen et al., 2019). Other methods such as PC algorithm (Spirtes et al., 2000), LiNGAM (Shimizu et al., 2006), FGS (Ramsey et al., 2017), MMHC (Tsamardinos et al., 2006), L1OBS (Schmidt et al., 2007), CAM (Bühlmann et al., 2013), RL-BIC2 (Zhu et al., 2020), A*LASSO (Xiang & Kim, 2013), LISTEN (Ghoshal & Honorio, 2018), US (Park, 2020) perform worse than or approximately equal to the selected baselines, and the results can be found in the corresponding papers.
Table 3.1 presents the structural Hamming distance (SHD) of the baseline methods and our method on 3000 samples (n = 3000). The number of nodes p is noted in the first column; graph type and edge level are noted in the second column. We only report the SHD of the different algorithms due to the page limitation; other metrics such as true positive rate (TPR), false discovery rate (FDR), false positive rate (FPR), and F1 score show similar comparative behavior to SHD. We also tested bottom-up EQVAR, which is equivalent to LISTEN; its result is worse than top-down EQVAR (EV-TD) in this synthetic experiment, so we do not include it in the table. For p = 1000 graphs, we only report the results of EV-TD and CDCF, since the other algorithms spend too much time (longer than a week) to recover a DAG. We test variations of our algorithm according to the criteria (V, S, VS) introduced in Section 2.3, with the diagonal augmentation trick denoted by a "+" postfix. For example, "CDCF-V" means CDCF with criterion V and λ = 0, and "CDCF-V+" means CDCF with criterion V and λ = O(log p / n). The implementation details are in Appendix B. We report the results of CDCF-V+ here; the results of the other CDCF variations can be found in Appendix Table B.4. We run our methods on ten randomly generated graphs and report the mean and variance in the table. Figure 3.1 plots the SHD results for recovering 100-node graphs from different sample sizes. We choose EV-TD and high-dimension top-down (EV-HTD) as baselines when p > n and p ≤ n, respectively. As the results show, CDCF-V+ achieves significantly better performance compared with previous baselines.
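For reference, SHD counts extra, missing, and reversed edges between the estimated and ground-truth graphs. A minimal sketch of one common convention (our helper, for illustration; the exact counting convention used in the paper's evaluation scripts may differ slightly):

```python
import numpy as np

def shd(A_est, A_true):
    """Structural Hamming distance between two DAG adjacency supports.

    A reversed edge counts once; extra and missing edges count once each.
    Only the nonzero pattern of the weighted adjacency matrices is used.
    """
    E, T = (A_est != 0).astype(int), (A_true != 0).astype(int)
    reversed_edges = ((E == 1) & (E.T == 0) & (T == 0) & (T.T == 1)).sum()
    extra = ((E == 1) & (T == 0) & (T.T == 0)).sum()
    missing = ((E == 0) & (E.T == 0) & (T == 1)).sum()
    return int(extra + missing + reversed_edges)
```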
Table 3.2 shows the running time, tested on a single 2.3 GHz Intel Core i5 CPU. Moreover, parallel computation of the matrix multiplications on a GPU makes the algorithm even faster: recovering 5000- and 10000-node graphs from 3000 samples on an Nvidia A100 GPU takes approximately 400 and 2400 seconds, respectively. For comparison, EV-TD costs approximately 100 hours to recover a 1000-node DAG from 3000 samples. As illustrated in the table, CDCF is approximately dozens or hundreds of times faster than EV-TD and LISTEN, and tens of thousands of times faster than NOTEARS, since CDCF does not have to update parameters with gradients.
Due to the page limitation, further experiments and discussion of the ablation study (Figures B.3 to B.14, Tables B.1 to B.6), the choice of λ (Tables B.7 to B.10), and the performance under different noise distributions (Figures B.1, B.2) and deviations (Tables B.11, B.12, B.13) are given in Appendix B.
3.2 PROTEINS DATA SET
We consider a bioinformatics data set (Sachs et al., 2005) consisting of continuous measurements of expression levels of proteins and phospholipids in human immune system cells. This is a widely used data set for research on graphical models, with experimental annotations accepted by the biological research community. Following the settings of previous algorithms, we noticed that different papers adopted different sets of observations. To include them all, we consider both the 853 observational samples from the "CD3, CD28" stimulation tested by Teyssier & Koller (2005); Lachapelle et al. (2020); Zhu et al. (2020), and all 7466 samples from nine different stimulations tested by Zheng et al. (2018; 2020); Yu et al. (2019).
We report the experimental results for both settings in Table 3.3. The implementation code of the baselines is introduced in the appendix, and we use the default hyper-parameter settings provided in their code. The evaluation metrics are FDR, TPR, FPR, SHD, the number of predicted edges (N), precision (P), and F1 score. As the recall is equal to TPR, we do not include it in the table. In both settings, CDCF-VS+ achieves state-of-the-art performance.¹ Several reasons prevent the recovered graph from matching the expected one exactly. The ground-truth graph suggested by the paper mixes directed and undirected edges. Under the SEM setting, the node "PKA" behaves much like a leaf node, since most of its edges are undirected, while the ground-truth graph marks them as outgoing edges. Non-linearity does not appear to be a major issue here, since NOTEARS and our algorithm both achieve decent results. That said, we do not deny that extending our algorithm to non-linear representations could further improve the results on this data set.
3.3 KNOWLEDGE BASE DATA SET
We test our algorithm on the FB15K-237 data set (Toutanova et al., 2015), in which knowledge is organized as {Subject, Predicate, Object} triplets. The data set has about 15K entities and 237 types of predicates. In this experiment, we only consider single-hop predicates between entities, which
1For NOTEARS-MLP, Table 3.3 reported the results reproduced by the code provided in Zheng et al. (2020).
leaves 97 predicates. We aim to discover the causal relationships between the predicates. We organize the observation data so that each sample corresponds to an entity together with its position (Subject or Object), and each variable corresponds to a predicate in this knowledge base.
In Figure 3.2, we show the weighted adjacency matrix of the generated graph and several examples with high confidence (larger than 0.5). In the left figure, each axis label shows the first capital letter of the domain of the relations; some letters are replaced with a dot to save space. The exact domain names and the figure with full predicate names are provided in the appendix. The domain clusters are marked by black boxes on the diagonal of the adjacency matrix, and the red boxes mark cross-domain relations worth attention. Consistent with human intuition, the recovered relationships inside a domain are denser than those across domains. Among the cross-domain relations, we found that the predicates in the domain "TV" ("T") have many relations with the domain "Film" ("F"), and the domain "Broadcast" (last row) has many relations with the domain "Music" ("M"). Several examples of the predicted causal relationships are listed on the right side of Figure 3.2; the discovered indication relations between predicates are quite reasonable.
4 CONCLUSION AND FUTURE WORK
In this paper, we proposed a topology search algorithm for the DAG structure recovery problem. Our algorithm improves on existing methods in both time and sample complexity. To be specific, the time complexity of our algorithm is O(p^2 n + p^3), while the fastest previous algorithm is O(p^5 n) (Park, 2020; Gao et al., 2020), where p and n are the numbers of nodes and samples, respectively. Under different assumptions, our algorithm takes O(log(p)) or O(p) samples to exactly recover the DAG structure. Experimental results on synthetic data sets, a proteins data set, and a knowledge base data set demonstrate the efficiency and effectiveness of our algorithm. For the synthetic data sets, compared with previous baselines, our algorithm improves the performance by a significant margin and is at least tens or hundreds of times faster. For the proteins data set, we achieve state-of-the-art performance. For the knowledge base data set, we observe many reasonable structures in the discovered DAG.
The proposed algorithm assumes a linear SEM. Generalizing CDCF to non-linear SEMs would be a valuable and important research topic. Learning a representation of the observed data for better structure reconstruction via the CDCF algorithm, which requires the algorithm to be differentiable, is also an attractive problem. To handle extremely large-scale problems, such as millions of nodes, implementing CDCF with sparse matrix storage and GPU computation is a promising way to further improve computational performance.
A PROOF OF THEOREM 2.1
In this section, we first give several lemmas, then prove Theorem 2.1.
Lemma A.1 Let x ∈ R^p be a zero-mean random vector and C = E(xx^T) ∈ R^{p×p} be its covariance matrix. Let x_1, . . . , x_n be n independent samples and Ĉ = (1/n) ∑_{k=1}^n x_k x_k^T be the sample covariance estimator. Assume ‖C − Ĉ‖ ≤ ε for some ε > 0. Denote Ĉ_λ = Ĉ + λI, where λ = O(ε) ≥ 0 is a parameter. Let the Cholesky factorizations of C and Ĉ_λ be C = LL^T and Ĉ_λ = L̂L̂^T, respectively, where L and L̂ are both lower triangular. If ‖L^{-1}‖^2(ε + λ) < 3/4, then
|‖L_{i,:}‖^2 − ‖L̂_{i,:}‖^2| ≤ ε + λ = O(ε), for 1 ≤ i ≤ p; (9)
|[L^{-1}]_{ij} − [L̂^{-1}]_{ij}| ≤ 4‖L^{-1}‖_{2,∞}^2 ‖L^{-T}‖_{2,∞} (ε + λ) = O(ε), for i > j. (10)
Proof. For all 1 ≤ i ≤ p, we have
|‖L_{i,:}‖^2 − ‖L̂_{i,:}‖^2| = |C_{ii} − [Ĉ_λ]_{ii}| ≤ ‖C − Ĉ_λ‖ ≤ ‖C − Ĉ‖ + λ ≤ ε + λ, (11)
which completes the proof of (9).
Next, we show (10). Let
L^{-1}L̂ = I + F, (I + F)(I + F)^T = I + E. (12)
We know that
L̂^{-1} − L^{-1} = [(I + F)^{-1} − I]L^{-1} = −F(I + F)^{-1}L^{-1}, (13)
E = L^{-1}L̂L̂^T L^{-T} − I = L^{-1}(Ĉ_λ − C)L^{-T}. (14)
Then it follows from (13) that, for i > j,
|[L^{-1}]_{ij} − [L̂^{-1}]_{ij}| ≤ ‖F_{i,1:i−1}‖ ‖[(I + F)^{-1}L^{-1}]_{:,j}‖ ≤ ‖F_{i,1:i−1}‖ ‖(I + F)^{-1}‖ ‖L^{-T}‖_{2,∞}. (15)
First, we give an upper bound for ‖(I + F)^{-1}‖. Using (12), we have (I + F)^{-T}(I + F)^{-1} = (I + E)^{-1}. It follows that
‖(I + F)^{-1}‖ = ‖(I + F)^{-T}(I + F)^{-1}‖^{1/2} = ‖(I + E)^{-1}‖^{1/2} ≤ 1/√(1 − ‖E‖) ≤ 1/√(1 − ‖L^{-1}‖^2 ‖Ĉ_λ − C‖), (16)
where the last inequality uses (14).
Second, we give an upper bound for ‖F_{i,1:i−1}‖. It follows from the second equality of (12) that
(1 + F_{ii})^2 + ‖F_{i,1:i−1}‖^2 = 1 + E_{ii}. (17)
Therefore,
‖F_{i,1:i−1}‖^2 ≤ |(1 + F_{ii})^2 − 1| + E_{ii} ≤(a) (L̂_{ii}^2 − L_{ii}^2)/L_{ii}^2 + E_{ii} ≤(b) (ε + λ)/L_{ii}^2 + ‖L^{-1}‖_{2,∞}^2 ‖Ĉ_λ − C‖ ≤(c) 2‖L^{-1}‖_{2,∞}^2 (ε + λ), (18)
where (a) uses (12), (b) uses (9) and (14), and (c) uses ‖C − Ĉ‖ ≤ ε. Substituting (18) and (16) into (15), we get
|[L^{-1}]_{ij} − [L̂^{-1}]_{ij}| ≤ 2‖L^{-1}‖_{2,∞}^2 ‖L^{-T}‖_{2,∞} (ε + λ)/√(1 − ‖L^{-1}‖^2(ε + λ)). (19)
The conclusion follows since ‖L^{-1}‖^2(ε + λ) < 3/4.
Theorem A.2 Let x ∈ R^p be a zero-mean random vector and C = E(xx^T) ∈ R^{p×p} be its covariance matrix. Let x_1, . . . , x_n be n independent samples and Ĉ = (1/n) ∑_{k=1}^n x_k x_k^T be the sample covariance estimator. Assume ‖C − Ĉ‖ ≤ ε for some ε > 0. Denote Ĉ_λ = Ĉ + λI, where λ = O(ε) ≥ 0 is a parameter. Let the Cholesky factorizations of C and Ĉ_λ be C = LL^T and Ĉ_λ = L̂L̂^T, respectively, where L and L̂ are both lower triangular. For the linear SEM model (1), assume (2) and (4), and for k ∈ Pa_G(j), δ = inf_{k∈Pa_G(j)} δ_{jk} > 0, where
δ_{jk} = σ_{i*_j}^2 + ‖Σ̂_n [(I − T)^{-1}]_{k:j−1,k}‖^2 − σ_{i*_k}^2.
If δ ≥ 4(ε + λ) and ‖L^{-1}‖^2(ε + λ) < 3/4, then CDCF-V is able to recover P exactly. In addition, it holds that
‖TRIU(U_p) − T‖_max ≤ 4‖Σ̂_*^{-1}(I − T)^T‖_{2,∞}^2 ‖(I − T)Σ̂_*^{-T}‖_{2,∞} (ε + λ),
where TRIU(U_p) stands for the strictly upper triangular part of U_p, and U_p is the output of the outer loop of Algorithm 1 with criterion (V).
Proof. For the SEM model (1), denote Ĉ_* = E((1/n)X̂^T X̂) and Σ̂_*^2 = E((1/n)N̂^T N̂) = Σ̂_n^T Σ̂_n. Then we have (5), i.e.,
Ĉ_* = (I − T)^{-T} Σ̂_*^2 (I − T)^{-1} = (I − T)^{-T} Σ̂_n^T Σ̂_n (I − T)^{-1}. (20)
When the permutation i* = [i*_1, . . . , i*_p] is exactly recovered, U_p in CDCF-V satisfies
Ĉ_λ = (1/n) X_{:,i*}^T X_{:,i*} + λI = U_p^{-T} U_p^{-1}. (21)
Denote i*_j = [i*_1, . . . , i*_j] for all j = 1, . . . , p. Consider the kth diagonal entries of (20) and (21). By direct calculation, we get
[Ĉ_*]_{kk} = [(I − T)^{-1}]_{:,k}^T Σ̂_n^T Σ̂_n [(I − T)^{-1}]_{:,k} = σ_{i*_k}^2 + ‖u_k‖^2, (22)
[Ĉ_λ]_{kk} = (1/n)‖X_{i*_k}‖^2 + λ = 1/U_{kk}^2 + ‖û_k‖^2, (23)
where
u_k = [Σ̂_n]_{1:k−1,1:k−1} (I_{k−1} − T_{1:k−1,1:k−1})^{-1} T_{1:k−1,k},  û_k = (1/n) U_{k−1}^T X_{:,i*_{k−1}}^T X_{:,i*_k}. (24)
Using ‖C − Ĉ‖ ≤ ε, we have
|[Ĉ_*]_{kk} − [Ĉ_λ]_{kk}| ≤ ‖C − Ĉ_λ‖ ≤ ‖C − Ĉ‖ + λ ≤ ε + λ. (25)
By Lemma A.1, we have
|‖u_k‖^2 − ‖û_k‖^2| ≤ ε + λ. (26)
Using (22), (23), (25) and (26), we get
|σ_{i*_k}^2 − 1/U_{kk}^2| ≤ 2(ε + λ). (27)
Assume that i*_1, . . . , i*_{k−1} (k ≥ 1) are all correctly recovered. Without loss of generality, for k ∈ Pa_G(j), we also assume T_{k:j−1,j} ≠ 0 (otherwise, the jth and kth columns are exchangeable, and i forms another topology order equivalent to the same DAG (Sedgewick & Wayne, 2011)). Then we have for k ∈ Pa_G(j) that
(1/n)‖X_{i*_j}‖^2 + λ − ‖[û_j]_{1:k−1}‖^2
 (a)= [Ĉ_*]_{jj} + [Ĉ_λ]_{jj} − [Ĉ_*]_{jj} − ‖[û_j]_{1:k−1}‖^2
 (b)≥ [Ĉ_*]_{jj} − (ε + λ) − ‖[u_j]_{1:k−1}‖^2 − (ε + λ)
 (c)= σ_{i*_j}^2 + ‖[u_j]_{k:j−1}‖^2 − 2(ε + λ)
 (d)≥ σ_{i*_k}^2 + δ − 2(ε + λ)
 (e)= [Ĉ_*]_{kk} − ‖u_k‖^2 + δ − 2(ε + λ)
 (f)≥ [Ĉ_λ]_{kk} − ‖û_k‖^2 + δ − 4(ε + λ)
 (g)= (1/n)‖X_{i*_k}‖^2 + λ − ‖û_k‖^2 + δ − 4(ε + λ),
where (a) uses (23), (b) and (f) use (25) and Lemma A.1, (c) uses (22), (d) is due to the assumption σ_{i*_j} ≥ σ_{i*_k} for k ∈ Pa_G(j), (e) uses (22), and (g) uses (23). Therefore, using δ > 4(ε + λ), we have
(1/n)‖X_{i*_j}‖^2 + λ − ‖[û_j]_{1:k−1}‖^2 > (1/n)‖X_{i*_k}‖^2 + λ − ‖û_k‖^2,
which implies that i*_k can be correctly recovered. Overall, CDCF-V is able to recover the permutation P.
The upper bound for ‖TRIU(U_p) − T‖_max follows from Lemma A.1. The proof is completed.
Proposition 2 Let N_{i,:} be independent bounded, or sub-Gaussian,² or regular polynomial-tail;³ then for n > N(ε), it holds that ‖Ĉ_xx − C_xx‖ ≤ ε w.h.p. Specifically,
N(ε) ≥ C_1 log p (‖(I − T)^{-1}‖^2 ‖C_nn‖ / ε)^2, for the bounded class;
N(ε) ≥ C_2 p (‖(I − T)^{-1}‖^2 ‖C_nn‖ / ε)^2, for the sub-Gaussian class;
N(ε) ≥ C_3 p (‖(I − T)^{-1}‖^2 ‖C_nn‖ / ε)^{2(1+r^{-1})}, for the regular polynomial-tail class.
Proof. For the SEM model (1), we have
‖Ĉ_xx − C_xx‖ ≤ ‖(I − T)^{-1}‖^2 ‖Ĉ_nn − C_nn‖ ≤ ‖(I − T)^{-1}‖^2 ‖C_nn‖ ‖C_nn^{-1/2} Ĉ_nn C_nn^{-1/2} − I‖, (28)
where C_xx = E(xx^T) and C_nn = E(nn^T) are the covariance matrices of x and n, respectively, and Ĉ_xx, Ĉ_nn are the corresponding sample covariance matrices. The three results listed above follow from Corollary 5.52 and Theorem 5.39 in Vershynin (2010) and Theorem 1.1 in Srivastava & Vershynin (2013), respectively.
²A random vector z is isotropic and sub-Gaussian if E(zz^T) = I and there exists a constant C > 0 such that P(|v^T z| > t) ≤ exp(−Ct^2) for any unit vector v. Here, by "N_{i,:} is sub-Gaussian" we mean that C_nn^{-1/2} N_{i,:}^T is an isotropic and sub-Gaussian random vector.
³A random vector z is isotropic and regular polynomial-tail if E(zz^T) = I and there exist constants r > 1, C > 0 such that P(‖Vz‖^2 > t) ≤ Ct^{−1−r} for any orthogonal projection V and any t > C · rank(V). Here, by "N_{i,:} is regular polynomial-tail" we mean that C_nn^{-1/2} N_{i,:}^T is an isotropic and regular polynomial-tail random vector.
B ADDITIONAL EXPERIMENTS
Here we provide implementation details and additional experiment results.
Figures B.1 and B.2 provide the results for Gumbel and Exponential noise, respectively. As the results show, our algorithm still performs better than the EqVar method under different noise types.
Tables B.1, B.2, B.3, B.4, B.5, and B.6 give results of our CDCF variants on 100-node graphs over different sample sizes and variances. As noted in Algorithm 1, V, S, and VS are the different criteria for selecting the current column, and "+" indicates that the sample covariance matrix is augmented with the scalar matrix (log p / n) I. The truncation threshold on column i is ω_i = 3.5/α_i, where α_i is the diagonal value of the Cholesky factor. According to the results, the "V+" variant achieves the best performance when the sample size is relatively large. When the sample size is small, the sparsity-based criterion brings a very effective performance improvement. We also test different choices of λ = β log p / n with β ∈ {0.0, 1.0, ..., 9.0}; the results are given in Tables B.7, B.8, B.9, and B.10. Empirically, β ∈ {1.0, 2.0} achieves better results. In practice, one can sample a relatively small labeled sub-graph of the DAG to tune the hyper-parameter and then apply the chosen setting to the large unlabeled DAG.
To test the performance limits of our methods, we report SHD over different sample and node numbers in Figures B.3 to B.14, where the x-axis represents the sample number (in thousands), the y-axis denotes the node number, and the color represents the value of log2(SHD + 1) (the brighter the better). We provide the figures for CDCF-V+, CDCF-S+, and CDCF-VS+ on various graph and noise types. The figures show mean results over ten random seeds. They show that the graph can be exactly recovered for 800 nodes with approximately 6000 samples. Comparing CDCF-V+ with CDCF-S+, we find that criterion (S) hurts performance when the sample number is relatively large. When the sample number is in {1500, 3000} and the node number is in {400, 800}, CDCF-S+ achieves better performance. The same trend can also be seen in Tables B.1, B.2, and B.3. CDCF-VS+ alleviates the poor performance of CDCF-S when the data is sufficient and achieves good performance on the real-world data set.
We also test the performance on linear SEMs with monotonically increasing noise variance. Concretely, assuming the topology order is i = {i_1, ..., i_p}, we set the noise variance of node k to σ_k = 1 + i_k/p. We test Gaussian, Gumbel, and Exponential noise with this monotonically increasing noise variance. The results are reported in Tables B.11, B.12, and B.13. As the results indicate, even with different noise levels, our algorithms achieve good performance and are able to exactly recover the DAG structure when the data is sufficient.
For the knowledge base data set, the axis labels of Figure 3.2 are ‘Film’, ‘People’, ‘Location’, ‘Music’, ‘Education’, ‘Tv’, ‘Medicine’, ‘Sports’, ‘Olympics’, ‘Award’, ‘Time’, ‘Organization’, ‘Language’, ‘MediaCommon’, ‘Influence’, ‘Dataworld’, ‘Business’, ‘Broadcast’, from left to right on the x-axis and top to bottom on the y-axis, respectively. The adjacency matrix plotted here is re-permuted so that relations in the same domain are close to each other, and the block of the adjacency matrix inside each domain is kept upper triangular. Such a topology is equivalent to the generated matrix in the original order.
Baseline Implementations The baselines are implemented via the codes provided from the following links:
• NOTEARS, NOTEARS-MLP: https://github.com/xunzheng/notears
• NPVAR: https://github.com/MingGao97/NPVAR
• EQVAR, LISTEN: https://github.com/WY-Chen/EqVarDAG
• CORL: https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle
• DAG-GNN: https://github.com/fishmoon1234/DAG-GNN
2. What are the strengths of the proposed approach, particularly in terms of its ability to discover causal relationships?
3. What are the weaknesses of the paper, especially regarding its assumptions and limitations?
4. Do you have any concerns or questions regarding the paper's methodology or results? | Summary Of The Paper
Review | Summary Of The Paper
In this paper the authors consider the problem of learning directed graphical models in the linear SEM setting. The authors use iterative Cholesky factorization of the covariance matrix in order to learn the causal relationship from the data.
Review
The paper presents a new algorithm to discover the causal relationship in a directed acyclic graph. The authors restrict themselves to the linear SEM setting under the assumption that the conditional noise variance for child nodes is larger than that of the parent nodes which ensures that the problem is identifiable. Overall I believe that this is a good paper, but I need a few clarifications.
The authors say that the conditional noise variance of child nodes is "approximately" larger than that of the parent nodes. What approximation are the authors talking about?
While evaluating the algorithm, the authors generate ER graphs. How do the authors ensure that the resulting graphs are DAGs? |
ICLR | Title
Sparsity Winning Twice: Better Robust Generalization from More Efficient Training
Abstract
Recent studies demonstrate that deep networks, even robustified by the state-ofthe-art adversarial training (AT), still suffer from large robust generalization gaps, in addition to the much more expensive training costs than standard training. In this paper, we investigate this intriguing problem from a new perspective, i.e., injecting appropriate forms of sparsity during adversarial training. We introduce two alternatives for sparse adversarial training: (i) static sparsity, by leveraging recent results from the lottery ticket hypothesis to identify critical sparse subnetworks arising from the early training; (ii) dynamic sparsity, by allowing the sparse subnetwork to adaptively adjust its connectivity pattern (while sticking to the same sparsity ratio) throughout training. We find both static and dynamic sparse methods to yield win-win: substantially shrinking the robust generalization gap and alleviating the robust overfitting, meanwhile significantly saving training and inference FLOPs. Extensive experiments validate our proposals with multiple network architectures on diverse datasets, including CIFAR-10/100 and TinyImageNet. For example, our methods reduce robust generalization gap and overfitting by 34.44% and 4.02%, with comparable robust/standard accuracy boosts and 87.83%/87.82% training/inference FLOPs savings on CIFAR-100 with ResNet18. Besides, our approaches can be organically combined with existing regularizers, establishing new state-of-the-art results in AT. Codes are available in https: //github.com/VITA-Group/Sparsity-Win-Robust-Generalization.
N/A
1 INTRODUCTION
Deep neural networks (DNNs) are notoriously vulnerable to maliciously crafted adversarial attacks. To conquer this fragility, numerous adversarial defense mechanisms are proposed to establish robust neural networks (Schmidt et al., 2018; Sun et al., 2019; Nakkiran, 2019; Raghunathan et al., 2019; Hu et al., 2019; Chen et al., 2020c; 2021e; Jiang et al., 2020). Among them, adversarial training (AT) based methods (Madry et al., 2017; Zhang et al., 2019) have maintained the state-of-the-art robustness. However, the AT training process usually comes with order-of-magnitude higher computational costs than standard training, since multiple attack iterations are needed to construct strong adversarial examples (Madry et al., 2018b). Moreover, AT was recently revealed to incur severe robust generalization gaps (Rice et al., 2020) between its training and testing accuracies, as shown in Figure 1, and to require significantly more training samples (Schmidt et al., 2018) to generalize robustly.
*Equal Contribution.
In response to those challenges, Schmidt et al. (2018); Lee et al. (2020); Song et al. (2019) investigate the possibility of improving generalization by leveraging advanced data augmentation techniques, which further amplifies the training cost of AT. Recent studies (Rice et al., 2020; Chen et al., 2021e) found that early stopping, or several smoothness/flatness-aware regularizations (Chen et al., 2021e; Stutz et al., 2021; Singla et al., 2021), can bring effective mitigation.
In this paper, a new perspective has been explored to tackle the above challenges by enforcing appropriate sparsity patterns during AT. The connection between robust generalization and sparsity is mainly inspired by two facts. On one hand, sparsity can effectively regularize the learning of over-parameterized neural networks, hence potentially benefiting both standard and robust generalization (Balda et al., 2019). As demonstrated in Figure 1, with the increase of sparsity levels, the robust generalization gap is indeed substantially shrunk while the robust overfitting is alleviated. On the other hand, one key design philosophy that facilitates this consideration is the lottery ticket hypothesis (LTH) (Frankle & Carbin, 2019). The LTH advocates the existence of highly sparse and separately trainable subnetworks (a.k.a. winning tickets), which can be trained from the original initialization to match or even surpass the corresponding dense networks’ test accuracies. These facts point out a promising direction that utilizing proper sparsity is capable of boosting robust generalization while maintaining competitive standard and robust accuracy.
Although sparsity is beneficial, the current methods (Frankle & Carbin, 2019; Frankle et al., 2020; Renda et al., 2020) often empirically locate sparse critical subnetworks by Iterative Magnitude Pruning (IMP). It demands excessive computational cost even for standard training due to the iterative train-prune-retrain process. Recently, You et al. (2020) demonstrated that these intriguing subnetworks can be identified at the very early training stage using one-shot pruning, which they term as Early Bird (EB) tickets. We show the phenomenon also exists in the adversarial training scheme. More importantly, we take one leap further to reveal that even in adversarial training, EB tickets can be drawn from a cheap standard training stage, while still achieving solid robustness. In other words, the Early Bird is also a Robust Bird that yields an attractive win-win of efficiency and robustness - we name this finding as Robust Bird (RB) tickets.
Furthermore, we investigate the role of sparsity in a setting where the sparse connections of subnetworks change on the fly. Specifically, we initialize a subnetwork with random sparse connectivity and then optimize its weights and sparse topologies simultaneously, while sticking to a fixed small parameter budget. This training pipeline, called Flying Bird (FB), is motivated by the latest sparse training approaches (Evci et al., 2020b) and further reduces the robust generalization gap in AT while ensuring low training costs. Moreover, an enhanced algorithm, Flying Bird+, is proposed to dynamically adjust the network capacity (or sparsity) to pursue superior robust generalization, at a small extra cost in training efficiency. Our contributions can be summarized as follows:
• We perform a thorough investigation to reveal that introducing appropriate sparsity into AT is an appealing win-win, specifically: (1) substantially alleviating the robust generalization gap; (2) maintaining comparable or even better standard/robust accuracies; and (3) enhancing the AT efficiency by training only compact subnetworks.
• We explore two alternatives for sparse adversarial training: (i) the Robust Bird (RB) training that leverages static sparsity, by mining the critical sparse subnetwork at the early training stage, and using only the cheapest standard training; (ii) the Flying Bird (FB) training that allows for dynamic sparsity, which jointly optimizes both network weights and their sparse connectivity during AT, while sticking to the same sparsity level. We also discuss a FB variant called Flying Bird+ that adaptively adjusts the sparsity level on demand during AT.
• Extensive experiments are conducted on CIFAR-10, CIFAR-100, and Tiny-ImageNet with diverse network architectures. Specifically, our proposals obtain 80.16% ∼ 87.83% training FLOPs and 80.16% ∼ 87.83% inference FLOPs savings, shrink robust generalization from 28.00% ∼ 63.18% to 4.43% ∼ 34.44%, and boost the robust accuracy by up to 0.60% and the standard accuracy by up to 0.90%, across multiple datasets and architectures. Meanwhile, combining our sparse adversarial training frameworks with existing regularizations establishes the new state-of-the-art results.
2 RELATED WORK
Adversarial training and robust generalization/overfitting. Deep neural networks are vulnerable to imperceptible adversarial perturbations. To deal with this drawback, numerous defense
approaches have been proposed (Goodfellow et al., 2015; Kurakin et al., 2016; Madry et al., 2018a). Although many methods (Liao et al., 2018; Guo et al., 2018a; Xu et al., 2017; Dziugaite et al., 2016; Dhillon et al., 2018a; Xie et al., 2018; Jiang et al., 2020) were later found to result from obfuscated gradients (Athalye et al., 2018), adversarial training (AT) (Madry et al., 2018a), together with some of its variants (Zhang et al., 2019; Mosbach et al., 2018; Dong et al., 2018), remains as one of the most effective yet costly approaches.
A pitfall of AT, i.e., the poor robust generalization, was spotted recently. Schmidt et al. (2018) showed that AT intrinsically demands a larger sample complexity to identify well-generalizable robust solutions. Therefore, data augmentation (Lee et al., 2020; Song et al., 2019) is an effective remedy. Stutz et al. (2021); Singla et al. (2021) related robust generalization gap to curvature/flatness of loss landscapes. They introduced weight perturbing approaches and smooth activation functions to reshape the loss geometry and boost robust generalization ability. Meanwhile, the robust overfitting (Rice et al., 2020) in AT usually happens with or as a result of inferior generalization. Previous studies (Rice et al., 2020; Chen et al., 2021e) demonstrated that conventional regularization-based methods (e.g., weight decay and simple data augmentation) can not alleviate robust overfitting. Then, numerous advanced algorithms (Zhang et al., 2020; 2021b; Zhou et al., 2021; Bunk et al., 2021; Chen et al., 2021a; Dong et al., 2021; Zi et al., 2021; Tack et al., 2021; Zhang et al., 2021a) arose in the last half year to tackle the overfitting, using data manipulation, smoothened training, and else. Those methods work orthogonally to our proposal as evidenced in Section 4.
Another group of related literature lies in the field of sparse robust networks (Guo et al., 2018b). These works either treat model compression as a defense mechanism (Wang et al., 2018; Gao et al., 2017; Dhillon et al., 2018b) or pursue robust and efficient sub-models that can be deployed in resource-limited platforms (Gui et al., 2019; Ye et al., 2019; Sehwag et al., 2019). Compared to those inference-focused methods, our goal is fundamentally different: injecting sparsity during training to reduce the robust generalization gap while improving training efficiency.
Static pruning and dynamic sparse training. Pruning (LeCun et al., 1990; Han et al., 2015a) serves as a powerful technique to eliminate the weight redundancy in over-parameterized DNNs, aiming to obtain storage and computational savings with almost undamaged performance. It can be roughly divided into two categories based on how the sparse patterns are generated: (i) static pruning, which removes parameters (Han et al., 2015a; LeCun et al., 1990; Han et al., 2015b) or substructures (Liu et al., 2017; Zhou et al., 2016; He et al., 2017) based on optimized importance scores (Zhang et al., 2018; He et al., 2017) or heuristics such as weight magnitude (Han et al., 2015a), gradient (Molchanov et al., 2019), or Hessian (LeCun et al., 1990) statistics. The discarded elements usually do not participate in the next round of training or pruning. Static pruning can be flexibly applied prior to training, such as SNIP (Lee et al., 2019), GraSP (Wang et al., 2020) and SynFlow (Tanaka et al., 2020); during training (Zhang et al., 2018; He et al., 2017); or post training (Han et al., 2015a), for different trade-offs between training cost and pruned models’ quality. (ii) dynamic sparse training, which updates model parameters and sparse connectivities at the same time, starting from a randomly sparsified subnetwork (Molchanov et al., 2017). During training, the removed elements have a chance to be grown back if they potentially benefit predictions. Among the large family of sparse training methods (Mocanu et al., 2016; Evci et al., 2019; Mostafa & Wang, 2019; Liu et al., 2021a; Dettmers & Zettlemoyer, 2019; Jayakumar et al., 2021; Raihan & Aamodt, 2020), the recent methods of Evci et al. (2020a) and Liu et al. (2021b) lead to the state-of-the-art performance.
A special case of static pruning, Lottery tickets hypothesis (LTH) (Frankle & Carbin, 2019), demonstrates the existence of sparse subnetworks in DNNs, which are capable of training in isolation and reach a comparable performance of their dense counterpart. The LTH indicates the great potential to train a sparse network from scratch without sacrificing expressiveness and has recently drawn lots of attention from diverse fields (Chen et al., 2020b;a; 2021g;f;d;c;b; 2022; Ding et al., 2022; Gan et al., 2021) beyond image recognition (Zhang et al., 2021d; Frankle et al., 2020; Redman et al., 2021).
3 METHODOLOGY
3.1 PRELIMINARIES
Adversarial training (AT). As one of the most widely adopted defense mechanisms, adversarial training (Madry et al., 2018b) effectively tackles the vulnerability to maliciously crafted adversarial samples. As formulated in Equation 1, AT (specifically PGD-AT) replaces the original empirical risk minimization with a min-max optimization problem:
min_θ E_{(x,y)∈D} L(f(x; θ), y)  =⇒  min_θ E_{(x,y)∈D} max_{‖δ‖_p ≤ ε} L(f(x + δ; θ), y),    (1)
where f(x; θ) is a network with parameters θ. Input data x and its associated label y from the training set D are used to first generate adversarial perturbations δ and then minimize the empirical classification loss L. To meet the imperceptibility requirement, the ℓ_p norm of δ is constrained by a small constant ε. Projected Gradient Descent (PGD), i.e., δ_{t+1} = proj_P[δ_t + α · sgn(∇_x L(f(x + δ_t; θ), y))], is usually utilized to produce the adversarial perturbations with step size α; it works in an iterative manner, leveraging local first-order information about the network (Madry et al., 2018b).
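To make the formulation concrete, the following is a minimal PyTorch-style sketch of the PGD inner maximization and one outer PGD-AT update. The function names, the BatchNorm mode switching, and the default hyper-parameters (ε = 8/255, α = 2/255, 10 steps) are illustrative choices mirroring the setup described later, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: craft an L-infinity-bounded perturbation delta with PGD."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            # Signed ascent step, then project back onto the eps-ball and the valid pixel range.
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = (x + delta).clamp(0.0, 1.0) - x
    return delta.detach()

def pgd_at_step(model, optimizer, x, y):
    """Outer minimization: one PGD-AT update on a single mini-batch."""
    model.eval()                       # freeze BN statistics while crafting the attack
    delta = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```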
Sparse subnetworks. Following the routine notations in Frankle & Carbin (2019), f(x; m ⊙ θ) denotes a sparse subnetwork with a binary pruning mask m ∈ {0, 1}^{‖θ‖_0}, where ⊙ is the elementwise product. Intuitively, it is a copy of the dense network f(x; θ) with a portion of its weights fixed to zero.
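As an illustration of the f(x; m ⊙ θ) notation, a hypothetical helper that enforces a given set of binary masks is sketched below; in practice such masks are usually re-applied after every optimizer step (or realized through gradient masking) so that pruned weights stay at zero.

```python
import torch

@torch.no_grad()
def apply_masks(model, masks):
    """Realize f(x; m * theta): zero out pruned weights so forward passes use the sparse subnetwork."""
    for name, param in model.named_parameters():
        if name in masks:
            param.mul_(masks[name])
```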
3.2 ROBUST BIRD FOR ADVERSARIAL TRAINING
Introducing Robust Bird. The primary goal of Robust Bird is to find a high-quality sparse subnetwork efficiently. As shown in Figure 2, it locates subnetworks quickly by detecting critical network structures arising in the early training, which later can be robustified with much less computation.
Specifically, for each epoch t during training, Robust Bird creates a sparsity mask mt by “masking out” the p% lowest-magnitude weights; then, Robust Bird tracks the corresponding mask dynamics. The key observation behind Robust Bird is that the sparsity mask mt does not change drastically beyond the early epochs of training (You et al., 2020) because high-level network connectivity patterns are learned during the initial stages (Achille et al., 2019). This indicates that (i) winning tickets emerge at a very early training stage, and (ii) that they can be identified efficiently.
Robust Bird exploits this observation by comparing the Hamming distance between sparsity masks found in consecutive epochs. For each epoch, the last l sparsity masks are stored. If all the stored masks are sufficiently close to each other, then the sparsity masks are not changing drastically over time and network connectivity patterns have emerged; thus, a Robust Bird ticket (RB ticket) is drawn. A detailed algorithmic implementation is provided in Algorithm 1 of Appendix A1. This is the RB ticket used in the second stage of adversarial training.
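A simplified sketch of this mask-stability test is given below, assuming global magnitude pruning over all parameters (a real implementation would typically restrict the mask to convolutional and linear weights); the queue length l and the threshold τ = 0.1 follow the description above.

```python
import torch
from collections import deque

def magnitude_mask(model, sparsity):
    """Global one-shot magnitude mask m_t: zero out the sparsity% lowest-magnitude weights."""
    scores = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    k = max(int(sparsity * scores.numel()), 1)
    threshold = torch.kthvalue(scores, k).values
    return {n: (p.detach().abs() > threshold).float() for n, p in model.named_parameters()}

def mask_distance(m_a, m_b):
    """Normalized Hamming distance between two sets of binary masks."""
    changed = sum((m_a[n] != m_b[n]).float().sum().item() for n in m_a)
    total = sum(m_a[n].numel() for n in m_a)
    return changed / total

def rb_ticket_emerged(recent_distances, tau=0.1):
    """The RB ticket is drawn once the last l consecutive masks all stay within distance tau."""
    return len(recent_distances) == recent_distances.maxlen and max(recent_distances) < tau

# Usage sketch: after each standard-training epoch,
#   m_t = magnitude_mask(model, 0.9); queue.append(mask_distance(m_t, m_prev)); m_prev = m_t
# with queue = deque(maxlen=l); stop and robustify the subnetwork once rb_ticket_emerged(queue) is True.
```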
[Figure 3 originally appears here: three loss-contour panels (Dense, Random Pruning, Flying Bird+), each plotting the training trajectory over the first two principal components of the weight space.]
Figure 3: Visualization of loss contours and training trajectories. We compare the dense network, randomly pruned sparse networks, and Flying Bird+ at 90% sparsity from ResNet-18 robustified on CIFAR-10.
Rationale of Robust Bird. Recent studies (Zhang et al., 2021c) present theoretical analyses showing that identified sparse winning tickets enlarge the convex region near good local minima, leading to improved generalization. Our related investigation in Figure A9 also shows that, compared with dense models and randomly pruned subnetworks, RB tickets found by standard training have much flatter loss landscapes, serving as a high-quality starting point for further robustification. This matters because flatness of the loss surface is often believed to indicate better standard generalization. Similarly, as advocated by Wu et al. (2020a); Hein & Andriushchenko (2017), a flatter adversarial loss landscape also effectively shrinks the robust generalization gap. This “flatness preference” of adversarial robustness has been revealed by numerous empirical defense mechanisms, including Hessian/curvature-based regularization (Moosavi-Dezfooli et al., 2019), learned weight and logit smoothening (Chen et al., 2021e), gradient magnitude penalties (Wang & Zhang, 2019), smoothening with random noise (Liu et al., 2018), and entropy regularization (Jagatap et al., 2020).
These observations form the main cornerstone of our proposal and provide possible interpretations of the surprising finding that RB tickets pruned from a non-robust model can be used to obtain well-generalizable robust models in the subsequent robustification. Furthermore, unlike previous costly flatness regularizers (Moosavi-Dezfooli et al., 2019), our methods not only offer a flatter starting point but also obtain substantial computational savings due to the reduced model size.
3.3 FLYING BIRD FOR ADVERSARIAL TRAINING
Introducing Flying Bird(+). Since sparse subnetworks from static pruning cannot reconsider removed elements, they may be too aggressive to capture the pivotal structural patterns. Thus, we introduce Flying Bird (FB) to conduct a thorough exploration of dynamic sparsity, which allows pruned parameters to be grown back and to engage in the next round of training or pruning, as demonstrated in Figure 2. Specifically, it starts from a sparse subnetwork f(x; m ⊙ θ) with a random binary mask m, and then jointly optimizes the model parameters and the sparse connectivity. In other words, the subnetwork’s topology is decided “on the fly”, dynamically, based on the current training status. We update Flying Bird’s sparse connectivity every ∆t epochs of adversarial training, using two consecutively applied operations: pruning and growing. In the pruning step, the p% of model weights with the lowest magnitudes are eliminated, while in the growth step the g% of weights with the largest gradients are added back. Note that newly added connections are not activated in the last sparse topology, and they are initialized to zero since this establishes better performance, as indicated in (Evci et al., 2020a; Liu et al., 2021b). Flying Bird keeps the sparsity ratio unchanged during the full training by setting both the pruning ratio p% and the growing ratio g% equal to k%, which decays with a cosine annealing schedule.
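A minimal per-layer sketch of one such prune-and-grow update is shown below, assuming the prune/grow counts for the layer have already been decided; the layer-wise sparsity allocation (IGQ) and the exact scheduling used in the paper are more involved.

```python
import math
import torch

def cosine_update_ratio(k0, t, t_end):
    """Cosine-annealed connectivity update ratio k%, starting from k0 and decaying towards 0."""
    return 0.5 * k0 * (1 + math.cos(math.pi * min(t, t_end) / t_end))

@torch.no_grad()
def prune_and_grow(param, grad, mask, prune_ratio, grow_ratio):
    """One Flying Bird connectivity update for a single layer's weight tensor (in place)."""
    flat_mask, flat_param = mask.view(-1), param.view(-1)
    was_active = flat_mask.clone()                    # topology before this update
    n_active = int(was_active.sum().item())
    n_prune, n_grow = int(prune_ratio * n_active), int(grow_ratio * n_active)
    if n_prune > 0:
        # Prune: drop the active weights with the smallest magnitudes.
        magnitude = flat_param.abs().clone()
        magnitude[was_active == 0] = float("inf")
        drop = torch.topk(magnitude, n_prune, largest=False).indices
        flat_mask[drop] = 0.0
    if n_grow > 0:
        # Grow: activate previously inactive connections with the largest gradient magnitudes,
        # initializing the newly grown weights to zero.
        grad_mag = grad.view(-1).abs().clone()
        grad_mag[was_active == 1] = -float("inf")
        grow = torch.topk(grad_mag, n_grow, largest=True).indices
        flat_mask[grow] = 1.0
        flat_param[grow] = 0.0
```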
We further propose Flying Bird+, an enhanced variant of FB that adaptively adjusts the sparsity and learns the right parameterization level “on demand” during training, as shown in Figure 2. To be specific, we first record the robust generalization gap and the robust validation loss at each training epoch. An increasing generalization gap in the later training stage indicates a risk of overfitting, while a plateauing validation loss implies underfitting. We then analyze the fitting status according to the upward/downward trends of those measurements. If most epochs (e.g., more than 3 out of the past 5 epochs in our case) tend to see enlarged robust generalization gaps, we raise the pruning ratio p% to further trim down the network capacity. Similarly, if the majority of epochs present unchanged validation loss, we increase the growing ratio g% to enrich the subnetwork capacity. Detailed procedures are summarized in Algorithm 2 of Appendix A1.
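The on-demand ratio adjustment can be sketched as follows; the default increments δp = 0.4% and δg = 0.05% follow Appendix A2.3, while the 0.6 threshold simply encodes the "more than 3 out of the past 5 epochs" rule and is otherwise an illustrative choice.

```python
def increasing_frequency(history):
    """Fraction of consecutive pairs in the recorded history that went up."""
    values = list(history)
    ups = sum(1 for a, b in zip(values, values[1:]) if b > a)
    return ups / max(len(values) - 1, 1)

def adjust_ratios(k, gap_history, val_loss_history, delta_p=0.004, delta_g=0.0005, threshold=0.6):
    """Flying Bird+ style on-demand adjustment of the pruning and growth ratios."""
    p, g = k, k
    if increasing_frequency(gap_history) >= threshold:
        p = (1 + delta_p) * k       # rising robust generalization gap -> overfitting -> prune more
    if increasing_frequency(val_loss_history) >= threshold:
        g = (1 + delta_g) * k       # robust validation loss stops improving -> underfitting -> grow more
    return p, g
```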
Rationale of Flying Bird(+). As demonstrated in Evci et al. (2020a), allowing new connections to grow yields improved flexibility in navigating the loss surfaces, which creates the opportunity to
escape bad local minima and search for the optimal sparse connectivity (Liu et al., 2021b). Flying Bird follows a similar design philosophy: it excludes the least important connections (Han et al., 2015a) while activating new connections with the highest potential to decrease the training loss fastest. Recent works (Wu et al., 2020c; Liu et al., 2019) have also found that enabling network (re)growth can turn a poor local minimum into a saddle point that facilitates further loss decrease. Flying Bird+ extends this flexibility further through adaptive sparsity level control.
The flatness of the loss geometry provides another view to dissect the robust generalization gain (Chen et al., 2021e; Stutz et al., 2021; Singla et al., 2021). Figure 3 compares the loss landscapes and training trajectories of dense networks, randomly pruned subnetworks, and Flying Bird+ robustified on CIFAR-10. We observe that Flying Bird+ converges to a wider loss valley with improved flatness, which usually suggests superior robust generalization (Wu et al., 2020a; Hein & Andriushchenko, 2017). Last but not least, our approaches also significantly trim down both the training memory overhead and the computational complexity, enjoying the extra bonus of efficient training and inference.
4 EXPERIMENT RESULTS
Datasets and architectures. Our experiments consider two popular architectures, ResNet-18 (He et al., 2016), VGG-16 (Simonyan & Zisserman, 2014) on three representative datasets, CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009) and Tiny-ImageNet (Deng et al., 2009). We randomly split one-tenth of the training samples as the validation dataset, and the performance is reported on the official testing dataset.
Training and evaluation details. We implement our experiments with the original PGD-based adversarial training (Madry et al., 2018b), in which we train the network against an ℓ∞ adversary with a maximum perturbation of 8/255. 10-step PGD for training and 20-step PGD for evaluation are chosen, with a step size α of 2/255, following Madry et al. (2018b); Chen et al. (2021e). In addition, we also use Auto-Attack (Croce & Hein, 2020) and the CW attack (Carlini & Wagner, 2017) for a more rigorous evaluation. More details are provided in Appendix A2. For each experiment, we train the network for 200 epochs with an SGD optimizer, whose momentum and weight decay are kept at 0.9 and 5 × 10−4, respectively. The learning rate starts from 0.1 and decays by 10 times at the 100th and 150th epochs, and the batch size is 128, following Rice et al. (2020).
For Robust Bird, the threshold τ of the mask distance is set to 0.1. In Flying Bird(+), we calculate the layer-wise sparsity by Ideal Gas Quotas (IGQ) (Vysogorets & Kempe, 2021) and then apply random pruning to initialize the sparse masks. FB updates the sparse connectivity every 2000 iterations of AT, with an update ratio k that starts from 50% and decays by cosine annealing. More details are provided in Appendix A2. Hyperparameters are either tuned by grid search or follow Liu et al. (2021b).
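For illustration, a hypothetical helper for building the random initial masks at a prescribed layer-wise sparsity (e.g., pre-computed by IGQ) might look as follows; the dictionary-based interface is an assumption, not the authors' actual code.

```python
import torch

def random_init_masks(model, layer_sparsity):
    """Random binary masks at prescribed layer-wise sparsity (the FB/FB+ starting subnetwork)."""
    masks = {}
    for name, param in model.named_parameters():
        s = layer_sparsity.get(name, 0.0)           # per-layer sparsity, e.g. from IGQ
        masks[name] = (torch.rand_like(param) >= s).float()
    return masks
```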
Evaluation metrics. In general, we care about both the accuracy and the efficiency of the obtained sparse networks. To assess accuracy, we consider both Robust Testing Accuracy (RA) and Standard Testing Accuracy (SA), computed on the perturbed and the original test sets, together with the Robust Generalization Gap (RGG), i.e., the gap in RA between the training and test sets. Meanwhile, we report the floating point operations (FLOPs) of the whole training process and of single-image inference to measure efficiency.
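A sketch of how RA, SA, and the RGG could be computed is given below; the accuracy helper reuses the pgd_attack sketch from Section 3.1 and is an illustrative implementation rather than the exact evaluation pipeline.

```python
import torch

@torch.no_grad()
def accuracy(model, loader, attack=None, device="cuda"):
    """SA when attack is None; RA when an attack function (model, x, y) -> delta is supplied."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        if attack is not None:
            with torch.enable_grad():               # the attack itself needs gradients
                x = x + attack(model, x, y)
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total

def robust_generalization_gap(model, train_loader, test_loader, attack):
    """RGG: robust accuracy on the (perturbed) training set minus robust accuracy on the test set."""
    return accuracy(model, train_loader, attack) - accuracy(model, test_loader, attack)
```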
4.1 ROBUST BIRD IS A GOOD BIRD
In this section, we evaluate the effectiveness of static sparsity from diverse representative pruning approaches, including: (i) Random Pruning (RP), which randomly eliminates model parameters to the desired sparsity; (ii) One-shot Magnitude Pruning (OMP), which globally removes a certain ratio of the lowest-magnitude weights; (iii) Pruning-at-Initialization algorithms. Three advanced methods, i.e., SNIP (Lee et al., 2019), GraSP (Wang et al., 2020) and SynFlow (Tanaka et al., 2020), are considered, which identify subnetworks at initialization with respect to certain gradient-flow criteria. (iv) Ideal Gas Quotas (IGQ) (Vysogorets & Kempe, 2021), which applies random pruning based on pre-calculated layer-wise sparsity that draws intuitive analogies from physics. (v) Robust Bird (RB), which can be regarded as an early-stopped OMP. (vi) Small Dense, an important sanity check that considers smaller dense networks with the same parameter counts as those of the sparse networks. Comprehensive results of these subnetworks at 80% and 90% sparsity are reported in Table 1, where the chosen sparsity follows routine options (Evci et al., 2020a; Liu et al., 2021b).
As shown in Table 1, we first observe the occurrence of poor robust generalization, with a 38.82% RA gap, and robust overfitting, with a 7.49% RA degradation, when training the dense network (Baseline). Fortunately, consistent with our claims, injecting appropriate sparsity effectively tackles the issue. For instance, RB greatly shrinks the RGG by 15.45%/22.20% at 80%/90% sparsity, while also mitigating robust overfitting by 2.53% ∼ 4.08%. Furthermore, comparing all static pruning methods, we find that (1) Small Dense and RP behave the worst, which suggests that the identified sparse topologies play important roles rather than the reduced network capacity alone; (2) RB shows clear advantages over OMP in terms of all measurements, especially the 78.32% ∼ 84.80% training FLOPs savings. This validates our RB proposal that a few epochs of standard training are enough to learn a high-quality sparse structure for further robustification, so there is no need to complete the full training in the ticket-finding stage as in traditional OMP. (3) The SynFlow and IGQ approaches have the best RA and SA, while RB obtains the best robust generalization among static pruning approaches.
Finally, we explore the influence of training regimes during the RB ticket finding on CIFAR-100 with ResNet-18. Table A6 demonstrates that RB tickets perform best when found with the cheapest standard training. Specifically, at 90% and 95% sparsity, SGD RB tickets outperform both Fast AT (Wong et al., 2020) and PGD-10 RB tickets with up to 1.27% higher RA and 1.86% narrower RGG. Figure A7 offers a possible explanation for this phenomenon: the SGD training scheme more quickly develops high-level network connections, during the early epochs of training (Achille et al., 2019). As a result, RB Tickets pruned from the model trained with SGD achieve superior quality.
4.2 FLYING BIRD IS A BETTER BIRD
In this section, we discuss the advantages of dynamic sparsity and show that our Flying Bird(+) is a superior bird. Table 1 examines the effectiveness of FB(+) on CIFAR-10 with ResNet-18, and several consistent observations can be drawn: (1) FB(+) achieves a 9.92% ∼ 23.66% RGG reduction and a 2.24% ∼ 5.88% decrease in robust overfitting compared with the dense network, and FB+ at 80% sparsity even pushes RA 0.60% higher. (2) Although the smaller dense network shows leading performance w.r.t. improving robust generalization, its robustness is largely sacrificed, with up to 4.29% RA degradation, suggesting that only reducing models’ parameter counts is insufficient to keep satisfactory SA/RA. (3) FB and FB+ achieve the best RA for both the best and final checkpoints across all methods, including RB. (4) Setting aside Small Dense and Random Pruning due to their poor robustness, FB+ reaches the most impressive robust generalization (rank #1 or #2) with the least training and inference costs. Precisely, FB+ obtains 84.46% ∼ 91.37% training FLOPs and 84.46% ∼ 93.36% inference FLOPs savings, i.e., Flying Bird+ is SUPER light-weight.
Superior performance across datasets and architectures. We further evaluate the performance of FB(+) across various datasets (CIFAR-10, CIFAR-100 and Tiny-ImageNet) and architectures (ResNet-18 and VGG-16). Tables 2 and 3 show that both the static and the dynamic sparsity of our proposals serve as effective remedies for improving robust generalization and mitigating robust overfitting, with 4.43% ∼ 15.45%, 14.99% ∼ 34.44% and 21.62% ∼ 23.60% RGG reductions across different architectures on CIFAR-10, CIFAR-100 and Tiny-ImageNet, respectively. Moreover, both RB and FB(+) gain significant efficiency, with up to 87.83% training and inference FLOPs savings.
Superior performance across improved attacks. Additionally, we verify both RB and FB(+) under improved attacks, i.e., Auto-Attack (Croce & Hein, 2020) and the CW attack (Carlini & Wagner, 2017). As shown in Table A8, our approaches shrink the robust generalization gap by up to 30.76% on CIFAR-10/100 and largely mitigate robust overfitting. This evidence shows that our proposals’ effectiveness is sustained across diverse attacks.
Combining FB+ with existing state-of-the-art (SOTA) mitigation. Previous works (Chen et al., 2021e; Zhang et al., 2021a; Wu et al., 2020b) point out that smoothening regularizations (e.g., KD (Hinton et al., 2015) and SWA (Izmailov et al., 2018)) help robust generalization and lead to SOTA robust accuracies. We combine them with our FB+ and collect the robust accuracy on CIFAR-10 with ResNet-18 in Figure 4. The extra robustness gains from FB+ imply that they make complementary contributions.
Excluding obfuscated gradients. A common “counterfeit” of robustness improvements is less effective adversarial examples resulting from obfuscated gradients (Athalye et al., 2018). Table A7 demonstrates that the enhanced robustness is maintained under unseen transfer attacks, which excludes the possibility of gradient masking. More details are provided in Section A3.
4.3 ABLATION STUDY AND VISUALIZATION
Different sparse initialization and update frequency. As these are two major components of the dynamic sparsity exploration (Evci et al., 2020a), we conduct thorough ablation studies in Tables 4 and 5. We find that the performance of Flying Bird+ is more sensitive to different sparse initializations; using SNIP to produce the initial layer-wise sparsity and updating the connections every 2000 iterations serves as the superior configuration for FB+.
Table 4: Ablation of different sparse initialization in Flying Bird+. Subnetworks at 80% initial sparsity are chosen on CIFAR-10 with ResNet-18.
Table 5: Ablation of different update frequency in Flying Bird+. Subnetworks at 80% initial sparsity are chosen on CIFAR-10 with ResNet-18.
Final checkpoint loss landscapes. From visualizations in Figure 5, FB and FB+ converge to much flatter loss valleys, which evidences their effectiveness in closing robust generalization gaps.
Attention and saliency maps. To visually inspect the benefits of our proposal, we provide attention and saliency maps generated by Grad-CAM (Selvaraju et al., 2017) and the tools in (Smilkov et al., 2017). Comparing the dense model to our “talented birds” (e.g., FB+), Figure 6 shows that our approaches have enhanced concentration on the main objects and are capable of capturing more local feature information, aligning better with human perception.
[Figure 6 originally appears here: for several adversarial images, rows show the adversarial samples together with the maps produced by the Dense model, Random Pruning, SNIP, Flying Bird, Robust Bird, and Flying Bird+, with attention heatmaps on the left and saliency maps on the right.]
Figure 6: (Left) Visualization of attention heatmaps on adversarial images based on Grad-CAM (Selvaraju et al., 2017). (Right) Saliency map visualization on adversarial samples (Smilkov et al., 2017).
5 CONCLUSION
We show that adversarial training of dense DNNs incurs a severe robust generalization gap, which can be effectively and efficiently resolved by injecting appropriate sparsity. Our proposed Robust Bird and Flying Bird(+), with static and dynamic sparsity respectively, significantly mitigate the robust generalization gap while retaining competitive standard/robust accuracy, in addition to substantially reducing computation. In future work, we plan to investigate channel- and block-wise sparse structures.
A1 MORE TECHNIQUE DETAILS
Algorithms of Robust Bird and Flying Bird(+). Here we present the detailed procedures to identify the Robust Bird and Flying Bird(+), as summarized in Algorithms 1 and 2. Note that for the increasing frequency on Lines 10 and 11 of Algorithm 2, we compare the measurements stored in the queue between two consecutive epochs and calculate the frequency of increases.
Algorithm 1: Finding a Robust Bird
Input: f(x; θ_0) with initialization θ_0, target sparsity s%, FIFO queue Q with length l, threshold τ
Output: Robust Bird f(x; m_{t*} ⊙ θ_T)
1: while t < t_max do
2:   Update network parameters θ_t ← θ_{t−1} via standard training
3:   Apply static pruning towards target sparsity s% and obtain the sparse mask m_t
4:   Calculate the Hamming distance δ_H(m_t, m_{t−1}) and append the result to Q
5:   t ← t + 1
6:   if max(Q) < τ then
7:     t* ← t
8:     Rewind f(x; m_{t*} ⊙ θ_{t*}) → f(x; m_{t*} ⊙ θ_0)
9:     Train f(x; m_{t*} ⊙ θ_0) via PGD-AT for T epochs
10:    return f(x; m_{t*} ⊙ θ_T)
11:  end
12: end
Algorithm 2: Finding a Flying Bird(+)
Input: initialization parameters θ_0, sparse mask m of sparsity s%, FIFO queues Q_p and Q_g with length l, pruning and growth increasing ratios δ_p and δ_g, update threshold ε, update interval ∆t, parameter update ratio k%, ratio update starting point t_start
Output: Flying Bird(+) f(x; m ⊙ θ_T)
1: while t < T do
2:   Update network parameters θ_t ← θ_{t−1} via PGD-AT
3:   # Record training statistics
4:   Add the robust generalization gap between the training and validation sets to Q_p
5:   Add the robust validation loss to Q_g
6:   # Update the sparse mask m
7:   if (t mod ∆t) == 0 then
8:     |--- Optional for Flying Bird+ ---|
9:     # Update pruning and growth ratios p%, g%
10:    if t > t_start and increasing frequency of Q_p ≥ ε: p = (1 + δ_p) × k else p = k
11:    if t > t_start and increasing frequency of Q_g ≥ ε: g = (1 + δ_g) × k else g = k
12:    |--- Optional for Flying Bird+ ---|
13:    Prune p% of parameters with the smallest weight magnitudes
14:    Grow g% of parameters with the largest gradients
15:    Update the sparse mask m accordingly
16:  end
17: end
A2 MORE IMPLEMENTATION DETAILS
A2.1 OTHER COMMON DETAILS
We select two checkpoints during training: best, which has the best RA values on the validation set, and final, i.e., the last checkpoint. And we report both RA and SA of these two checkpoints on test sets. Apart from the robust generalization gap, we also show the extent of robust overfitting numerically by the difference of RA between best and final. Furthermore, we calculate the FLOPs
at both the training and inference stages to evaluate the costs of obtaining and exploiting the subnetworks, respectively; we approximate the FLOPs of back-propagation as twice those of forward propagation (Yang et al., 2020).
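This accounting convention can be written as a one-line helper; it only encodes the stated backward ≈ 2× forward approximation, and the pass counts themselves depend on the attack steps and sparsity levels of the particular run.

```python
def total_pass_flops(forward_flops, n_forward_passes, n_backward_passes):
    """FLOPs accounting: a backward pass is approximated as twice the cost of a forward pass."""
    return forward_flops * (n_forward_passes + 2 * n_backward_passes)
```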
A2.2 MORE DETAILS ABOUT ROBUST BIRD
For the experiments on RB ticket finding, we comprehensively study three training regimes: standard training with stochastic gradient descent (SGD), adversarial training with PGD-10 AT (Madry et al., 2018b), and Fast AT (Wong et al., 2020). Following Pang et al. (2021), we train the network with an SGD optimizer with 0.9 momentum and 5 × 10−4 weight decay. We use a batch size of 128. For the PGD-10 AT experiments, we adopt the ℓ∞ PGD attack with a maximum perturbation ε = 8/255 and a step size α = 2/255. The learning rate starts from 0.1 and then decays by ten times at the 50th and 150th epochs. As for Fast AT, we use a cyclic schedule with a maximum learning rate equal to 0.2.
A2.3 MORE DETAILS ABOUT FLYING BIRD(+)
For the experiments with Flying Bird+, the increasing ratios of pruning and growth, δp and δg, are set to 0.4% and 0.05% by default, respectively.
A3 MORE EXPERIMENT RESULTS
A3.1 MORE RESULTS ABOUT ROBUST BIRD
Accuracy during RB Ticket Finding. Figure A7 shows the curve of standard test accuracy during the training phase of RB ticket finding. We can observe that the SGD training scheme develops high-level network connections much faster than the others, which provides a possible explanation for the superior quality of RB tickets from SGD.
[Figure A7 originally appears here: standard accuracy (%) versus training epoch during the RB ticket finding phase, with one curve each for PGD-10, SGD, and Fast AT.]
Figure A7: Standard accuracy (SA) of PGD-10, SGD, and Fast AT during the RB ticket finding phase.
Mask Similarity Visualization. Figure A8 visualizes the dynamic similarity scores for each epoch among masks found via SGD, Fast AT, and PGD-10. Specifically, the similarity scores (You et al., 2020) reflect the Hamming distance between a pair of masks. We notice that masks found by SGD and PGD-10 share more common structures. A possible reason is that Fast AT usually adopts a cyclic learning rate schedule, while SGD and PGD use a multi-step decay schedule.
Different training regimes for finding RB tickets. We denote the subnetworks identified by standard training with SGD, adversarial training with Fast AT (Wong et al., 2020), and adversarial training with PGD-10 AT as SGD tickets, Fast AT tickets, and PGD-10 tickets, respectively.
[Figure A8 originally appears here: three pairwise mask-similarity matrices over training epochs, comparing masks from Fast AT vs. PGD-10, Fast AT vs. SGD, and PGD-10 vs. SGD, with similarity scores roughly in the 0.50–0.75 range.]
Figure A8: Similarity scores by epoch among masks found via Fast AT, SGD, and PGD-10. A brighter color denotes higher similarity.
Table A6: Comparison results of different training regimes for RB ticket finding on CIFAR-100 with ResNet-18. The subnetworks at 90% and 95% sparsity are selected here.

Sparsity (%) | Settings | Robust Accuracy (Best / Final / Diff.) | Standard Accuracy (Best / Final / Diff.) | Robust Generalization Gap
0 | Baseline | 26.93 / 19.62 / 7.31 | 52.03 / 53.91 / −1.88 | 54.56
90 | SGD tickets | 25.83 / 23.40 / 2.43 | 49.35 / 53.51 / −4.16 | 18.37 (↓36.19)
90 | Fast AT tickets | 25.15 / 22.88 / 2.27 | 51.00 / 51.75 / −0.75 | 20.23 (↓34.33)
90 | PGD-10 tickets | 25.34 / 22.96 / 2.38 | 52.01 / 53.27 / −1.26 | 20.03 (↓34.53)
95 | SGD tickets | 24.77 / 24.12 / 0.65 | 49.88 / 50.89 / −1.01 | 9.18 (↓45.38)
95 | Fast AT tickets | 23.50 / 22.46 / 1.04 | 41.67 / 43.19 / −1.52 | 9.53 (↓45.03)
95 | PGD-10 tickets | 24.44 / 23.77 / 0.67 | 49.30 / 50.65 / −1.35 | 9.86 (↓44.70)
Table A6 demonstrates that the SGD tickets deliver the best performance.
Loss Landscape Visualization. We visualize the loss landscapes of the dense network, the randomly pruned subnetwork, and the Robust Bird tickets at 30% sparsity in Figure A9. Compared with the dense model and the randomly pruned subnetwork, the RB tickets found by standard training show much flatter loss landscapes, providing a high-quality starting point for further robustification.
A3.2 MORE RESULTS ABOUT FLYING BIRD(+)
Excluding Obfuscated Gradients. To exclude this possibility of gradient masking, we show that our methods maintain improved robustness under unseen transfer attacks. As shown in Table A7, the left part represents the testing accuracy of perturbed test samples from an unseen robust model, and the right part shows the transfer testing performance on an unseen robust model (here we use a separately robustified ResNet-50 with PGD-10 on CIFAR-100).
Performance under Improved Attacks. We report the performance of both RB and FB(+) under Auto-Attack (Croce & Hein, 2020) and the CW attack (Carlini & Wagner, 2017). For Auto-Attack, we keep the default setting with ε = 8/255. For the CW attack, we perform one search step on c with an initial constant of 0.1, using 100 iterations for each search step with a learning rate of 0.01. As shown in Table A8, both RB and FB(+) outperform their dense counterpart in terms of robust generalization, and FB+ achieves the best performance.
More Datasets and Architectures. We report more results of different sparsification methods across diverse datasets and architectures in Tables A9, A10, A11 and A12, from which we observe that our approaches are capable of improving robust generalization and mitigating robust overfitting.
Figure A9: Loss landscapes visualizations (Engstrom et al., 2018; Chen et al., 2021e) of the dense model (unpruned), random pruned subnetwork at 30% sparsity, and Robust Bird (RB) tickets at 30% sparsity found by the standard training. The ResNet-18 backbone with the same original initialization on CIFAR-10 is adopted here. Results demonstrate that RB tickets offer a smoother and flatter starting point for further robustification in the second stage.
Table A7: Transfer attack performance from/on an unseen non-robust model, where the attacks are generated by/applied to the non-robust model. The robust generalization gap is also calculated based on transfer attack accuracies between train and test sets. We use ResNet-18 on CIFAR-10/100 and sub-networks at 80% sparsity.
Dataset | Settings | Transfer Attack from Unseen Model (Best / Final / Diff. / Robust Gen.) | Transfer Attack on Unseen Model (Best / Final / Diff. / Robust Gen.)
CIFAR-10 | Baseline | 79.68 / 82.03 / −2.35 / 16.43 | 70.48 / 79.85 / −9.37 / 11.84
CIFAR-10 | Robust Bird | 77.33 / 81.04 / −3.71 / 12.18 | 73.17 / 77.03 / −3.86 / 11.49
CIFAR-10 | Flying Bird | 79.13 / 82.17 / −3.04 / 13.49 | 71.59 / 77.19 / −5.60 / 11.88
CIFAR-10 | Flying Bird+ | 79.47 / 81.90 / −2.43 / 11.85 | 70.43 / 76.00 / −5.57 / 11.42
CIFAR-100 | Baseline | 50.51 / 52.15 / −1.64 / 45.91 | 48.67 / 54.48 / −5.81 / 36.98
CIFAR-100 | Robust Bird | 47.25 / 51.74 / −4.49 / 28.80 | 47.47 / 50.90 / −3.43 / 35.82
CIFAR-100 | Flying Bird | 51.80 / 53.52 / −1.72 / 31.98 | 45.56 / 50.61 / −5.05 / 35.39
CIFAR-100 | Flying Bird+ | 50.72 / 53.56 / −2.84 / 25.09 | 47.04 / 49.43 / −2.39 / 35.09
Distributions of Adopted Sparse Initialization. We report the layer-wise sparsity of the different initial sparse masks. As shown in Figure A10, we observe that subnetworks generally perform better when the top layers retain most of their parameters.
Training Curve of Flying Bird+. Figure A11 shows the training curve of Flying Bird+, in which the red dotted lines mark the times at which the pruning ratio is increased and the green dotted lines the times at which the growth ratio is increased. The detailed training curve demonstrates the flexibility of Flying Bird+ in dynamically adjusting the sparsity level.
A4 EXTRA RESULTS AND DISCUSSION
We sincerely appreciate all anonymous reviewers’ and area chairs’ constructive discussions for improving this paper. Extra results and discussions are presented in this section.
Table A8: Evaluation under improved attacks (i.e., Auto-Attack and CW-Attack) on CIFAR-10/100 with ResNet-18 at 80% sparsity. The robust generalization gap is computed under improved attacks.
Dataset | Settings | Auto-Attack (Best / Final / Diff. / Robust Gen.) | CW-Attack (Best / Final / Diff. / Robust Gen.)
CIFAR-10 | Baseline | 47.41 / 41.59 / 5.82 / 35.30 | 75.76 / 66.13 / 9.63 / 30.39
CIFAR-10 | Robust Bird | 45.90 / 42.45 / 3.45 / 21.58 (↓13.72) | 73.95 / 73.52 / 0.43 / 17.67 (↓12.72)
CIFAR-10 | Flying Bird | 47.55 / 43.57 / 3.98 / 26.55 (↓8.75) | 75.30 / 72.08 / 3.22 / 21.77 (↓8.62)
CIFAR-10 | Flying Bird+ | 47.06 / 44.09 / 3.17 / 21.73 (↓13.57) | 76.00 / 73.83 / 2.17 / 17.77 (↓12.62)
CIFAR-100 | Baseline | 23.16 / 17.68 / 5.48 / 49.73 | 45.83 / 36.21 / 9.62 / 57.52
CIFAR-100 | Robust Bird | 21.29 / 18.00 / 3.29 / 21.72 (↓28.01) | 43.30 / 42.39 / 0.91 / 30.82 (↓26.70)
CIFAR-100 | Flying Bird | 22.74 / 19.44 / 3.30 / 25.18 (↓24.55) | 46.23 / 42.36 / 3.87 / 35.50 (↓22.02)
CIFAR-100 | Flying Bird+ | 22.90 / 20.31 / 2.59 / 19.05 (↓30.68) | 45.86 / 43.90 / 1.96 / 26.76 (↓30.76)
Table A9: More results of different sparsification methods on CIFAR-10 with ResNet-18.

Sparsity (%) | Settings | Robust Accuracy (Best / Final / Diff.) | Standard Accuracy (Best / Final / Diff.) | Robust Generalization Gap
0 | Baseline | 51.10 / 43.61 / 7.49 | 81.15 / 83.38 / −2.23 | 38.82
95 | Small Dense | 45.99 / 44.55 / 1.44 | 74.26 / 75.64 / −1.38 | 7.87 (↓30.95)
95 | Random Pruning | 45.64 / 44.18 / 1.46 | 75.20 / 75.20 / 0.00 | 7.96 (↓30.86)
95 | OMP | 47.08 / 46.23 / 0.85 | 78.77 / 79.36 / −0.59 | 12.01 (↓26.81)
95 | SNIP | 48.18 / 46.72 / 1.46 | 78.55 / 79.21 / −0.66 | 9.58 (↓29.24)
95 | GraSP | 48.58 / 47.15 / 1.43 | 78.95 / 79.44 / −0.49 | 10.37 (↓28.45)
95 | SynFlow | 48.93 / 48.22 / 0.71 | 78.70 / 78.90 / −0.20 | 8.25 (↓30.57)
95 | IGQ | 48.82 / 47.56 / 1.26 | 79.44 / 79.76 / −0.32 | 9.33 (↓29.49)
95 | Robust Bird | 47.53 / 46.48 / 1.05 | 78.33 / 78.78 / −0.45 | 9.20 (↓29.62)
95 | Flying Bird | 49.62 / 48.46 / 1.16 | 78.12 / 81.43 / −3.31 | 13.32 (↓25.52)
95 | Flying Bird+ | 49.37 / 48.84 / 0.53 | 80.33 / 80.28 / 0.05 | 9.27 (↓29.55)
[Figure A10 originally appears here: a bar chart of per-layer sparsity (y-axis: Sparsity; x-axis: Layer Name, from conv1 through the final linear layer of ResNet-18) for the Uniform, GraSP, SNIP, SynFlow, IGQ, and ERK initializations.]
Figure A10: Layer-wise sparsity of different initial sparse masks with ResNet-18.
A4.1 MORE RESULTS OF DIFFERENT SPARSITY
We report more results for subnetworks with 40%/60% sparsity on CIFAR-10/100 with ResNet-18 and VGG-16. As shown in Tables A13, A14, A15 and A16, our Flying Bird(+) achieves consistent improvements over the unpruned baseline networks, with 2.45% ∼ 19.81% narrower robust generalization gaps and comparable RA and SA performance.
A4.2 MORE RESULTS ON WIDERESNET
We further evaluate our flying bird(+) with WideResNet-34-10 on CIFAR-10 and report the results on Table A17. We can observe that compared with the dense network, our methods significantly shrink the robust generalization gap by up to 13.14% and maintain comparable RA/SA performance.
Table A10: More results of different sparsification methods on CIFAR-10 with VGG-16.

Sparsity (%) | Settings | Robust Accuracy (Best / Final / Diff.) | Standard Accuracy (Best / Final / Diff.) | Robust Generalization Gap
0 | Baseline | 48.33 / 42.73 / 5.60 | 76.84 / 79.73 / −2.89 | 28.00
80 | Random Pruning | 46.14 / 40.33 / 5.81 | 74.42 / 76.68 / −2.26 | 21.01 (↓6.99)
80 | OMP | 47.90 / 43.19 / 4.71 | 76.60 / 80.02 / −3.42 | 24.97 (↓3.03)
80 | SNIP | 48.03 / 43.17 / 4.86 | 76.68 / 80.08 / −3.40 | 24.71 (↓3.29)
80 | GraSP | 47.91 / 42.34 / 5.57 | 75.74 / 78.87 / −3.13 | 23.65 (↓4.35)
80 | SynFlow | 48.47 / 45.32 / 3.15 | 77.62 / 79.09 / −1.47 | 20.17 (↓7.83)
80 | IGQ | 48.57 / 44.25 / 4.32 | 77.51 / 80.01 / −2.50 | 22.79 (↓5.21)
80 | Robust Bird | 47.69 / 41.66 / 6.03 | 75.32 / 78.58 / −3.26 | 23.57 (↓4.43)
80 | Flying Bird | 48.43 / 44.65 / 3.78 | 77.53 / 79.72 / −2.19 | 21.01 (↓6.99)
80 | Flying Bird+ | 48.25 / 45.24 / 3.01 | 77.48 / 79.55 / −2.07 | 17.75 (↓10.25)
90 | Random Pruning | 44.33 / 40.33 / 4.00 | 71.27 / 74.46 / −3.19 | 15.48 (↓12.52)
90 | OMP | 47.84 / 43.34 / 4.50 | 75.60 / 79.10 / −3.50 | 18.29 (↓9.71)
90 | SNIP | 47.76 / 44.27 / 3.49 | 75.92 / 79.62 / −3.70 | 17.85 (↓10.15)
90 | GraSP | 45.96 / 42.12 / 3.84 | 75.19 / 77.03 / −1.84 | 15.04 (↓12.96)
90 | SynFlow | 47.54 / 45.79 / 1.75 | 78.43 / 78.70 / −0.27 | 14.40 (↓13.60)
90 | IGQ | 47.79 / 45.12 / 2.67 | 74.87 / 79.19 / −4.32 | 16.06 (↓11.94)
90 | Robust Bird | 47.09 / 44.13 / 2.96 | 75.53 / 78.36 / −2.83 | 16.57 (↓11.43)
90 | Flying Bird | 48.45 / 45.55 / 2.90 | 75.82 / 79.21 / −3.39 | 16.56 (↓11.44)
90 | Flying Bird+ | 48.39 / 46.26 / 2.13 | 78.73 / 79.12 / −0.39 | 12.47 (↓15.53)
A4.3 COMPARISON WITH EFFICIENT ADVERSARIAL TRAINING METHODS
To elaborate more about training efficiency, we compare our methods with two efficient training methods. Shafahi et al. (2019) proposed Free Adversarial Training that improves training efficiency by reusing the gradient information, which is orthogonal to our approaches and can be easily combined with our methods to pursue more efficiency by replacing the PGD-10 training with Free AT.
Additionally, Li et al. (2020) use magnitude pruning to locate sparse structures, which is similar to the OMP reported in Table 1, except that they use a smaller learning rate. Our methods achieve better performance and efficiency than OMP. Specifically, at 80% sparsity, our Flying Bird+ reaches a 4.49% narrower robust generalization gap and 1.54% higher RA, yet requires 87.58% fewer training FLOPs. Also, our methods can be easily combined with Fast AT for further training efficiency.
A4.4 COMPARISON WITH OTHER PRUNING AND SPARSE TRAINING METHODS
Compared with the recent work of Özdenizci & Legenstein (2021), our Flying Bird(+) differs in both its goal and its methodology. Firstly, Özdenizci & Legenstein (2021) pursue superior adversarial robust testing accuracy for sparsely connected networks, whereas we aim to investigate the relationship between sparsity and robust generalization, and demonstrate that introducing appropriate sparsity (e.g., LTH-based static sparsity or dynamic sparsity) into adversarial training
substantially alleviates the robust generalization gap and maintains comparable or even better standard/robust accuracies. Secondly, Özdenizci & Legenstein (2021) samples network connectivity from a learned posterior to form a sparse subnetwork. However, our flying bird first removes the parameters with the lowest magnitude, which ensures a small term of the first-order Taylor approximation of the loss and thus limits the impact on the output of networks (Evci et al., 2020a). And then, it allows new connectivity with the largest gradient to grow to reduce the loss quickly (Evci et al., 2020a). Furthermore, we propose an enhanced variant of Flying Bird, i.e., Flying Bird+, which not only learns the sparse topologies but also is capable of adaptively adjusting the network capacity to determine the right parameterization level “on-demand” during training, while Özdenizci & Legenstein (2021) stick to a fixed parameter budget.
Another work, HYDRA (Sehwag et al., 2020) also has several differences from our robust birds. Specifically, HYDRA starts from a robust pre-trained dense network, which requires at least hundreds of epochs for adversarial training. However, our robust bird’s pre-training only needs a few epochs of standard training. Therefore, Sehwag et al. (2020) has significantly higher computational costs, compared to ours. Then, Sehwag et al. (2020) adopt TRADES (Zhang et al., 2019) for adversarial training, which also requires auxiliary inputs of clean images, while our methods follow the classical adversarial training (Madry et al., 2018b) and only take adversarial perturbed samples as input. Moreover, for CIFAR-10 experiments, Sehwag et al. (2020) uses 500k additional pseudolabeled images from the Tiny-ImageNet dataset with a robust semi-supervised training approach. However, all our methods and experiments do not leverage any external data.
Furthermore, one concurrent work (Fu et al., 2021) demonstrates that there exist subnetworks with inborn robustness. Such randomly initialized networks have matching or even superior robust accuracy of adversarially trained networks with similar parameter counts. It’s interesting to utilize this finding for further improvement of robust generalization, and we will investigate it in future works.
| 1. What is the main contribution of the paper regarding training a neural network?
2. What are the strengths and weaknesses of the proposed methods RobustBird and Flying Bird?
3. How does the reviewer assess the novelty of the findings in the paper compared to prior works like Han 2015?
4. What are the concerns regarding the experimental section and the comparison with reference strategies?
5. How does the reviewer suggest improving the experiments to make the results more meaningful? | Summary Of The Paper
Review | Summary Of The Paper
This paper deals with the problem of training a neural network so that it generalizes well over data unseen at training time. Namely, it addresses the particular case where a network is trained under an adversarial scheme. The paper proposes two methods for learning a sparse architecture, called Robust Bird and Flying Bird. These methods aim at identifying sparse subnetworks arising during the early training stages, so as to get a pruning mask that eventually yields a sparse architecture (Robust Bird). Flying Bird improves over Robust Bird in the sense that the learned mask can be dynamically adjusted over time, i.e., pruned params may be recovered later on. The authors then experiment with training multiple architectures over different datasets in the experimental section, showing better generalization ability and lower computational complexity (MACs) for their proposed methods Robust and Flying Bird. The authors conclude that sparsity helps networks generalize better and, as a byproduct, slashes computational complexity.
Review
It is well known that sparsity helps generalization: for example, Han 2015 already shows in Fig. 5 that pruning (simple L2 regularization + thresholding) helps the network generalize over unseen data. Similarly, the same article shows that refining the surviving parameters after pruning yields better performance, and in my personal experience it is also true that allowing parameters to enter and exit the pruning pool, i.e., allowing the pruning mask to evolve, improves performance in multiple respects. So, in general, those "findings" claimed by the article are not that novel with respect to the existing literature. Concerning the experiments, the authors compare with a number of different reference strategies for pruning ratios of 80% and 90%. While I do not question the tradeoff between sparsity and performance, the performance corresponding to the sparsity ratios selected by the authors is so low and so far below typical useful accuracy numbers for unpruned architectures that I cannot avoid questioning the meaningfulness of the reported results. In other words, it is inconclusive for the reader to see that the proposed method performs x% better than the closest reference when you are in the 40/50% accuracy range for CIFAR-10. I would suggest the authors compare at a sparsity for which the performance is closer to the typical values for unpruned architectures. In this context, it is not clear how a reference strategy based on simple L2 regularization without pruning would perform in terms of generalization ability, and that would be an interesting extra reference to consider.
[Han 2015] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pp. 1135–1143, 2015b |
ICLR | Title
Sparsity Winning Twice: Better Robust Generalization from More Efficient Training
Abstract
Recent studies demonstrate that deep networks, even robustified by the state-ofthe-art adversarial training (AT), still suffer from large robust generalization gaps, in addition to the much more expensive training costs than standard training. In this paper, we investigate this intriguing problem from a new perspective, i.e., injecting appropriate forms of sparsity during adversarial training. We introduce two alternatives for sparse adversarial training: (i) static sparsity, by leveraging recent results from the lottery ticket hypothesis to identify critical sparse subnetworks arising from the early training; (ii) dynamic sparsity, by allowing the sparse subnetwork to adaptively adjust its connectivity pattern (while sticking to the same sparsity ratio) throughout training. We find both static and dynamic sparse methods to yield win-win: substantially shrinking the robust generalization gap and alleviating the robust overfitting, meanwhile significantly saving training and inference FLOPs. Extensive experiments validate our proposals with multiple network architectures on diverse datasets, including CIFAR-10/100 and TinyImageNet. For example, our methods reduce robust generalization gap and overfitting by 34.44% and 4.02%, with comparable robust/standard accuracy boosts and 87.83%/87.82% training/inference FLOPs savings on CIFAR-100 with ResNet18. Besides, our approaches can be organically combined with existing regularizers, establishing new state-of-the-art results in AT. Codes are available in https: //github.com/VITA-Group/Sparsity-Win-Robust-Generalization.
N/A
1 INTRODUCTION
Deep neural networks (DNNs) are notoriously vulnerable to maliciously crafted adversarial attacks. To conquer this fragility, numerous adversarial defense mechanisms are proposed to establish robust neural networks (Schmidt et al., 2018; Sun et al., 2019; Nakkiran, 2019; Raghunathan et al., 2019; Hu et al., 2019; Chen et al., 2020c; 2021e; Jiang et al., 2020). Among them, adversarial training (AT) based methods (Madry et al., 2017; Zhang et al., 2019) have maintained the state-of-the-art robustness. However, the AT training process usually comes with order-ofmagnitude higher computational costs than standard training, since multiple attack iterations are needed to construct strong adversarial examples (Madry et al., 2018b). Moreover, AT was recently revealed to incur severe robust generalization gaps (Rice et al., 2020), between its training and testing accuracies, as shown in Figure 1; and to require significantly more training samples (Schmidt et al., 2018) to generalize robustly.
*Equal Contribution.
1
In response to those challenges, Schmidt et al. (2018); Lee et al. (2020); Song et al. (2019) investigate the possibility of improving generalization by leveraging advanced data augmentation techniques, which further amplifies the training cost of AT. Recent studies (Rice et al., 2020; Chen et al., 2021e) found that early stopping, or several smoothness/flatness-aware regularizations (Chen et al., 2021e; Stutz et al., 2021; Singla et al., 2021), can bring effective mitigation.
In this paper, a new perspective has been explored to tackle the above challenges by enforcing appropriate sparsity patterns during AT. The connection between robust generalization and sparsity is mainly inspired by two facts. On one hand, sparsity can effectively regularize the learning of over-parameterized neural networks, hence potentially benefiting both standard and robust generalization (Balda et al., 2019). As demonstrated in Figure 1, with the increase of sparsity levels, the robust generalization gap is indeed substantially shrunk while the robust overfitting is alleviated. On the other hand, one key design philosophy that facilitates this consideration is the lottery ticket hypothesis (LTH) (Frankle & Carbin, 2019). The LTH advocates the existence of highly sparse and separately trainable subnetworks (a.k.a. winning tickets), which can be trained from the original initialization to match or even surpass the corresponding dense networks’ test accuracies. These facts point out a promising direction that utilizing proper sparsity is capable of boosting robust generalization while maintaining competitive standard and robust accuracy.
Although sparsity is beneficial, the current methods (Frankle & Carbin, 2019; Frankle et al., 2020; Renda et al., 2020) often empirically locate sparse critical subnetworks by Iterative Magnitude Pruning (IMP). It demands excessive computational cost even for standard training due to the iterative train-prune-retrain process. Recently, You et al. (2020) demonstrated that these intriguing subnetworks can be identified at the very early training stage using one-shot pruning, which they term as Early Bird (EB) tickets. We show the phenomenon also exists in the adversarial training scheme. More importantly, we take one leap further to reveal that even in adversarial training, EB tickets can be drawn from a cheap standard training stage, while still achieving solid robustness. In other words, the Early Bird is also a Robust Bird that yields an attractive win-win of efficiency and robustness - we name this finding as Robust Bird (RB) tickets.
Furthermore, we investigate the role of sparsity in a scene where the sparse connections of subnetworks change on the fly. Specifically, we initialize a subnetwork with random sparse connectivity and then optimize its weights and sparse typologies simultaneously, while sticking to the fixed small parameter budget. This training pipeline, called as Flying Bird (FB), is motivated by the latest sparse training approaches (Evci et al., 2020b) to further reduce robust generalization gap in AT, while ensuring low training costs. Moreover, an enhanced algorithm, i.e., Flying Bird+, is proposed to dynamically adjust the network capacity (or sparsity) to pursue superior robust generalization, at few extra prices of training efficiency. Our contributions can be summarized as follows:
• We perform a thorough investigation to reveal that introducing appropriate sparsity into AT is an appealing win-win, specifically: (1) substantially alleviating the robust generalization gap; (2) maintaining comparable or even better standard/robust accuracies; and (3) enhancing the AT efficiency by training only compact subnetworks.
• We explore two alternatives for sparse adversarial training: (i) the Robust Bird (RB) training that leverages static sparsity, by mining the critical sparse subnetwork at the early training stage, and using only the cheapest standard training; (ii) the Flying Bird (FB) training that allows for dynamic sparsity, which jointly optimizes both network weights and their sparse connectivity during AT, while sticking to the same sparsity level. We also discuss a FB variant called Flying Bird+ that adaptively adjusts the sparsity level on demand during AT.
• Extensive experiments are conducted on CIFAR-10, CIFAR-100, and Tiny-ImageNet with diverse network architectures. Specifically, our proposals obtain 80.16% ∼ 87.83% training FLOPs and 80.16% ∼ 87.83% inference FLOPs savings, shrink robust generalization from 28.00% ∼ 63.18% to 4.43% ∼ 34.44%, and boost the robust accuracy by up to 0.60% and the standard accuracy by up to 0.90%, across multiple datasets and architectures. Meanwhile, combining our sparse adversarial training frameworks with existing regularizations establishes the new state-of-the-art results.
2 RELATED WORK
Adversarial training and robust generalization/overfitting. Deep neural networks present vulnerability to imperceivable adversarial perturbations. To deal with this drawback, numerous defense
approaches have been proposed (Goodfellow et al., 2015; Kurakin et al., 2016; Madry et al., 2018a). Although many methods (Liao et al., 2018; Guo et al., 2018a; Xu et al., 2017; Dziugaite et al., 2016; Dhillon et al., 2018a; Xie et al., 2018; Jiang et al., 2020) were later found to result from obfuscated gradients (Athalye et al., 2018), adversarial training (AT) (Madry et al., 2018a), together with some of its variants (Zhang et al., 2019; Mosbach et al., 2018; Dong et al., 2018), remains as one of the most effective yet costly approaches.
A pitfall of AT, i.e., the poor robust generalization, was spotted recently. Schmidt et al. (2018) showed that AT intrinsically demands a larger sample complexity to identify well-generalizable robust solutions. Therefore, data augmentation (Lee et al., 2020; Song et al., 2019) is an effective remedy. Stutz et al. (2021); Singla et al. (2021) related robust generalization gap to curvature/flatness of loss landscapes. They introduced weight perturbing approaches and smooth activation functions to reshape the loss geometry and boost robust generalization ability. Meanwhile, the robust overfitting (Rice et al., 2020) in AT usually happens with or as a result of inferior generalization. Previous studies (Rice et al., 2020; Chen et al., 2021e) demonstrated that conventional regularization-based methods (e.g., weight decay and simple data augmentation) can not alleviate robust overfitting. Then, numerous advanced algorithms (Zhang et al., 2020; 2021b; Zhou et al., 2021; Bunk et al., 2021; Chen et al., 2021a; Dong et al., 2021; Zi et al., 2021; Tack et al., 2021; Zhang et al., 2021a) arose in the last half year to tackle the overfitting, using data manipulation, smoothened training, and else. Those methods work orthogonally to our proposal as evidenced in Section 4.
Another group of related literature lies in the field of sparse robust networks (Guo et al., 2018b). These works either treat model compression as a defense mechanism (Wang et al., 2018; Gao et al., 2017; Dhillon et al., 2018b) or pursue robust and efficient sub-models that can be deployed in resource-limited platforms (Gui et al., 2019; Ye et al., 2019; Sehwag et al., 2019). Compared to those inference-focused methods, our goal is fundamentally different: injecting sparsity during training to reduce the robust generalization gap while improving training efficiency.
Static pruning and dynamic sparse training. Pruning (LeCun et al., 1990; Han et al., 2015a) serves as a powerful technique to eliminate the weight redundancy in over-parameterized DNNs, aiming at storage and computational savings with almost undamaged performance. It can be roughly divided into two categories based on how the sparse patterns are generated. (i) Static pruning removes parameters (Han et al., 2015a; LeCun et al., 1990; Han et al., 2015b) or substructures (Liu et al., 2017; Zhou et al., 2016; He et al., 2017) based on optimized importance scores (Zhang et al., 2018; He et al., 2017) or heuristics such as weight magnitude (Han et al., 2015a), gradient (Molchanov et al., 2019), or Hessian (LeCun et al., 1990) statistics. The discarded elements usually do not participate in the next round of training or pruning. Static pruning can be flexibly applied prior to training, as in SNIP (Lee et al., 2019), GraSP (Wang et al., 2020) and SynFlow (Tanaka et al., 2020); during training (Zhang et al., 2018; He et al., 2017); or after training (Han et al., 2015a), for different trade-offs between training cost and pruned models’ quality. (ii) Dynamic sparse training updates model parameters and sparse connectivities at the same time, starting from a randomly sparsified subnetwork (Molchanov et al., 2017). During training, the removed elements have chances to be grown back if they potentially benefit predictions. Among the large family of sparse training methods (Mocanu et al., 2016; Evci et al., 2019; Mostafa & Wang, 2019; Liu et al., 2021a; Dettmers & Zettlemoyer, 2019; Jayakumar et al., 2021; Raihan & Aamodt, 2020), the recent methods of Evci et al. (2020a); Liu et al. (2021b) lead to state-of-the-art performance.
A special case of static pruning, the lottery ticket hypothesis (LTH) (Frankle & Carbin, 2019), demonstrates the existence of sparse subnetworks in DNNs which can be trained in isolation and reach performance comparable to their dense counterparts. The LTH indicates the great potential of training a sparse network from scratch without sacrificing expressiveness and has recently drawn considerable attention from diverse fields (Chen et al., 2020b;a; 2021g;f;d;c;b; 2022; Ding et al., 2022; Gan et al., 2021) beyond image recognition (Zhang et al., 2021d; Frankle et al., 2020; Redman et al., 2021).
3 METHODOLOGY
3.1 PRELIMINARIES
Adversarial training (AT). As one of the widely adopted defense mechanisms, adversarial training (Madry et al., 2018b) effectively tackles the vulnerability to maliciously crafted adversarial samples. As formulated in Equation 1, AT (specifically PGD-AT) replaces the original empirical risk minimization with a min-max optimization problem:
\min_{\theta} \, \mathbb{E}_{(x,y)\in\mathcal{D}} \, \mathcal{L}\big(f(x;\theta), y\big) \;\Longrightarrow\; \min_{\theta} \, \mathbb{E}_{(x,y)\in\mathcal{D}} \, \max_{\|\delta\|_p \le \epsilon} \mathcal{L}\big(f(x+\delta;\theta), y\big), \qquad (1)
where f(x; θ) is a network with parameters θ. Input data x and its associated label y from the training set D are used to first generate adversarial perturbations δ and then minimize the empirical classification loss L. To meet the imperceptibility requirement, the ℓ_p norm of δ is constrained by a small constant ε. Projected Gradient Descent (PGD), i.e., δ^{t+1} = proj_P[δ^t + α · sgn(∇_x L(f(x + δ^t; θ), y))], is usually utilized to produce the adversarial perturbations with step size α; it works in an iterative manner, leveraging local first-order information about the network (Madry et al., 2018b).
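To make the inner maximization concrete, below is a minimal PyTorch-style sketch of the ℓ∞ PGD attack used in PGD-AT. It is an illustrative reconstruction rather than the authors' released code; the function name pgd_perturb, the random-start initialization, and the default hyperparameters (mirroring the ε = 8/255, α = 2/255, 10-step setting used later in Section 4) are our own choices.

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, epsilon=8/255, alpha=2/255, num_steps=10):
    """Craft an l_inf-bounded PGD perturbation delta for a labeled batch (x, y)."""
    delta = torch.zeros_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    for _ in range(num_steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Gradient-ascent step on the loss, then projection onto the l_inf epsilon-ball.
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon)
        delta = (x + delta).clamp(0, 1) - x        # keep the perturbed image in [0, 1]
        delta = delta.detach().requires_grad_(True)
    return delta.detach()

# One AT step then minimizes the loss on the perturbed batch (right-hand side of Eq. 1):
#   loss = F.cross_entropy(model(x + pgd_perturb(model, x, y)), y); loss.backward()
```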
Sparse subnetworks. Following the routine notation of Frankle & Carbin (2019), f(x; m ⊙ θ) denotes a sparse subnetwork with a binary pruning mask m ∈ {0, 1}^{‖θ‖_0}, where ⊙ is the element-wise product. Intuitively, it is a copy of the dense network f(x; θ) with a portion of its weights fixed to zero.
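In practice, such a subnetwork is maintained by re-applying the mask to the weights, e.g., after every optimizer step, so that pruned entries stay at zero; the short helper below is our own illustrative sketch, not part of the original formulation.

```python
import torch

@torch.no_grad()
def apply_masks(model, masks):
    """Keep f(x; m * theta): zero out pruned weights so they remain inactive during training."""
    for param, mask in zip(model.parameters(), masks):
        param.mul_(mask)
```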
3.2 ROBUST BIRD FOR ADVERSARIAL TRAINING
Introducing Robust Bird. The primary goal of Robust Bird is to find a high-quality sparse subnetwork efficiently. As shown in Figure 2, it locates subnetworks quickly by detecting critical network structures arising in the early training, which later can be robustified with much less computation.
Specifically, for each epoch t during training, Robust Bird creates a sparsity mask mt by “masking out” the p% lowest-magnitude weights; then, Robust Bird tracks the corresponding mask dynamics. The key observation behind Robust Bird is that the sparsity mask mt does not change drastically beyond the early epochs of training (You et al., 2020) because high-level network connectivity patterns are learned during the initial stages (Achille et al., 2019). This indicates that (i) winning tickets emerge at a very early training stage, and (ii) that they can be identified efficiently.
Robust Bird exploits this observation by comparing the Hamming distance between sparsity masks found in consecutive epochs. For each epoch, the last l sparsity masks are stored. If all the stored masks are sufficiently close to each other, then the sparsity masks are not changing drastically over time and network connectivity patterns have emerged; thus, a Robust Bird ticket (RB ticket) is drawn. A detailed algorithmic implementation is provided in Algorithm 1 of Appendix A1. This is the RB ticket used in the second stage of adversarial training.
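A compact sketch of this ticket-finding loop is given below. It is our own illustration of Algorithm 1, not the released implementation: prune_mask performs global magnitude pruning over all parameters (real implementations typically restrict this to conv/linear weights), mask_distance is the normalized Hamming distance, and train_one_epoch stands for one epoch of cheap standard (non-adversarial) training.

```python
import torch
from collections import deque

def prune_mask(params, sparsity):
    """Binary masks zeroing the `sparsity` fraction of lowest-magnitude weights (globally)."""
    flat = torch.cat([p.detach().abs().flatten() for p in params])
    k = max(int(sparsity * flat.numel()), 1)
    threshold = torch.kthvalue(flat, k).values
    return [(p.detach().abs() > threshold).float() for p in params]

def mask_distance(mask_a, mask_b):
    """Normalized Hamming distance between two lists of binary masks."""
    changed = sum((a != b).sum().item() for a, b in zip(mask_a, mask_b))
    return changed / sum(a.numel() for a in mask_a)

def find_rb_ticket(model, train_one_epoch, sparsity=0.8, tau=0.1, window=5, max_epochs=30):
    """Draw an RB ticket once the last `window` mask distances all fall below `tau`."""
    distances, prev_mask = deque(maxlen=window), None
    for _ in range(max_epochs):
        train_one_epoch(model)                              # cheap standard training
        mask = prune_mask(list(model.parameters()), sparsity)
        if prev_mask is not None:
            distances.append(mask_distance(mask, prev_mask))
        prev_mask = mask
        if len(distances) == window and max(distances) < tau:
            return mask                                     # connectivity has stabilized
    return prev_mask
```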
Figure 3: Visualization of loss contours and training trajectories. We compare the dense network, randomly pruned sparse networks, and Flying Bird+ at 90% sparsity, using ResNet-18 robustified on CIFAR-10.
Rationale of Robust Bird. Recent studies (Zhang et al., 2021c) present theoretical analyses showing that identified sparse winning tickets enlarge the convex region near good local minima, leading to improved generalization. Our work shows a related investigation in Figure A9: compared with dense models and randomly pruned subnetworks, RB tickets found by standard training have much flatter loss landscapes, serving as a high-quality starting point for further robustification. This matters because flatness of the loss surface is often believed to indicate good standard generalization. Similarly, as advocated by Wu et al. (2020a); Hein & Andriushchenko (2017), a flatter adversarial loss landscape also effectively shrinks the robust generalization gap. This “flatness preference” of adversarial robustness has been revealed by numerous empirical defense mechanisms, including Hessian/curvature-based regularization (Moosavi-Dezfooli et al., 2019), learned weight and logits smoothening (Chen et al., 2021e), gradient magnitude penalty (Wang & Zhang, 2019), smoothening with random noise (Liu et al., 2018), or entropy regularization (Jagatap et al., 2020).
These observations form the main cornerstone of our proposal and provide possible interpretations for the surprising finding that RB tickets pruned from a non-robust model can be used to obtain well-generalizable robust models in the subsequent robustification. Furthermore, unlike previous costly flatness regularizers (Moosavi-Dezfooli et al., 2019), our methods not only offer a flatter starting point but also obtain substantial computational savings due to the reduced model size.
3.3 FLYING BIRD FOR ADVERSARIAL TRAINING
Introducing Flying Bird(+). Since sparse subnetworks from static pruning cannot recover removed elements, they may be too aggressive to capture the pivotal structural patterns. Thus, we introduce Flying Bird (FB) to conduct a thorough exploration of dynamic sparsity, which allows pruned parameters to be grown back and engage in the next round of training or pruning, as demonstrated in Figure 2. Specifically, FB starts from a sparse subnetwork f(x; m ⊙ θ) with a random binary mask m, and then jointly optimizes model parameters and sparse connectivity. In other words, the subnetwork’s topology is decided “on the fly”, dynamically, based on the current training status. We update Flying Bird’s sparse connectivity every ∆t epochs of adversarial training, via two consecutively applied operations: pruning and growing. In the pruning step, the p% of model weights with the lowest magnitudes are eliminated, while in the growth step the g% of weights with the largest gradients are added back. Note that newly added connections are not activated in the last sparse topology and are initialized to zero, since this yields better performance as indicated in (Evci et al., 2020a; Liu et al., 2021b). Flying Bird keeps the sparsity ratio unchanged throughout training by setting both the pruning ratio p% and the growing ratio g% equal to k%, which decays with a cosine annealing schedule.
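A minimal per-tensor sketch of one such connectivity update is shown below; it is our own simplification (selection is done per layer here, k is the number of connections to swap, and grad is the dense gradient of the adversarial loss w.r.t. the weight tensor), so the released code may differ in details such as global vs. layer-wise selection.

```python
import torch

@torch.no_grad()
def prune_and_grow(weight, grad, mask, k):
    """Drop the k lowest-magnitude active weights; grow k previously inactive weights
    with the largest gradient magnitudes, initializing them to zero."""
    was_inactive = ~mask.bool()
    # Pruning: among currently active weights, remove the k with the smallest magnitudes.
    prune_scores = weight.abs().masked_fill(was_inactive, float("inf")).flatten()
    drop = torch.topk(prune_scores, k, largest=False).indices
    mask.view(-1)[drop] = 0.0
    # Growth: among weights inactive in the previous topology, add the k largest-gradient ones.
    grow_scores = grad.abs().masked_fill(~was_inactive, float("-inf")).flatten()
    grow = torch.topk(grow_scores, k, largest=True).indices
    mask.view(-1)[grow] = 1.0
    weight.view(-1)[grow] = 0.0      # newly grown connections start from zero
    return mask
```

In the full procedure this update is applied every ∆t epochs, the swap ratio k% decays under cosine annealing, and the mask is re-applied between updates so that pruned weights stay at zero.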
We further propose Flying Bird+, an enhanced variant of FB that adaptively adjusts the sparsity and learns the right parameterization level “on demand” during training, as shown in Figure 2. Specifically, we record the robust generalization gap and the robust validation loss at each training epoch. An increasing generalization gap in the later training stage indicates a risk of overfitting, while a plateauing validation loss implies underfitting. We then analyze the fitting status according to the upward/downward trend of these measurements. If most epochs (e.g., more than 3 out of the past 5 epochs in our case) show an enlarged robust generalization gap, we raise the pruning ratio p% to further trim down the network capacity. Similarly, if the majority of epochs present an unchanged validation loss, we increase the growing ratio g% to enrich the subnetwork capacity. Detailed procedures are summarized in Algorithm 2 of Appendix A1.
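The adaptive rule can be sketched as follows; the helper names and the threshold value are our own rendering of the description above (and are illustrative), while δp = 0.4% and δg = 0.05% follow the defaults reported in Appendix A2.3.

```python
def increasing_frequency(history):
    """Fraction of consecutive-epoch pairs in `history` whose value increased."""
    if len(history) < 2:
        return 0.0
    ups = sum(later > earlier for earlier, later in zip(history, history[1:]))
    return ups / (len(history) - 1)

def adjust_ratios(k, gap_history, val_loss_history, delta_p=0.004, delta_g=0.0005, thresh=0.6):
    """Flying Bird+ adjustment: prune more when the robust generalization gap keeps growing
    (overfitting risk); grow more when the robust validation loss keeps rising (underfitting)."""
    p = (1 + delta_p) * k if increasing_frequency(gap_history) >= thresh else k
    g = (1 + delta_g) * k if increasing_frequency(val_loss_history) >= thresh else k
    return p, g
```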
Rationale of Flying Bird(+). As demonstrated in Evci et al. (2020a), allowing new connections to grow yields improved flexibility in navigating the loss surfaces, which creates the opportunity to
escape bad local minima and search for the optimal sparse connectivity (Liu et al., 2021b). Flying Bird follows a similar design philosophy: it excludes the least important connections (Han et al., 2015a) while activating the new connections with the highest potential to decrease the training loss fastest. Recent works (Wu et al., 2020c; Liu et al., 2019) have also found that enabling network (re)growth can turn a poor local minimum into a saddle point that facilitates further loss decrease. Flying Bird+ empowers this flexibility further through adaptive sparsity-level control.
The flatness of the loss geometry provides another view to dissect the robust generalization gain (Chen et al., 2021e; Stutz et al., 2021; Singla et al., 2021). Figure 3 compares the loss landscapes and training trajectories of the dense network, randomly pruned subnetworks, and Flying Bird+ robustified on CIFAR-10. We observe that Flying Bird+ converges to a wider loss valley with improved flatness, which usually suggests superior robust generalization (Wu et al., 2020a; Hein & Andriushchenko, 2017). Last but not least, our approaches also significantly trim down both the training memory overhead and the computational complexity, enjoying the extra bonus of efficient training and inference.
4 EXPERIMENT RESULTS
Datasets and architectures. Our experiments consider two popular architectures, ResNet-18 (He et al., 2016) and VGG-16 (Simonyan & Zisserman, 2014), on three representative datasets: CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009) and Tiny-ImageNet (Deng et al., 2009). We randomly split one-tenth of the training samples as the validation set, and performance is reported on the official test set.
Training and evaluation details. We implement our experiments with the original PGD-based adversarial training (Madry et al., 2018b), in which we train the network against an ℓ∞ adversary with a maximum perturbation of 8/255. We use 10-step PGD for training and 20-step PGD for evaluation, with a step size α of 2/255, following Madry et al. (2018b); Chen et al. (2021e). In addition, we also use Auto-Attack (Croce & Hein, 2020) and the CW attack (Carlini & Wagner, 2017) for a more rigorous evaluation. More details are provided in Appendix A2. For each experiment, we train the network for 200 epochs with an SGD optimizer whose momentum and weight decay are set to 0.9 and 5 × 10−4, respectively. The learning rate starts at 0.1 and decays by a factor of ten at epochs 100 and 150, and the batch size is 128, following Rice et al. (2020).
For Robust Bird, the mask-distance threshold τ is set to 0.1. For Flying Bird(+), we calculate the layer-wise sparsity by Ideal Gas Quotas (IGQ) (Vysogorets & Kempe, 2021) and then apply random pruning to initialize the sparse masks. FB updates the sparse connectivity every 2000 iterations of AT, with an update ratio k that starts from 50% and decays by cosine annealing. More details are provided in Appendix A2. Hyperparameters are either tuned by grid search or follow Liu et al. (2021b).
Evaluation metrics. In general, we care about both the accuracy and the efficiency of the obtained sparse networks. To assess accuracy, we consider Robust Testing Accuracy (RA) and Standard Testing Accuracy (SA), computed on the perturbed and the original test sets respectively, together with the Robust Generalization Gap (RGG), i.e., the gap in RA between the train and test sets. Meanwhile, we report the floating point operations (FLOPs) of the whole training process and of single-image inference to measure efficiency.
4.1 ROBUST BIRD IS A GOOD BIRD
In this section, we evaluate the effectiveness of static sparsity from diverse representative pruning approaches, including: (i) Random Pruning (RP), which randomly eliminates model parameters to the desired sparsity; (ii) One-shot Magnitude Pruning (OMP), which globally removes a certain ratio of the lowest-magnitude weights; (iii) Pruning-at-Initialization algorithms: three advanced methods, i.e., SNIP (Lee et al., 2019), GraSP (Wang et al., 2020) and SynFlow (Tanaka et al., 2020), which identify subnetworks at initialization with respect to certain gradient-flow criteria; (iv) Ideal Gas Quotas (IGQ) (Vysogorets & Kempe, 2021), which adopts random pruning based on pre-calculated layer-wise sparsity drawing intuitive analogies from physics; (v) Robust Bird (RB), which can be regarded as an early-stopped OMP; and (vi) Small Dense, an important sanity check that considers smaller dense networks with the same parameter counts as the sparse networks. Comprehensive results of these subnetworks at 80% and 90% sparsity are reported in Table 1, where the chosen sparsity levels follow common choices (Evci et al., 2020a; Liu et al., 2021b).
As shown in Table 1, we first observe poor robust generalization with a 38.82% RA gap and robust overfitting with a 7.49% RA degradation when training the dense network (Baseline). Fortunately, consistent with our claims, injecting appropriate sparsity effectively tackles the issue. For instance, RB greatly shrinks the RGG by 15.45%/22.20% at 80%/90% sparsity, while also mitigating robust overfitting by 2.53% ∼ 4.08%. Furthermore, comparing all static pruning methods, we find that (1) Small Dense and RP behave the worst, which suggests that the identified sparse topologies play important roles rather than the reduced network capacity alone; (2) RB shows clear advantages over OMP in all measurements, especially the 78.32% ∼ 84.80% training FLOPs savings, validating our RB proposal that a few epochs of standard training are enough to learn a high-quality sparse structure for further robustification, so there is no need to complete the full training in the ticket-finding stage as in traditional OMP; (3) SynFlow and IGQ achieve the best RA and SA, while RB obtains the superior robust generalization among static pruning approaches.
Finally, we explore the influence of training regimes during the RB ticket finding on CIFAR-100 with ResNet-18. Table A6 demonstrates that RB tickets perform best when found with the cheapest standard training. Specifically, at 90% and 95% sparsity, SGD RB tickets outperform both Fast AT (Wong et al., 2020) and PGD-10 RB tickets with up to 1.27% higher RA and 1.86% narrower RGG. Figure A7 offers a possible explanation for this phenomenon: the SGD training scheme more quickly develops high-level network connections, during the early epochs of training (Achille et al., 2019). As a result, RB Tickets pruned from the model trained with SGD achieve superior quality.
4.2 FLYING BIRD IS A BETTER BIRD
In this section, we discuss the advantages of dynamic sparsity and show that our Flying Bird(+) is a superior bird. Table 1 examines the effectiveness of FB(+) on CIFAR-10 with ResNet-18, and several consistent observations can be drawn. (1) FB(+) achieves a 9.92% ∼ 23.66% RGG reduction and a 2.24% ∼ 5.88% decrease in robust overfitting compared with the dense network, and FB+ at 80% sparsity even pushes the RA 0.60% higher. (2) Although the smaller dense network shows leading performance w.r.t. improving robust generalization, its robustness is largely sacrificed, with up to 4.29% RA degradation, suggesting that only reducing a model’s parameter count is insufficient to keep satisfactory SA/RA. (3) FB and FB+ achieve the best RA for both the best and final checkpoints across all methods, including RB. (4) Setting aside Small Dense and Random Pruning due to their poor robustness, FB+ reaches the most impressive robust generalization (rank #1 or #2) with the least training and inference costs. Precisely, FB+ obtains 84.46% ∼ 91.37% training FLOPs and 84.46% ∼ 93.36% inference FLOPs savings, i.e., Flying Bird+ is super lightweight.
Superior performance across datasets and architectures. We further evaluate the performance of FB(+) across various datasets (CIFAR-10, CIFAR-100 and Tiny-ImageNet) and architectures (ResNet-18 and VGG-16). Tables 2 and 3 show that both the static and the dynamic sparsity of our proposals serve as effective remedies for improving robust generalization and mitigating robust overfitting, with 4.43% ∼ 15.45%, 14.99% ∼ 34.44% and 21.62% ∼ 23.60% RGG reduction across different architectures on CIFAR-10, CIFAR-100 and Tiny-ImageNet, respectively. Moreover, both RB and FB(+) gain significant efficiency, with up to 87.83% training and inference FLOPs savings.
Superior performance across improved attacks. Additionally, we verify both RB and FB(+) under improved attacks, i.e., Auto-Attack (Croce & Hein, 2020) and the CW attack (Carlini & Wagner, 2017). As shown in Table A8, our approaches shrink the robust generalization gap by up to 30.76% on CIFAR-10/100 and largely mitigate robust overfitting. This evidence shows that our proposal’s effectiveness is sustained across diverse attacks.
Combining FB+ with existing state-of-the-art (SOTA) mitigation. Previous works (Chen et al., 2021e; Zhang et al., 2021a; Wu et al., 2020b) point out that smoothening regularizations (e.g., KD (Hinton et al., 2015) and SWA (Izmailov et al., 2018)) help robust generalization and lead to SOTA robust accuracies. We combine them with our FB+ and collect the robust accuracy on CIFAR-10 with ResNet-18 in Figure 4. The extra robustness gains from FB+ imply that they make complementary contributions.
Excluding obfuscated gradients. A common “counterfeit” of robustness improvements is less effective adversarial examples resulting from obfuscated gradients (Athalye et al., 2018). Table A7 demonstrates that the enhanced robustness is maintained under unseen transfer attacks, which excludes the possibility of gradient masking. More details are provided in Section A3.
4.3 ABLATION STUDY AND VISUALIZATION
Different sparse initialization and update frequency. Sparse initialization and update frequency are two major components of dynamic sparsity exploration (Evci et al., 2020a); we conduct thorough ablation studies on them in Tables 4 and 5. We find that the performance of Flying Bird+ is more sensitive to the sparse initialization; using SNIP to produce the initial layer-wise sparsity and updating the connections every 2000 iterations serves as the superior configuration for FB+.
Table 4: Ablation of different sparse initializations in Flying Bird+. Subnetworks at 80% initial sparsity are chosen on CIFAR-10 with ResNet-18.
Table 5: Ablation of different update frequency in Flying Bird+. Subnetworks at 80% initial sparsity are chosen on CIFAR-10 with ResNet-18.
Final checkpoint loss landscapes. From visualizations in Figure 5, FB and FB+ converge to much flatter loss valleys, which evidences their effectiveness in closing robust generalization gaps.
Attention and saliency maps. To visually inspect the benefits of our proposal, we provide attention and saliency maps generated by Grad-CAM (Selvaraju et al., 2017) and the tools of Smilkov et al. (2017). Comparing the dense model to our “talented birds” (e.g., FB+), Figure 6 shows that our approaches have enhanced concentration on the main objects and are capable of capturing more local feature information, aligning better with human perception.
(Figure 6 columns compare the adversarial samples against Dense, Random Pruning, SNIP, Robust Bird, Flying Bird, and Flying Bird+, each shown as an attention heatmap and a saliency map.)
Figure 6: (Left) Visualization of attention heatmaps on adversarial images based on Grad-CAM (Selvaraju et al., 2017). (Right) Saliency map visualization on adversarial samples (Smilkov et al., 2017).
5 CONCLUSION
We show that the adversarial training of dense DNNs incurs a severe robust generalization gap, which can be effectively and efficiently resolved by injecting appropriate sparsity. Our proposed Robust Bird and Flying Bird(+), with static and dynamic sparsity respectively, significantly mitigate the robust generalization gap while retaining competitive standard/robust accuracy, besides substantially reducing computation. Our future work will investigate channel- and block-wise sparse structures.
A1 MORE TECHNIQUE DETAILS
Algorithms of Robust Bird and Flying Bird(+). Here we present the detailed procedures to identify Robust Bird and Flying Bird(+), as summarized in Algorithms 1 and 2. Note that for the increasing frequency on Lines 10 and 11 of Algorithm 2, we compare the measurements stored in the queue between two consecutive epochs and calculate the frequency of increases.
Algorithm 1: Finding a Robust Bird
Input: f(x; θ_0) with initialization θ_0, target sparsity s%, FIFO queue Q with length l, threshold τ
Output: Robust bird f(x; m_{t*} ⊙ θ_T)
1  while t < t_max do
2      Update network parameters θ_t ← θ_{t−1} via standard training
3      Apply static pruning towards target sparsity s% and obtain the sparse mask m_t
4      Calculate the Hamming distance δ_H(m_t, m_{t−1}) and append the result to Q
5      t ← t + 1
6      if max(Q) < τ then
7          t* ← t
8          Rewind f(x; m_{t*} ⊙ θ_{t*}) → f(x; m_{t*} ⊙ θ_0)
9          Train f(x; m_{t*} ⊙ θ_0) via PGD-AT for T epochs
10         return f(x; m_{t*} ⊙ θ_T)
11     end if
12 end while
Algorithm 2: Finding a Flying Bird(+)
Input: Initialization parameters θ_0, sparse mask m of sparsity s%, FIFO queues Q_p and Q_g with length l, pruning and growth increasing ratios δ_p and δ_g, update threshold, optimization interval ∆t, parameter update ratio k%, ratio update starting point t_start
Output: Flying bird(+) f(x; m ⊙ θ_T)
1  while t < T do
2      Update network parameters θ_t ← θ_{t−1} via PGD-AT
3      # Record training statistics
4      Add the robust generalization gap between train and validation sets to Q_p
5      Add the robust validation loss to Q_g
6      # Update sparse mask m
7      if (t mod ∆t) == 0 then
8          |--- Optional for Flying Bird+ ---|
9          # Update pruning and growth ratios p%, g%
10         if t > t_start and the increasing frequency of Q_p ≥ the update threshold: p = (1 + δ_p) × k, else p = k
11         if t > t_start and the increasing frequency of Q_g ≥ the update threshold: g = (1 + δ_g) × k, else g = k
12         |--- Optional for Flying Bird+ ---|
13         Prune the p% of parameters with the smallest weight magnitudes
14         Grow the g% of parameters with the largest gradients
15         Update the sparse mask m accordingly
16     end if
17 end while
A2 MORE IMPLEMENTATION DETAILS
A2.1 OTHER COMMON DETAILS
We select two checkpoints during training: best, which has the best RA on the validation set, and final, i.e., the last checkpoint. We report both RA and SA of these two checkpoints on the test sets. Apart from the robust generalization gap, we also quantify the extent of robust overfitting numerically by the difference in RA between best and final. Furthermore, we calculate the FLOPs
at both the training and inference stages to evaluate the costs of obtaining and exploiting the subnetworks, respectively, in which we approximate the FLOPs of back-propagation to be twice those of forward propagation (Yang et al., 2020).
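As a rough illustration of this accounting (our own first-order simplification; the numbers reported in the tables depend on per-layer shapes and per-layer sparsity):

```python
def flops_estimate(dense_forward_flops, weight_sparsity, forward_passes, backward_passes):
    """First-order FLOPs estimate for a sparse subnetwork: scale the dense per-pass cost by
    the fraction of kept weights and count each backward pass as twice a forward pass."""
    per_forward = dense_forward_flops * (1.0 - weight_sparsity)
    return forward_passes * per_forward + backward_passes * 2.0 * per_forward
```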
A2.2 MORE DETAILS ABOUT ROBUST BIRD
For the experiments on RB ticket finding, we comprehensively study three training regimes: standard training with stochastic gradient descent (SGD), adversarial training with PGD-10 AT (Madry et al., 2018b), and Fast AT (Wong et al., 2020). Following Pang et al. (2021), we train the network with an SGD optimizer with 0.9 momentum and 5 × 10−4 weight decay, and a batch size of 128. For the PGD-10 AT experiments, we adopt the ℓ∞ PGD attack with a maximum perturbation ε = 8/255 and a step size α = 2/255, and the learning rate starts at 0.1 and decays by a factor of ten at epochs 50 and 150. For Fast AT, we use a cyclic schedule with a maximum learning rate of 0.2.
A2.3 MORE DETAILS ABOUT FLYING BIRD(+)
For the experiments with Flying Bird+, the increasing ratios of pruning and growth, δp and δg, are kept at their defaults of 0.4% and 0.05%, respectively.
A3 MORE EXPERIMENT RESULTS
A3.1 MORE RESULTS ABOUT ROBUST BIRD
Accuracy during RB ticket finding. Figure A7 shows the curve of standard test accuracy during the training phase of RB ticket finding. We observe that the SGD training scheme develops high-level network connections much faster than the others, which provides a possible explanation for the superior quality of RB tickets from SGD.
Figure A7: Standard accuracy (SA) of PGD-10, SGD, and Fast AT during the RB ticket finding phase.
Mask Similarity Visualization. Figure A8 visualizes the dynamic similarity scores for each epoch among masks found via SGD, Fast AT, and PGD-10. Specifically, the similarity scores (You et al., 2020) reflect the Hamming distance between a pair of masks. We notice that masks found by SGD and PGD-10 share more common structures. A possible reason is that Fast AT usually adopts a cyclic learning rate schedule, while SGD and PGD use a multi-step decay schedule.
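In terms of the mask_distance helper sketched for Robust Bird in Section 3.2, such a similarity score can be computed as follows (our own illustration):

```python
def mask_similarity(mask_a, mask_b):
    """Similarity between two binary masks: 1 minus their normalized Hamming distance."""
    return 1.0 - mask_distance(mask_a, mask_b)
```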
Different training regimes for finding RB tickets. We denote the subnetworks identified by standard training with SGD, adversarial training with Fast AT (Wong et al., 2020), and adversarial training with PGD-10 AT as SGD tickets, Fast AT tickets, and PGD-10 tickets, respectively.
Figure A8: Similarity scores by epoch among masks found via Fast AT, SGD, and PGD-10. A brighter color denotes higher similarity.
Table A6: Comparison results of different training regimes for RB ticket finding on CIFAR-100 with ResNet-18. The subnetworks at 90% and 95% sparsity are selected here.

Sparsity (%) | Settings        | Robust Accuracy (Best / Final / Diff.) | Standard Accuracy (Best / Final / Diff.) | Robust Generalization
0            | Baseline        | 26.93 / 19.62 / 7.31                   | 52.03 / 53.91 / −1.88                    | 54.56
90           | SGD tickets     | 25.83 / 23.40 / 2.43                   | 49.35 / 53.51 / −4.16                    | 18.37 (↓36.19)
90           | Fast AT tickets | 25.15 / 22.88 / 2.27                   | 51.00 / 51.75 / −0.75                    | 20.23 (↓34.33)
90           | PGD-10 tickets  | 25.34 / 22.96 / 2.38                   | 52.01 / 53.27 / −1.26                    | 20.03 (↓34.53)
95           | SGD tickets     | 24.77 / 24.12 / 0.65                   | 49.88 / 50.89 / −1.01                    | 9.18 (↓45.38)
95           | Fast AT tickets | 23.50 / 22.46 / 1.04                   | 41.67 / 43.19 / −1.52                    | 9.53 (↓45.03)
95           | PGD-10 tickets  | 24.44 / 23.77 / 0.67                   | 49.30 / 50.65 / −1.35                    | 9.86 (↓44.70)
Table A6 demonstrates that the SGD tickets have the best performance.
Loss landscape visualization. We visualize the loss landscapes of the dense network, a randomly pruned subnetwork, and Robust Bird tickets at 30% sparsity in Figure A9. Compared with the dense model and the randomly pruned subnetwork, RB tickets found by standard training show much flatter loss landscapes, which provides a high-quality starting point for further robustification.
A3.2 MORE RESULTS ABOUT FLYING BIRD(+)
Excluding Obfuscated Gradients. To exclude the possibility of gradient masking, we show that our methods maintain improved robustness under unseen transfer attacks. As shown in Table A7, the left part reports the test accuracy on perturbed test samples from an unseen robust model, and the right part shows the transfer test performance on an unseen robust model (here we use a separately robustified ResNet-50 with PGD-10 on CIFAR-100).
Performance under Improved Attacks. We report the performance of both RB and FB(+) under Auto-Attack (Croce & Hein, 2020) and the CW attack (Carlini & Wagner, 2017). For Auto-Attack, we keep the default setting with ε = 8/255. For the CW attack, we perform one search step on c with an initial constant of 0.1, using 100 iterations per search step with a learning rate of 0.01. As shown in Table A8, both RB and FB(+) outperform the dense counterpart in terms of robust generalization, and FB+ achieves the best performance.
More Datasets and Architectures. We report more results of different sparsification methods across diverse datasets and architectures in Tables A9, A10, A11 and A12, from which we observe that our approaches are capable of improving robust generalization and mitigating robust overfitting.
Figure A9: Loss landscapes visualizations (Engstrom et al., 2018; Chen et al., 2021e) of the dense model (unpruned), random pruned subnetwork at 30% sparsity, and Robust Bird (RB) tickets at 30% sparsity found by the standard training. The ResNet-18 backbone with the same original initialization on CIFAR-10 is adopted here. Results demonstrate that RB tickets offer a smoother and flatter starting point for further robustification in the second stage.
Table A7: Transfer attack performance from/on an unseen non-robust model, where the attacks are generated by/applied to the non-robust model. The robust generalization gap is also calculated based on transfer attack accuracies between train and test sets. We use ResNet-18 on CIFAR-10/100 and subnetworks at 80% sparsity.

Dataset   | Settings     | Transfer Attack from Unseen Model (Best / Final / Diff. / Robust Gen.) | Transfer Attack on Unseen Model (Best / Final / Diff. / Robust Gen.)
CIFAR-10  | Baseline     | 79.68 / 82.03 / −2.35 / 16.43 | 70.48 / 79.85 / −9.37 / 11.84
CIFAR-10  | Robust Bird  | 77.33 / 81.04 / −3.71 / 12.18 | 73.17 / 77.03 / −3.86 / 11.49
CIFAR-10  | Flying Bird  | 79.13 / 82.17 / −3.04 / 13.49 | 71.59 / 77.19 / −5.60 / 11.88
CIFAR-10  | Flying Bird+ | 79.47 / 81.90 / −2.43 / 11.85 | 70.43 / 76.00 / −5.57 / 11.42
CIFAR-100 | Baseline     | 50.51 / 52.15 / −1.64 / 45.91 | 48.67 / 54.48 / −5.81 / 36.98
CIFAR-100 | Robust Bird  | 47.25 / 51.74 / −4.49 / 28.80 | 47.47 / 50.90 / −3.43 / 35.82
CIFAR-100 | Flying Bird  | 51.80 / 53.52 / −1.72 / 31.98 | 45.56 / 50.61 / −5.05 / 35.39
CIFAR-100 | Flying Bird+ | 50.72 / 53.56 / −2.84 / 25.09 | 47.04 / 49.43 / −2.39 / 35.09
Distributions of Adopted Sparse Initialization. We report the layer-wise sparsity of different initial sparse masks. As shown in Figure A10, we observe that subnetworks generally perform better when the top layers retain most of their parameters.
Training Curve of Flying Bird+. Figure A11 shows the training curve of Flying Bird+, in which the red dotted lines mark the times when the pruning ratio is increased and the green dotted lines when the growth ratio is increased. The detailed training curve demonstrates the flexibility of Flying Bird+ in dynamically adjusting the sparsity level.
A4 EXTRA RESULTS AND DISCUSSION
We sincerely appreciate all anonymous reviewers’ and area chairs’ constructive discussions for improving this paper. Extra results and discussions are presented in this section.
Table A8: Evaluation under improved attacks (i.e., Auto-Attack and CW-Attack) on CIFAR-10/100 with ResNet-18 at 80% sparsity. The robust generalization gap is computed under improved attacks.

Dataset   | Settings     | Auto-Attack (Best / Final / Diff. / Robust Gen.) | CW-Attack (Best / Final / Diff. / Robust Gen.)
CIFAR-10  | Baseline     | 47.41 / 41.59 / 5.82 / 35.30          | 75.76 / 66.13 / 9.63 / 30.39
CIFAR-10  | Robust Bird  | 45.90 / 42.45 / 3.45 / 21.58 (↓13.72) | 73.95 / 73.52 / 0.43 / 17.67 (↓12.72)
CIFAR-10  | Flying Bird  | 47.55 / 43.57 / 3.98 / 26.55 (↓8.75)  | 75.30 / 72.08 / 3.22 / 21.77 (↓8.62)
CIFAR-10  | Flying Bird+ | 47.06 / 44.09 / 3.17 / 21.73 (↓13.57) | 76.00 / 73.83 / 2.17 / 17.77 (↓12.62)
CIFAR-100 | Baseline     | 23.16 / 17.68 / 5.48 / 49.73          | 45.83 / 36.21 / 9.62 / 57.52
CIFAR-100 | Robust Bird  | 21.29 / 18.00 / 3.29 / 21.72 (↓28.01) | 43.30 / 42.39 / 0.91 / 30.82 (↓26.70)
CIFAR-100 | Flying Bird  | 22.74 / 19.44 / 3.30 / 25.18 (↓24.55) | 46.23 / 42.36 / 3.87 / 35.50 (↓22.02)
CIFAR-100 | Flying Bird+ | 22.90 / 20.31 / 2.59 / 19.05 (↓30.68) | 45.86 / 43.90 / 1.96 / 26.76 (↓30.76)
Table A9: More results of different sparsification methods on CIFAR-10 with ResNet-18.

Sparsity (%) | Settings       | Robust Accuracy (Best / Final / Diff.) | Standard Accuracy (Best / Final / Diff.) | Robust Generalization
0            | Baseline       | 51.10 / 43.61 / 7.49 | 81.15 / 83.38 / −2.23 | 38.82
95           | Small Dense    | 45.99 / 44.55 / 1.44 | 74.26 / 75.64 / −1.38 | 7.87 (↓30.95)
95           | Random Pruning | 45.64 / 44.18 / 1.46 | 75.20 / 75.20 / 0.00  | 7.96 (↓30.86)
95           | OMP            | 47.08 / 46.23 / 0.85 | 78.77 / 79.36 / −0.59 | 12.01 (↓26.81)
95           | SNIP           | 48.18 / 46.72 / 1.46 | 78.55 / 79.21 / −0.66 | 9.58 (↓29.24)
95           | GraSP          | 48.58 / 47.15 / 1.43 | 78.95 / 79.44 / −0.49 | 10.37 (↓28.45)
95           | SynFlow        | 48.93 / 48.22 / 0.71 | 78.70 / 78.90 / −0.20 | 8.25 (↓30.57)
95           | IGQ            | 48.82 / 47.56 / 1.26 | 79.44 / 79.76 / −0.32 | 9.33 (↓29.49)
95           | Robust Bird    | 47.53 / 46.48 / 1.05 | 78.33 / 78.78 / −0.45 | 9.20 (↓29.62)
95           | Flying Bird    | 49.62 / 48.46 / 1.16 | 78.12 / 81.43 / −3.31 | 13.32 (↓25.52)
95           | Flying Bird+   | 49.37 / 48.84 / 0.53 | 80.33 / 80.28 / 0.05  | 9.27 (↓29.55)
Figure A10: Layer-wise sparsity of different initial sparse masks (Uniform, GraSP, SNIP, SynFlow, IGQ, and ERK) with ResNet-18.
A4.1 MORE RESULTS OF DIFFERENT SPARSITY
We report more results of subnetworks with 40%/60% sparsity on CIFAR-10/100 with ResNet-18 and VGG-16. As shown in Tables A13, A14, A15 and A16, our Flying Bird(+) achieves consistent improvements over the baseline unpruned networks, with 2.45% ∼ 19.81% narrower robust generalization gaps and comparable RA and SA performance.
A4.2 MORE RESULTS ON WIDERESNET
We further evaluate our Flying Bird(+) with WideResNet-34-10 on CIFAR-10 and report the results in Table A17. We observe that, compared with the dense network, our methods significantly shrink the robust generalization gap by up to 13.14% while maintaining comparable RA/SA performance.
Table A10: More results of different sparsification methods on CIFAR-10 with VGG-16.

Sparsity (%) | Settings       | Robust Accuracy (Best / Final / Diff.) | Standard Accuracy (Best / Final / Diff.) | Robust Generalization
0            | Baseline       | 48.33 / 42.73 / 5.60 | 76.84 / 79.73 / −2.89 | 28.00
80           | Random Pruning | 46.14 / 40.33 / 5.81 | 74.42 / 76.68 / −2.26 | 21.01 (↓6.99)
80           | OMP            | 47.90 / 43.19 / 4.71 | 76.60 / 80.02 / −3.42 | 24.97 (↓3.03)
80           | SNIP           | 48.03 / 43.17 / 4.86 | 76.68 / 80.08 / −3.40 | 24.71 (↓3.29)
80           | GraSP          | 47.91 / 42.34 / 5.57 | 75.74 / 78.87 / −3.13 | 23.65 (↓4.35)
80           | SynFlow        | 48.47 / 45.32 / 3.15 | 77.62 / 79.09 / −1.47 | 20.17 (↓7.83)
80           | IGQ            | 48.57 / 44.25 / 4.32 | 77.51 / 80.01 / −2.50 | 22.79 (↓5.21)
80           | Robust Bird    | 47.69 / 41.66 / 6.03 | 75.32 / 78.58 / −3.26 | 23.57 (↓4.43)
80           | Flying Bird    | 48.43 / 44.65 / 3.78 | 77.53 / 79.72 / −2.19 | 21.01 (↓6.99)
80           | Flying Bird+   | 48.25 / 45.24 / 3.01 | 77.48 / 79.55 / −2.07 | 17.75 (↓10.25)
90           | Random Pruning | 44.33 / 40.33 / 4.00 | 71.27 / 74.46 / −3.19 | 15.48 (↓12.52)
90           | OMP            | 47.84 / 43.34 / 4.50 | 75.60 / 79.10 / −3.50 | 18.29 (↓9.71)
90           | SNIP           | 47.76 / 44.27 / 3.49 | 75.92 / 79.62 / −3.70 | 17.85 (↓10.15)
90           | GraSP          | 45.96 / 42.12 / 3.84 | 75.19 / 77.03 / −1.84 | 15.04 (↓12.96)
90           | SynFlow        | 47.54 / 45.79 / 1.75 | 78.43 / 78.70 / −0.27 | 14.40 (↓13.60)
90           | IGQ            | 47.79 / 45.12 / 2.67 | 74.87 / 79.19 / −4.32 | 16.06 (↓11.94)
90           | Robust Bird    | 47.09 / 44.13 / 2.96 | 75.53 / 78.36 / −2.83 | 16.57 (↓11.43)
90           | Flying Bird    | 48.45 / 45.55 / 2.90 | 75.82 / 79.21 / −3.39 | 16.56 (↓11.44)
90           | Flying Bird+   | 48.39 / 46.26 / 2.13 | 78.73 / 79.12 / −0.39 | 12.47 (↓15.53)
A4.3 COMPARISON WITH EFFICIENT ADVERSARIAL TRAINING METHODS
To elaborate more on training efficiency, we compare our methods with two efficient training methods. Shafahi et al. (2019) proposed Free Adversarial Training, which improves training efficiency by reusing gradient information; it is orthogonal to our approaches and can easily be combined with them to pursue further efficiency by replacing the PGD-10 training with Free AT.
Additionally, Li et al. (2020) use magnitude pruning to locate sparse structures, which is similar to the OMP reported in Table 1, except that they use a smaller learning rate. Our methods achieve better performance and efficiency than OMP. Specifically, at 80% sparsity, our Flying Bird+ reaches a 4.49% narrower robust generalization gap and 1.54% higher RA while requiring 87.58% fewer training FLOPs. Also, our methods can easily be combined with Fast AT for further training efficiency.
A4.4 COMPARISON WITH OTHER PRUNING AND SPARSE TRAINING METHODS
Compared with the recent work of Özdenizci & Legenstein (2021), our Flying Bird(+) differs in both goals and methodologies. First, Özdenizci & Legenstein (2021) pursue superior adversarial robust test accuracy for sparsely connected networks, whereas we aim to investigate the relationship between sparsity and robust generalization and demonstrate that introducing appropriate sparsity (e.g., LTH-based static sparsity or dynamic sparsity) into adversarial training
substantially alleviates the robust generalization gap and maintains comparable or even better standard/robust accuracies. Second, Özdenizci & Legenstein (2021) sample network connectivity from a learned posterior to form a sparse subnetwork. In contrast, our Flying Bird first removes the parameters with the lowest magnitudes, which keeps the first-order Taylor approximation of the loss change small and thus limits the impact on the network's output (Evci et al., 2020a), and then allows new connections with the largest gradients to grow in order to reduce the loss quickly (Evci et al., 2020a). Furthermore, we propose an enhanced variant of Flying Bird, i.e., Flying Bird+, which not only learns the sparse topologies but is also capable of adaptively adjusting the network capacity to determine the right parameterization level “on demand” during training, whereas Özdenizci & Legenstein (2021) stick to a fixed parameter budget.
Another work, HYDRA (Sehwag et al., 2020), also has several differences from our Robust Bird. Specifically, HYDRA starts from a robustly pre-trained dense network, which requires at least hundreds of epochs of adversarial training, whereas our Robust Bird's pre-training only needs a few epochs of standard training; therefore, Sehwag et al. (2020) incur significantly higher computational costs than ours. Moreover, Sehwag et al. (2020) adopt TRADES (Zhang et al., 2019) for adversarial training, which also requires auxiliary inputs of clean images, while our methods follow classical adversarial training (Madry et al., 2018b) and only take adversarially perturbed samples as input. In addition, for the CIFAR-10 experiments, Sehwag et al. (2020) use 500k additional pseudo-labeled images from the Tiny-ImageNet dataset with a robust semi-supervised training approach, whereas our methods and experiments do not leverage any external data.
Furthermore, a concurrent work (Fu et al., 2021) demonstrates that there exist subnetworks with inborn robustness: such randomly initialized networks have matching or even superior robust accuracy compared to adversarially trained networks with similar parameter counts. It would be interesting to utilize this finding for further improvement of robust generalization, and we will investigate it in future work.
| 1. What is the focus of the paper regarding adversarial training?
2. What are the strengths of the proposed approach, particularly in reducing robust generalization gap and overfitting?
3. Do you have any questions or concerns about the method's effectiveness when applied to different scenarios?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any suggestions for improving the paper, such as providing theoretical insights or discussing practical hardware benefits? | Summary Of The Paper
Review | Summary Of The Paper
Recent studies demonstrate adversarial training suffers from severe overfitting besides getting very expensive. This paper proposes to handle the two problems organically altogether, with the tool of sparse training.
The authors show that injecting appropriate sparsity forms in training could substantially shrink the robust generalization gap and alleviate the robust overfitting, meanwhile significantly saving training and inference FLOPs.
Review
It is known that good sparsity can help prevent overfitting as well as reduce inference costs. The main barrier of making sparsity practical for (adversarial) training is the good sparsity pattern itself can be expensive to retrieve. This paper lays out a series of options to mitigate that barrier for adversarial training (AT).
The authors first demonstrate that good sparse subnetworks can be identified at the very early AT training stage with one-shot pruning, and the remaining stage can then focus on training that very compact subnetwork. While similar observations were already drawn by You et al. 2020 in standard training, notable progress made by the authors is that the mask can be located from just cheap standard training, and it still incurs almost no performance loss when re-used for adversarial re-training. I find it quite intriguing and meaningful.
The authors continue to investigate the role of sparsity when the sparse connections of subnetworks are on the fly. It allows more flexibility for the network to lower the training loss more, by tweaking not only weights but also topology. The authors further relaxed it to dynamically adjust the network capacity/sparsity to pursue superior robust performance, at a minor sacrifice of training efficiency.
The authors reported a variety of experiments using two backbones and three datasets. Their proposed methods are found to reduce robust generalization gap and overfitting by 34.44% and 4.02%, with comparable robust/standard accuracy boosts and 87.83%/87.82% training/inference FLOPs savings on CIFAR-100 with ResNet18; and similar competitive performance on other settings. Their proposed approaches can be combined with existing regularizers to yield new state-of-the-art results. The authors also carefully examined their robustness gains against adaptive and transferred attacks.
Here I have only two nitpicks. First, it would be better if the authors can give some theoretical insights why their strategy works, even with simplified assumptions. Currently the rationales offered are a bit vague. Second, while the FLOPS reduction is impressive, no actual hardware measurements were reported like in the Early Bird paper. Discussing the practical hardware benefits of the proposed strategy would enhance this work’s impact.
=================== post rebuttal ====================
I have read the rebuttal and the rebuttal addresses my concerns. |
ICLR | Title
Sparsity Winning Twice: Better Robust Generalization from More Efficient Training
Abstract
Recent studies demonstrate that deep networks, even robustified by the state-of-the-art adversarial training (AT), still suffer from large robust generalization gaps, in addition to the much more expensive training costs than standard training. In this paper, we investigate this intriguing problem from a new perspective, i.e., injecting appropriate forms of sparsity during adversarial training. We introduce two alternatives for sparse adversarial training: (i) static sparsity, by leveraging recent results from the lottery ticket hypothesis to identify critical sparse subnetworks arising from the early training; (ii) dynamic sparsity, by allowing the sparse subnetwork to adaptively adjust its connectivity pattern (while sticking to the same sparsity ratio) throughout training. We find both static and dynamic sparse methods to yield win-win: substantially shrinking the robust generalization gap and alleviating the robust overfitting, meanwhile significantly saving training and inference FLOPs. Extensive experiments validate our proposals with multiple network architectures on diverse datasets, including CIFAR-10/100 and Tiny-ImageNet. For example, our methods reduce robust generalization gap and overfitting by 34.44% and 4.02%, with comparable robust/standard accuracy boosts and 87.83%/87.82% training/inference FLOPs savings on CIFAR-100 with ResNet-18. Besides, our approaches can be organically combined with existing regularizers, establishing new state-of-the-art results in AT. Codes are available at https://github.com/VITA-Group/Sparsity-Win-Robust-Generalization.
N/A
1 INTRODUCTION
Deep neural networks (DNNs) are notoriously vulnerable to maliciously crafted adversarial attacks. To conquer this fragility, numerous adversarial defense mechanisms are proposed to establish robust neural networks (Schmidt et al., 2018; Sun et al., 2019; Nakkiran, 2019; Raghunathan et al., 2019; Hu et al., 2019; Chen et al., 2020c; 2021e; Jiang et al., 2020). Among them, adversarial training (AT) based methods (Madry et al., 2017; Zhang et al., 2019) have maintained the state-of-the-art robustness. However, the AT training process usually comes with order-of-magnitude higher computational costs than standard training, since multiple attack iterations are needed to construct strong adversarial examples (Madry et al., 2018b). Moreover, AT was recently revealed to incur severe robust generalization gaps (Rice et al., 2020), between its training and testing accuracies, as shown in Figure 1; and to require significantly more training samples (Schmidt et al., 2018) to generalize robustly.
*Equal Contribution.
In response to those challenges, Schmidt et al. (2018); Lee et al. (2020); Song et al. (2019) investigate the possibility of improving generalization by leveraging advanced data augmentation techniques, which further amplifies the training cost of AT. Recent studies (Rice et al., 2020; Chen et al., 2021e) found that early stopping, or several smoothness/flatness-aware regularizations (Chen et al., 2021e; Stutz et al., 2021; Singla et al., 2021), can bring effective mitigation.
In this paper, a new perspective has been explored to tackle the above challenges by enforcing appropriate sparsity patterns during AT. The connection between robust generalization and sparsity is mainly inspired by two facts. On one hand, sparsity can effectively regularize the learning of over-parameterized neural networks, hence potentially benefiting both standard and robust generalization (Balda et al., 2019). As demonstrated in Figure 1, with the increase of sparsity levels, the robust generalization gap is indeed substantially shrunk while the robust overfitting is alleviated. On the other hand, one key design philosophy that facilitates this consideration is the lottery ticket hypothesis (LTH) (Frankle & Carbin, 2019). The LTH advocates the existence of highly sparse and separately trainable subnetworks (a.k.a. winning tickets), which can be trained from the original initialization to match or even surpass the corresponding dense networks’ test accuracies. These facts point out a promising direction that utilizing proper sparsity is capable of boosting robust generalization while maintaining competitive standard and robust accuracy.
Although sparsity is beneficial, the current methods (Frankle & Carbin, 2019; Frankle et al., 2020; Renda et al., 2020) often empirically locate sparse critical subnetworks by Iterative Magnitude Pruning (IMP). It demands excessive computational cost even for standard training due to the iterative train-prune-retrain process. Recently, You et al. (2020) demonstrated that these intriguing subnetworks can be identified at the very early training stage using one-shot pruning, which they term as Early Bird (EB) tickets. We show the phenomenon also exists in the adversarial training scheme. More importantly, we take one leap further to reveal that even in adversarial training, EB tickets can be drawn from a cheap standard training stage, while still achieving solid robustness. In other words, the Early Bird is also a Robust Bird that yields an attractive win-win of efficiency and robustness - we name this finding as Robust Bird (RB) tickets.
Furthermore, we investigate the role of sparsity in a setting where the sparse connections of subnetworks change on the fly. Specifically, we initialize a subnetwork with random sparse connectivity and then optimize its weights and sparse topologies simultaneously, while sticking to the fixed small parameter budget. This training pipeline, called Flying Bird (FB), is motivated by the latest sparse training approaches (Evci et al., 2020b) to further reduce the robust generalization gap in AT, while ensuring low training costs. Moreover, an enhanced algorithm, i.e., Flying Bird+, is proposed to dynamically adjust the network capacity (or sparsity) to pursue superior robust generalization, at a small extra cost in training efficiency. Our contributions can be summarized as follows:
• We perform a thorough investigation to reveal that introducing appropriate sparsity into AT is an appealing win-win, specifically: (1) substantially alleviating the robust generalization gap; (2) maintaining comparable or even better standard/robust accuracies; and (3) enhancing the AT efficiency by training only compact subnetworks.
• We explore two alternatives for sparse adversarial training: (i) the Robust Bird (RB) training that leverages static sparsity, by mining the critical sparse subnetwork at the early training stage, and using only the cheapest standard training; (ii) the Flying Bird (FB) training that allows for dynamic sparsity, which jointly optimizes both network weights and their sparse connectivity during AT, while sticking to the same sparsity level. We also discuss a FB variant called Flying Bird+ that adaptively adjusts the sparsity level on demand during AT.
• Extensive experiments are conducted on CIFAR-10, CIFAR-100, and Tiny-ImageNet with diverse network architectures. Specifically, our proposals obtain 80.16% ∼ 87.83% training FLOPs and 80.16% ∼ 87.83% inference FLOPs savings, shrink the robust generalization gap from 28.00% ∼ 63.18% to 4.43% ∼ 34.44%, and boost the robust accuracy by up to 0.60% and the standard accuracy by up to 0.90%, across multiple datasets and architectures. Meanwhile, combining our sparse adversarial training frameworks with existing regularizations establishes new state-of-the-art results.
2 RELATED WORK
Adversarial training and robust generalization/overfitting. Deep neural networks present vulnerability to imperceivable adversarial perturbations. To deal with this drawback, numerous defense
approaches have been proposed (Goodfellow et al., 2015; Kurakin et al., 2016; Madry et al., 2018a). Although many methods (Liao et al., 2018; Guo et al., 2018a; Xu et al., 2017; Dziugaite et al., 2016; Dhillon et al., 2018a; Xie et al., 2018; Jiang et al., 2020) were later found to rely on obfuscated gradients (Athalye et al., 2018), adversarial training (AT) (Madry et al., 2018a), together with some of its variants (Zhang et al., 2019; Mosbach et al., 2018; Dong et al., 2018), remains one of the most effective yet costly approaches.
A pitfall of AT, i.e., poor robust generalization, was spotted recently. Schmidt et al. (2018) showed that AT intrinsically demands a larger sample complexity to identify well-generalizable robust solutions; therefore, data augmentation (Lee et al., 2020; Song et al., 2019) is an effective remedy. Stutz et al. (2021); Singla et al. (2021) related the robust generalization gap to the curvature/flatness of loss landscapes, and introduced weight-perturbing approaches and smooth activation functions to reshape the loss geometry and boost robust generalization. Meanwhile, robust overfitting (Rice et al., 2020) in AT usually happens with, or as a result of, inferior generalization. Previous studies (Rice et al., 2020; Chen et al., 2021e) demonstrated that conventional regularization-based methods (e.g., weight decay and simple data augmentation) cannot alleviate robust overfitting. Then, numerous advanced algorithms (Zhang et al., 2020; 2021b; Zhou et al., 2021; Bunk et al., 2021; Chen et al., 2021a; Dong et al., 2021; Zi et al., 2021; Tack et al., 2021; Zhang et al., 2021a) arose in the last half year to tackle the overfitting, using data manipulation, smoothed training, and other techniques. Those methods work orthogonally to our proposal, as evidenced in Section 4.
Another group of related literature lies in the field of sparse robust networks (Guo et al., 2018b). These works either treat model compression as a defense mechanism (Wang et al., 2018; Gao et al., 2017; Dhillon et al., 2018b) or pursue robust and efficient sub-models that can be deployed in resource-limited platforms (Gui et al., 2019; Ye et al., 2019; Sehwag et al., 2019). Compared to those inference-focused methods, our goal is fundamentally different: injecting sparsity during training to reduce the robust generalization gap while improving training efficiency.
Static pruning and dynamic sparse training. Pruning (LeCun et al., 1990; Han et al., 2015a) serves as a powerful technique to eliminate the weight redundancy in over-parameterized DNNs, aiming at storage and computational savings with almost undamaged performance. It can be roughly divided into two categories based on how the sparse patterns are generated. (i) Static pruning removes parameters (Han et al., 2015a; LeCun et al., 1990; Han et al., 2015b) or substructures (Liu et al., 2017; Zhou et al., 2016; He et al., 2017) based on optimized importance scores (Zhang et al., 2018; He et al., 2017) or heuristics such as weight magnitude (Han et al., 2015a), gradient (Molchanov et al., 2019), or Hessian (LeCun et al., 1990) statistics. The discarded elements usually do not participate in the next round of training or pruning. Static pruning can be flexibly applied prior to training, as in SNIP (Lee et al., 2019), GraSP (Wang et al., 2020) and SynFlow (Tanaka et al., 2020); during training (Zhang et al., 2018; He et al., 2017); or after training (Han et al., 2015a), for different trade-offs between training cost and pruned models’ quality. (ii) Dynamic sparse training updates model parameters and sparse connectivities at the same time, starting from a randomly sparsified subnetwork (Molchanov et al., 2017). During training, the removed elements have chances to be grown back if they potentially benefit predictions. Among the large family of sparse training methods (Mocanu et al., 2016; Evci et al., 2019; Mostafa & Wang, 2019; Liu et al., 2021a; Dettmers & Zettlemoyer, 2019; Jayakumar et al., 2021; Raihan & Aamodt, 2020), the recent methods of Evci et al. (2020a); Liu et al. (2021b) lead to state-of-the-art performance.
A special case of static pruning, the lottery ticket hypothesis (LTH) (Frankle & Carbin, 2019), demonstrates the existence of sparse subnetworks in DNNs which can be trained in isolation and reach performance comparable to their dense counterparts. The LTH indicates the great potential of training a sparse network from scratch without sacrificing expressiveness and has recently drawn considerable attention from diverse fields (Chen et al., 2020b;a; 2021g;f;d;c;b; 2022; Ding et al., 2022; Gan et al., 2021) beyond image recognition (Zhang et al., 2021d; Frankle et al., 2020; Redman et al., 2021).
3 METHODOLOGY
3.1 PRELIMINARIES
Adversarial training (AT). As one of the most widely adopted defense mechanisms, adversarial training (Madry et al., 2018b) effectively tackles the vulnerability to maliciously crafted adversarial samples. As formulated in Equation 1, AT (specifically PGD-AT) replaces the original empirical risk minimization with a min-max optimization problem:
$$\min_{\theta}\ \mathbb{E}_{(x,y)\in\mathcal{D}}\ \mathcal{L}\big(f(x;\theta),\, y\big) \;\Longrightarrow\; \min_{\theta}\ \mathbb{E}_{(x,y)\in\mathcal{D}}\ \max_{\|\delta\|_{p}\le\epsilon}\ \mathcal{L}\big(f(x+\delta;\theta),\, y\big), \qquad (1)$$
where f(x; θ) is a network with parameters θ. Input data x and its associated label y from the training set D are used to first generate adversarial perturbations δ and then minimize the empirical classification loss L. To meet the imperceptibility requirement, the $\ell_p$ norm of δ is constrained by a small constant ε. Projected Gradient Descent (PGD), i.e., $\delta^{t+1} = \mathrm{proj}_{\mathcal{P}}\big[\delta^{t} + \alpha \cdot \mathrm{sgn}\big(\nabla_{x}\mathcal{L}(f(x+\delta^{t};\theta), y)\big)\big]$, is usually utilized to produce the adversarial perturbations with step size α; it works in an iterative manner, leveraging local first-order information about the network (Madry et al., 2018b).
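To make the min-max objective above concrete, the snippet below sketches one PGD-AT training step in PyTorch. It is a minimal illustration of the standard procedure, not the authors' released code; the model, optimizer, and data batch are assumed to be defined elsewhere.

import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: iterative signed-gradient steps projected onto the l_inf eps-ball."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()                       # ascend the loss
            delta.clamp_(-eps, eps)                            # project onto ||delta||_inf <= eps
            delta.copy_(torch.clamp(x + delta, 0, 1) - x)      # keep x + delta a valid image
    return delta.detach()

def pgd_at_step(model, optimizer, x, y):
    """Outer minimization: update theta on the adversarially perturbed batch."""
    delta = pgd_perturb(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()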
Sparse subnetworks. Following the routine notations in Frankle & Carbin (2019), f(x; m ⊙ θ) denotes a sparse subnetwork with a binary pruning mask m ∈ {0, 1}^{‖θ‖_0}, where ⊙ is the element-wise product. Intuitively, it is a copy of the dense network f(x; θ) with a portion of its weights fixed to zero.
3.2 ROBUST BIRD FOR ADVERSARIAL TRAINING
Introducing Robust Bird. The primary goal of Robust Bird is to find a high-quality sparse subnetwork efficiently. As shown in Figure 2, it locates subnetworks quickly by detecting critical network structures arising in the early training, which later can be robustified with much less computation.
Specifically, for each epoch t during training, Robust Bird creates a sparsity mask mt by “masking out” the p% lowest-magnitude weights; then, Robust Bird tracks the corresponding mask dynamics. The key observation behind Robust Bird is that the sparsity mask mt does not change drastically beyond the early epochs of training (You et al., 2020) because high-level network connectivity patterns are learned during the initial stages (Achille et al., 2019). This indicates that (i) winning tickets emerge at a very early training stage, and (ii) that they can be identified efficiently.
Robust Bird exploits this observation by comparing the Hamming distance between sparsity masks found in consecutive epochs. For each epoch, the last l sparsity masks are stored. If all the stored masks are sufficiently close to each other, then the sparsity masks are not changing drastically over time and network connectivity patterns have emerged; thus, a Robust Bird ticket (RB ticket) is drawn. A detailed algorithmic implementation is provided in Algorithm 1 of Appendix A1. This is the RB ticket used in the second stage of adversarial training.
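The detection logic can be sketched as follows. This is a simplified illustration of the epoch-wise mask comparison: train_one_epoch is a hypothetical helper for one epoch of cheap standard training, a global magnitude criterion is assumed, and the constants are illustrative rather than the paper's exact settings.

from collections import deque
import torch

def magnitude_mask(model, sparsity):
    """Binary mask keeping the (1 - sparsity) fraction of largest-magnitude weights."""
    scores = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    k = max(1, int(sparsity * scores.numel()))
    threshold = torch.kthvalue(scores, k).values
    return (scores > threshold).float()

def hamming_distance(m1, m2):
    return (m1 != m2).float().mean().item()

def find_rb_ticket(model, train_one_epoch, sparsity=0.9, l=5, tau=0.1, max_epochs=30):
    """Draw the RB ticket once the last l consecutive masks stop changing."""
    distances = deque(maxlen=l)
    prev = magnitude_mask(model, sparsity)
    for _ in range(max_epochs):
        train_one_epoch(model)                 # cheap standard (non-adversarial) training
        cur = magnitude_mask(model, sparsity)
        distances.append(hamming_distance(cur, prev))
        prev = cur
        if len(distances) == l and max(distances) < tau:
            return cur                         # mask m_{t*}: robustify with PGD-AT next
    return prev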
[Figure 3: three loss-contour panels — Dense, Random Pruning, and Flying Bird+ — plotted over the first two principal components of the training trajectory (1st PC ≈ 36–38%, 2nd PC ≈ 19–20% explained variance); contour-level labels omitted.]
Figure 3: Visualization of loss contours and training trajectories. We compare the dense network, randomly pruned sparse networks, and flying bird+ at 90% sparsity from ResNet-18 robustified on CIFAR-10.
Rationale of Robust Bird. Recent studies (Zhang et al., 2021c) present theoretical analyses showing that the identified sparse winning tickets enlarge the convex region near good local minima, leading to improved generalization. Our work presents a related investigation in Figure A9: compared with dense models and randomly pruned subnetworks, RB tickets found by standard training have much flatter loss landscapes, serving as a high-quality starting point for further robustification. This matters because flatness of the loss surface is widely believed to indicate good standard generalization. Similarly, as advocated by Wu et al. (2020a); Hein & Andriushchenko (2017), a flatter adversarial loss landscape also effectively shrinks the robust generalization gap. This “flatness preference” of adversarial robustness has been revealed by numerous empirical defense mechanisms, including Hessian/curvature-based regularization (Moosavi-Dezfooli et al., 2019), learned weight and logit smoothing (Chen et al., 2021e), gradient magnitude penalties (Wang & Zhang, 2019), smoothing with random noise (Liu et al., 2018), and entropy regularization (Jagatap et al., 2020).
These observations form the main cornerstone of our proposal and provide possible interpretations of the surprising finding that RB tickets pruned from a non-robust model can be used to obtain well-generalizable robust models in the subsequent robustification. Furthermore, unlike previous costly flatness regularizers (Moosavi-Dezfooli et al., 2019), our methods not only offer a flatter starting point but also obtain substantial computational savings due to the reduced model size.
3.3 FLYING BIRD FOR ADVERSARIAL TRAINING
Introducing Flying Bird(+). Since sparse subnetworks from static pruning cannot recover removed elements, they may be too aggressive to capture the pivotal structural patterns. Thus, we introduce Flying Bird (FB) to conduct a thorough exploration of dynamic sparsity, which allows pruned parameters to be grown back and to engage in the next round of training or pruning, as demonstrated in Figure 2. Specifically, it starts from a sparse subnetwork f(x; m ⊙ θ) with a random binary mask m, and then jointly optimizes the model parameters and the sparse connectivity. In other words, the subnetwork's topology is decided "on the fly", dynamically based on the current training status. We update Flying Bird's sparse connectivity every ∆t epochs of adversarial training, via two consecutively applied operations: pruning and growing. In the pruning step, the p% of model weights with the lowest magnitude are eliminated, while in the growth step the g% of weights with the largest gradient are added back. Note that newly added connections were not active in the previous sparse topology and are initialized to zero, since this establishes better performance, as indicated in Evci et al. (2020a); Liu et al. (2021b). Flying Bird keeps the sparsity ratio unchanged during the full training by keeping both the pruning and growing ratios p% and g% equal to a ratio k% that decays with a cosine annealing schedule.
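A rough sketch of one such prune-and-grow update is given below. It is an illustrative implementation of the stated magnitude/gradient criteria, not the authors' exact code, and it simplifies some details (e.g., it does not forbid regrowing a connection dropped in the same step).

import torch

@torch.no_grad()
def prune_and_grow(params, masks, grads, k):
    """One Flying Bird connectivity update (sketch): per layer, drop the k-fraction of
    active weights with the smallest magnitude, then grow the same number of inactive
    connections with the largest gradient magnitude, initialized to zero. The number
    of active weights, and hence the sparsity ratio, stays fixed."""
    for p, m, g in zip(params, masks, grads):
        n_swap = int(k * m.sum().item())
        if n_swap == 0:
            continue
        # Prune: among active weights, find the n_swap smallest magnitudes.
        prune_score = torch.where(m.bool(), p.abs(), torch.full_like(p, float("inf")))
        drop = torch.topk(prune_score.flatten(), n_swap, largest=False).indices
        m.view(-1)[drop] = 0.0
        # Grow: among inactive weights, find the n_swap largest gradient magnitudes.
        grow_score = torch.where(m.bool(), torch.full_like(g, float("-inf")), g.abs())
        grow = torch.topk(grow_score.flatten(), n_swap, largest=True).indices
        m.view(-1)[grow] = 1.0
        p.view(-1)[grow] = 0.0       # newly grown connections start from zero
        p.mul_(m)                    # keep pruned weights exactly zero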
We further propose Flying Bird+, an enhanced variant of FB that is capable of adaptively adjusting the sparsity and learning the right parameterization level "on demand" during training, as shown in Figure 2. To be specific, we first record the robust generalization gap and the robust validation loss at each training epoch. An increasing generalization gap in the later training stage indicates a risk of overfitting, while a plateauing validation loss implies underfitting. We then analyze the fitting status according to the upward/downward trend of these measurements. If most epochs (e.g., more than 3 out of the past 5 epochs in our case) show enlarged robust generalization gaps, we raise the pruning ratio p% to further trim down the network capacity. Similarly, if the majority of epochs present an unchanged validation loss, we increase the growing ratio g% to enrich the subnetwork capacity. Detailed procedures are summarized in Algorithm 2 of Appendix A1.
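The adaptive adjustment can be sketched as a small controller like the one below; the window size, vote threshold, and increments here are illustrative placeholders rather than the exact values used in the paper.

from collections import deque

class AdaptiveSparsityController:
    """Sketch of the Flying Bird+ ratio adjustment: raise the pruning ratio when the
    robust generalization gap keeps increasing (overfitting risk), and raise the
    growing ratio when the robust validation loss keeps increasing (underfitting)."""
    def __init__(self, k=0.5, delta_p=0.004, delta_g=0.0005, window=6, votes=3):
        self.k, self.delta_p, self.delta_g, self.votes = k, delta_p, delta_g, votes
        self.gaps = deque(maxlen=window)     # robust generalization gap per epoch
        self.losses = deque(maxlen=window)   # robust validation loss per epoch

    @staticmethod
    def _mostly_increasing(seq, votes):
        vals = list(seq)
        ups = sum(b > a for a, b in zip(vals, vals[1:]))
        return ups >= votes

    def ratios(self, robust_gap, robust_val_loss):
        """Return the (prune, grow) ratios to use at the next connectivity update."""
        self.gaps.append(robust_gap)
        self.losses.append(robust_val_loss)
        p = self.k * (1 + self.delta_p) if self._mostly_increasing(self.gaps, self.votes) else self.k
        g = self.k * (1 + self.delta_g) if self._mostly_increasing(self.losses, self.votes) else self.k
        return p, g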
Rationale of Flying Bird(+). As demonstrated in Evci et al. (2020a), allowing new connections to grow yields improved flexibility in navigating the loss surface, which creates the opportunity to
escape bad local minima and search for the optimal sparse connectivity (Liu et al., 2021b). Flying Bird follows a similar design philosophy: it excludes the least important connections (Han et al., 2015a) while activating the new connections with the highest potential to decrease the training loss the fastest. Recent works (Wu et al., 2020c; Liu et al., 2019) have also found that enabling network (re)growth can turn a poor local minimum into a saddle point that facilitates further loss decrease. Flying Bird+ extends this flexibility further through adaptive sparsity-level control.
The flatness of the loss geometry provides another view from which to dissect the robust generalization gain (Chen et al., 2021e; Stutz et al., 2021; Singla et al., 2021). Figure 3 compares the loss landscapes and training trajectories of the dense network, randomly pruned subnetworks, and Flying Bird+ robustified on CIFAR-10. We observe that Flying Bird+ converges to a wider loss valley with improved flatness, which usually suggests superior robust generalization (Wu et al., 2020a; Hein & Andriushchenko, 2017). Last but not least, our approaches also significantly trim down both the training memory overhead and the computational complexity, enjoying the extra bonus of efficient training and inference.
4 EXPERIMENT RESULTS
Datasets and architectures. Our experiments consider two popular architectures, ResNet-18 (He et al., 2016), VGG-16 (Simonyan & Zisserman, 2014) on three representative datasets, CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009) and Tiny-ImageNet (Deng et al., 2009). We randomly split one-tenth of the training samples as the validation dataset, and the performance is reported on the official testing dataset.
Training and evaluation details. We implement our experiments with the original PGD-based adversarial training (Madry et al., 2018b), in which we train the network against an $\ell_\infty$ adversary with a maximum perturbation of 8/255. 10-step PGD for training and 20-step PGD for evaluation are chosen with a step size α of 2/255, following Madry et al. (2018b); Chen et al. (2021e). In addition, we also use Auto-Attack (Croce & Hein, 2020) and CW Attack (Carlini & Wagner, 2017) for a more rigorous evaluation. More details are provided in Appendix A2. For each experiment, we train the network for 200 epochs with an SGD optimizer, whose momentum and weight decay are kept at 0.9 and 5 × 10−4, respectively. The learning rate starts from 0.1 and decays by a factor of 10 at epochs 100 and 150, and the batch size is 128, following Rice et al. (2020).
For Robust Bird, the threshold τ of the mask distance is set to 0.1. In Flying Bird(+), we calculate the layer-wise sparsity by Ideal Gas Quotas (IGQ) (Vysogorets & Kempe, 2021) and then apply random pruning to initialize the sparse masks. FB updates the sparse connectivity every 2000 iterations of AT, with an update ratio k that starts from 50% and decays by cosine annealing. More details are provided in Appendix A2. Hyperparameters are either tuned by grid search or follow Liu et al. (2021b).
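For illustration, a cosine-annealed update ratio of this kind can be written as below (a sketch; the released code may implement the decay differently).

import math

def update_ratio(k0, t, t_end):
    """Cosine-annealed prune/grow ratio: starts at k0 (e.g., 0.5) and decays towards 0 at t_end."""
    return 0.5 * k0 * (1 + math.cos(math.pi * min(t, t_end) / t_end))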
Evaluation metrics. In general, we care about both the accuracy and the efficiency of the obtained sparse networks. To assess accuracy, we consider both Robust Testing Accuracy (RA) and Standard Testing Accuracy (SA), computed on the perturbed and the original test sets, together with the Robust Generalization Gap (RGG), i.e., the gap in RA between the train and test sets. Meanwhile, we report the floating point operations (FLOPs) of the whole training process and of single-image inference to measure efficiency.
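A minimal sketch of how these accuracy metrics can be computed is shown below; it assumes an attack callable such as the earlier PGD sketch and is not tied to the authors' evaluation scripts.

import torch

@torch.no_grad()
def accuracy(model, loader, perturb=None):
    """Standard accuracy if perturb is None, robust accuracy otherwise (sketch)."""
    correct = total = 0
    for x, y in loader:
        if perturb is not None:
            with torch.enable_grad():
                x = x + perturb(model, x, y)     # e.g., the pgd_perturb sketch above
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total

def robust_generalization_gap(model, train_loader, test_loader, perturb):
    """RGG: robust accuracy on the train set minus robust accuracy on the test set."""
    return accuracy(model, train_loader, perturb) - accuracy(model, test_loader, perturb)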
4.1 ROBUST BIRD IS A GOOD BIRD
In this section, we evaluate the effectiveness of static sparsity from diverse representative pruning approaches, including: (i) Random Pruning (RP), which randomly eliminates model parameters to the desired sparsity; (ii) One-shot Magnitude Pruning (OMP), which globally removes a certain ratio of the lowest-magnitude weights; (iii) Pruning-at-initialization algorithms. Three advanced methods, i.e., SNIP (Lee et al., 2019), GraSP (Wang et al., 2020), and SynFlow (Tanaka et al., 2020), are considered, which identify subnetworks at initialization with respect to certain criteria of gradient flow; (iv) Ideal Gas Quotas (IGQ) (Vysogorets & Kempe, 2021), which adopts random pruning based on pre-calculated layer-wise sparsity that draws intuitive analogies from physics; (v) Robust Bird (RB), which can be regarded as an early-stopped OMP; (vi) Small Dense, an important sanity check that considers smaller dense networks with the same parameter counts as the sparse networks. Comprehensive results of these subnetworks at 80% and 90% sparsity are reported in Table 1, where the chosen sparsity levels follow routine options (Evci et al., 2020a; Liu et al., 2021b).
As shown in Table 1, we first observe poor robust generalization, with a 38.82% RA gap, and robust overfitting, with 7.49% RA degradation, when training the dense network (Baseline). Fortunately, consistent with our claims, injecting appropriate sparsity effectively tackles the issue. For instance, RB greatly shrinks the RGG by 15.45%/22.20% at 80%/90% sparsity, while also mitigating robust overfitting by 2.53% ∼ 4.08%. Furthermore, comparing all static pruning methods, we find that (1) Small Dense and RP behave the worst, which suggests the identified sparse topologies play important roles rather than the reduced network capacity alone; (2) RB shows clear advantages over OMP in terms of all measurements, especially the 78.32% ∼ 84.80% training FLOPs savings. This validates our RB proposal: a few epochs of standard training are enough to learn a high-quality sparse structure for further robustification, so there is no need to complete the full training in the ticket-finding stage as in traditional OMP; (3) SynFlow and IGQ achieve the best RA and SA, while RB obtains the best robust generalization among static pruning approaches.
Finally, we explore the influence of training regimes during the RB ticket finding on CIFAR-100 with ResNet-18. Table A6 demonstrates that RB tickets perform best when found with the cheapest standard training. Specifically, at 90% and 95% sparsity, SGD RB tickets outperform both Fast AT (Wong et al., 2020) and PGD-10 RB tickets with up to 1.27% higher RA and 1.86% narrower RGG. Figure A7 offers a possible explanation for this phenomenon: the SGD training scheme more quickly develops high-level network connections, during the early epochs of training (Achille et al., 2019). As a result, RB Tickets pruned from the model trained with SGD achieve superior quality.
4.2 FLYING BIRD IS A BETTER BIRD
In this section, we discuss the advantages of dynamic sparsity and show that our Flying Bird(+) is a superior bird. Table 1 examines the effectiveness of FB(+) on CIFAR-10 with ResNet-18, and several consistent observations can be drawn: (1) FB(+) achieves a 9.92% ∼ 23.66% RGG reduction and a 2.24% ∼ 5.88% decrease in robust overfitting compared with the dense network, and FB+ at 80% sparsity even pushes the RA 0.60% higher. (2) Although the smaller dense network shows the leading performance w.r.t. improving robust generalization, its robustness is largely sacrificed, with up to 4.29% RA degradation, suggesting that merely reducing a model's parameter count is insufficient to maintain satisfactory SA/RA. (3) FB and FB+ achieve the best RA at both the best and final checkpoints across all methods, including RB. (4) Setting aside small dense and random pruning due to their poor robustness, FB+ reaches the most impressive robust generalization (ranked #1 or #2) with the least training and inference costs. Precisely, FB+ obtains 84.46% ∼ 91.37% training FLOPs and 84.46% ∼ 93.36% inference FLOPs savings, i.e., Flying Bird+ is SUPER light-weight.
Superior performance across datasets and architectures. We further evaluate the performance of FB(+) across various datasets (CIFAR-10, CIFAR-100, and Tiny-ImageNet) and architectures (ResNet-18 and VGG-16). Tables 2 and 3 show that both the static and dynamic sparsity of our proposals serve as effective remedies for improving robust generalization and mitigating robust overfitting, with 4.43% ∼ 15.45%, 14.99% ∼ 34.44%, and 21.62% ∼ 23.60% RGG reductions across different architectures on CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively. Moreover, both RB and FB(+) gain significant efficiency, with up to 87.83% training and inference FLOPs savings.
Superior performance across improved attacks. Additionally, we verify both RB and FB(+) under improved attacks, i.e., Auto-Attack (Croce & Hein, 2020) and CW Attack (Carlini & Wagner, 2017). As shown in Table A8, our approaches shrink the robust generalization gap by up to 30.76% on CIFAR-10/100, and largely mitigate robust overfitting. This piece of evidence shows our proposal’s effectiveness sustained across diverse attacks.
Combining FB+ with existing state-of-the-art (SOTA) mitigation. Previous works (Chen et al., 2021e; Zhang et al., 2021a; Wu et al., 2020b) point out that smoothing regularizations (e.g., KD (Hinton et al., 2015) and SWA (Izmailov et al., 2018)) help robust generalization and lead to SOTA robust accuracies. We combine them with our FB+ and collect the robust accuracy on CIFAR-10 with ResNet-18 in Figure 4. The extra robustness gains from FB+ imply that they make complementary contributions.
Excluding obfuscated gradients. A common “counterfeit” of robustness improvements is less effective adversarial examples resulting from obfuscated gradients (Athalye et al., 2018). Table A7 demonstrates that the enhanced robustness is maintained under unseen transfer attacks, which excludes the possibility of gradient masking. More details are provided in Section A3.
4.3 ABLATION STUDY AND VISUALIZATION
Different sparse initialization and update frequency. As two major components in the dynamic sparsity exploration (Evci et al., 2020a), we conduct thorough ablation studies in Tables 4 and 5. We find that the performance of Flying Bird+ is more sensitive to different sparse initializations; using SNIP to produce the initial layer-wise sparsity and updating the connections every 2000 iterations serves as the superior configuration for FB+.
Table 4: Ablation of different sparse initialization in Flying Bird+. Subnetworks at 80% initial sparsity are chosen on CIFAR-10 with ResNet-18.
Table 5: Ablation of different update frequency in Flying Bird+. Subnetworks at 80% initial sparsity are chosen on CIFAR-10 with ResNet-18.
Final checkpoint loss landscapes. From visualizations in Figure 5, FB and FB+ converge to much flatter loss valleys, which evidences their effectiveness in closing robust generalization gaps.
Attention and saliency maps. To visually inspect the benefits of our proposal, here we provide attention and saliency maps generated by Grad-CAM (Selvaraju et al., 2017) and the tools in (Smilkov et al., 2017). Comparing the dense model to our “talented birds” (e.g., FB+), Figure 6 shows that our approaches have enhanced concentration on the main objects and are capable of capturing more local feature information, aligning better with human perception.
[Figure 6 panel labels: adversarial samples for Dense, Random Pruning, SNIP, Robust Bird, Flying Bird, and Flying Bird+, each shown with an attention heatmap and a saliency map.]
Figure 6: (Left) Visualization of attention heatmaps on adversarial images based on Grad-Cam (Selvaraju et al., 2017). (Right) Saliency map visualization on adversarial samples (Smilkov et al., 2017).
5 CONCLUSION
We show that adversarial training of dense DNNs incurs a severe robust generalization gap, which can be effectively and efficiently resolved by injecting appropriate sparsity. Our proposed Robust Bird and Flying Bird(+), with static and dynamic sparsity respectively, significantly mitigate the robust generalization gap while retaining competitive standard/robust accuracy, in addition to substantially reduced computation. Our future work plans to investigate channel- and block-wise sparse structures.
A1 MORE TECHNIQUE DETAILS
Algorithms of Robust Bird and Flying Bird(+). Here we present the detailed procedures to identify Robust Bird and Flying Bird(+), as summarized in Algorithms 1 and 2. Note that for the increasing frequency on Lines 10 and 11 in Algorithm 2, we compare the measurements stored in the queue between two consecutive epochs and calculate the frequency of increase.
Algorithm 1: Finding a Robust Bird
Input: f(x; θ_0) with initialization θ_0, target sparsity s%, FIFO queue Q with length l, threshold τ
Output: Robust Bird f(x; m_{t*} ⊙ θ_T)
1  while t < t_max do
2      Update network parameters θ_t ← θ_{t−1} via standard training
3      Apply static pruning towards target sparsity s% and obtain the sparse mask m_t
4      Calculate the Hamming distance δ_H(m_t, m_{t−1}) and append the result to Q
5      t ← t + 1
6      if max(Q) < τ then
7          t* ← t
8          Rewind f(x; m_{t*} ⊙ θ_{t*}) → f(x; m_{t*} ⊙ θ_0)
9          Train f(x; m_{t*} ⊙ θ_0) via PGD-AT for T epochs
10         return f(x; m_{t*} ⊙ θ_T)
11     end
12 end
Algorithm 2: Finding a Flying Bird(+)
Input: Initialization parameters θ_0, sparse masks m of sparsity s%, FIFO queues Q_p and Q_g with length l, pruning and growth increasing ratios δ_p and δ_g, update threshold ε, update interval ∆t, parameter update ratio k%, ratio update starting point t_start
Output: Flying Bird(+) f(x; m ⊙ θ_T)
1  while t < T do
2      Update network parameters θ_t ← θ_{t−1} via PGD-AT
3      # Record training statistics
4      Add the robust generalization gap between train and validation sets to Q_p
5      Add the robust validation loss to Q_g
6      # Update sparse masks m
7      if (t mod ∆t) == 0 then
8          |--- Optional for Flying Bird+ ---|
9          # Update pruning and growth ratios p%, g%
10         if t > t_start and increasing frequency of Q_p ≥ ε: p = (1 + δ_p) × k else p = k
11         if t > t_start and increasing frequency of Q_g ≥ ε: g = (1 + δ_g) × k else g = k
12         |--- Optional for Flying Bird+ ---|
13         Prune p% parameters with the smallest weight magnitude
14         Grow g% parameters with the largest gradient
15         Update the sparse mask m accordingly
16     end
17 end
A2 MORE IMPLEMENTATION DETAILS
A2.1 OTHER COMMON DETAILS
We select two checkpoints during training: best, which has the best RA values on the validation set, and final, i.e., the last checkpoint. And we report both RA and SA of these two checkpoints on test sets. Apart from the robust generalization gap, we also show the extent of robust overfitting numerically by the difference of RA between best and final. Furthermore, we calculate the FLOPs
at both the training and inference stages to evaluate the costs of obtaining and exploiting the subnetworks, respectively, in which we approximate the FLOPs of back-propagation to be twice those of forward propagation (Yang et al., 2020).
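Under this convention, a back-of-the-envelope estimate of PGD-AT training FLOPs for a sparse model can be sketched as follows. Counting each PGD step as one forward plus one backward pass is a simplifying assumption of this sketch, not a statement about the exact accounting behind the reported numbers.

def pgd_at_training_flops(dense_forward_flops, density, num_batches, pgd_steps=10):
    """Rough training-cost estimate: backward counted as ~2x forward, and a sparse
    model's cost scaled by its density (fraction of remaining weights)."""
    fwd = dense_forward_flops * density
    bwd = 2 * fwd
    per_batch = pgd_steps * (fwd + bwd) + (fwd + bwd)   # attack generation + weight update
    return num_batches * per_batch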
A2.2 MORE DETAILS ABOUT ROBUST BIRD
For the RB ticket-finding experiments, we comprehensively study three training regimes: standard training with stochastic gradient descent (SGD), adversarial training with PGD-10 AT (Madry et al., 2018b), and Fast AT (Wong et al., 2020). Following Pang et al. (2021), we train the network with an SGD optimizer with 0.9 momentum and 5 × 10−4 weight decay, using a batch size of 128. For the PGD-10 AT experiments, we adopt the $\ell_\infty$ PGD attack with a maximum perturbation ε = 8/255 and a step size α = 2/255, and the learning rate starts from 0.1 and decays by a factor of ten at epochs 50 and 150. For Fast AT, we use a cyclic schedule with a maximum learning rate of 0.2.
A2.3 MORE DETAILS ABOUT FLYING BIRD(+)
For the experiments of Flying Bird+, the increasing ratios of pruning and growth, δ_p and δ_g, are set to 0.4% and 0.05% by default, respectively.
A3 MORE EXPERIMENT RESULTS
A3.1 MORE RESULTS ABOUT ROBUST BIRD
Accuracy during RB Tickets Finding Figure A7 shows the curve of standard test accuracy during the training phase of RB ticket finding. We can observe the SGD training scheme develops highlevel network connections much faster than the others, which provides a possible explanation for the superior quality of RB tickets from SGD.
[Figure A7 plot: standard accuracy (%) versus epoch (0–30), titled "RB Ticket Finding Performance", with curves for PGD-10, SGD, and Fast AT.]
Figure A7: Standard accuracy (SA) of PGD-10, SGD, and Fast AT during the RB ticket finding phase.
Mask Similarity Visualization. Figure A8 visualizes the dynamic similarity scores for each epoch among masks found via SGD, Fast AT, and PGD-10. Specifically, the similarity scores (You et al., 2020) reflect the Hamming distance between a pair of masks. We notice that masks found by SGD and PGD-10 share more common structures. A possible reason is that Fast AT usually adopts a cyclic learning rate schedule, while SGD and PGD use a multi-step decay schedule.
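A sketch of how such epoch-wise similarity scores can be computed is given below (assuming flattened binary masks; it mirrors the Hamming-distance-based score rather than reproducing the exact plotting code).

import torch

def mask_similarity(m1, m2):
    """Similarity = fraction of positions on which two binary masks agree."""
    return (m1 == m2).float().mean().item()

def similarity_matrix(masks_a, masks_b):
    """Epoch-by-epoch similarity between the masks of two training regimes
    (e.g., SGD vs. PGD-10), as visualized in Figure A8."""
    return torch.tensor([[mask_similarity(a, b) for b in masks_b] for a in masks_a])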
Different training regimes for finding RB tickets. We denote the subnetworks identified by standard training with SGD, adversarial training with Fast AT (Wong et al., 2020) and adversarial train-
[Figure A8 plots: three epoch-by-epoch mask-similarity heatmaps — PGD-10 vs. Fast AT, SGD vs. Fast AT, and SGD vs. PGD-10 — with similarity scores roughly between 0.50 and 0.75.]
Figure A8: Similarity scores by epoch among masks found via Fast AT, SGD, and PGD-10. A brighter color denotes higher similarity.
Table A6: Comparison results of different training regimes for RB ticket finding on CIFAR-100 with ResNet-18. The subnetworks at 90% and 95% are selected here.
Sparsity (%) | Settings        | Robust Acc. (Best / Final / Diff.) | Standard Acc. (Best / Final / Diff.) | Robust Generalization
0            | Baseline        | 26.93 / 19.62 / 7.31               | 52.03 / 53.91 / −1.88                | 54.56
90           | SGD tickets     | 25.83 / 23.40 / 2.43               | 49.35 / 53.51 / −4.16                | 36.19 (↓18.37)
90           | Fast AT tickets | 25.15 / 22.88 / 2.27               | 51.00 / 51.75 / −0.75                | 34.33 (↓20.23)
90           | PGD-10 tickets  | 25.34 / 22.96 / 2.38               | 52.01 / 53.27 / −1.26                | 34.53 (↓20.03)
95           | SGD tickets     | 24.77 / 24.12 / 0.65               | 49.88 / 50.89 / −1.01                | 45.38 (↓9.18)
95           | Fast AT tickets | 23.50 / 22.46 / 1.04               | 41.67 / 43.19 / −1.52                | 45.03 (↓9.53)
95           | PGD-10 tickets  | 24.44 / 23.77 / 0.67               | 49.30 / 50.65 / −1.35                | 44.70 (↓9.86)
ing with PGD-10 AT as SGD tickets, Fast AT tickets, and PGD-10 tickets, respectively. Table A6 demonstrates that the SGD tickets have the best performance.
Loss Landscape Visualization We visualize the loss landscapes of the dense network, a randomly pruned subnetwork, and Robust Bird tickets at 30% sparsity in Figure A9. Compared with the dense model and the randomly pruned subnetwork, RB tickets found by standard training show much flatter loss landscapes, which provide a high-quality starting point for further robustification.
A3.2 MORE RESULTS ABOUT FLYING BIRD(+)
Excluding Obfuscated Gradients. To exclude this possibility of gradient masking, we show that our methods maintain improved robustness under unseen transfer attacks. As shown in Table A7, the left part represents the testing accuracy of perturbed test samples from an unseen robust model, and the right part shows the transfer testing performance on an unseen robust model (here we use a separately robustified ResNet-50 with PGD-10 on CIFAR-100).
Performance under Improved Attacks. We report the performance of both RB and FB(+) under Auto-Attack (Croce & Hein, 2020) and CW Attack (Carlini & Wagner, 2017). For Auto-Attack, we keep the default setting with ε = 8/255. For CW Attack, we perform one search step on c with an initial constant of 0.1, and we use 100 iterations for each search step with a learning rate of 0.01. As shown in Table A8, both RB and FB(+) outperform the dense counterpart in terms of robust generalization, and FB+ achieves superior performance.
More Datasets and Architectures We report more results of different sparsification methods across diverse datasets and architectures in Tables A9, A10, A11, and A12, from which we observe that our approaches are capable of improving robust generalization and mitigating robust overfitting.
[Figure A9 panel titles: Dense Models, RB Tickets (30%), Random Pruning (30%).]
Figure A9: Loss landscapes visualizations (Engstrom et al., 2018; Chen et al., 2021e) of the dense model (unpruned), random pruned subnetwork at 30% sparsity, and Robust Bird (RB) tickets at 30% sparsity found by the standard training. The ResNet-18 backbone with the same original initialization on CIFAR-10 is adopted here. Results demonstrate that RB tickets offer a smoother and flatter starting point for further robustification in the second stage.
Table A7: Transfer attack performance from/on an unseen non-robust model, where the attacks are generated by/applied to the non-robust model. The robust generalization gap is also calculated based on transfer attack accuracies between train and test sets. We use ResNet-18 on CIFAR-10/100 and sub-networks at 80% sparsity.
Dataset   | Settings     | Transfer Attack from Unseen Model: Acc. (Best / Final / Diff.), Robust Gen. | Transfer Attack on Unseen Model: Acc. (Best / Final / Diff.), Robust Gen.
CIFAR-10  | Baseline     | 79.68 / 82.03 / −2.35, 16.43 | 70.48 / 79.85 / −9.37, 11.84
CIFAR-10  | Robust Bird  | 77.33 / 81.04 / −3.71, 12.18 | 73.17 / 77.03 / −3.86, 11.49
CIFAR-10  | Flying Bird  | 79.13 / 82.17 / −3.04, 13.49 | 71.59 / 77.19 / −5.60, 11.88
CIFAR-10  | Flying Bird+ | 79.47 / 81.90 / −2.43, 11.85 | 70.43 / 76.00 / −5.57, 11.42
CIFAR-100 | Baseline     | 50.51 / 52.15 / −1.64, 45.91 | 48.67 / 54.48 / −5.81, 36.98
CIFAR-100 | Robust Bird  | 47.25 / 51.74 / −4.49, 28.80 | 47.47 / 50.90 / −3.43, 35.82
CIFAR-100 | Flying Bird  | 51.80 / 53.52 / −1.72, 31.98 | 45.56 / 50.61 / −5.05, 35.39
CIFAR-100 | Flying Bird+ | 50.72 / 53.56 / −2.84, 25.09 | 47.04 / 49.43 / −2.39, 35.09
Distributions of Adopted Sparse Initialization. We report the layer-wise sparsity of different initial sparse masks. As shown in Figure A10, we observe that subnetworks generally have better performance when the top layers remain most of the parameters.
Training Curve of Flying Bird+. Figure A11 shows the training curve of Flying Bird+, in which the red dotted lines represent the time for increasing the pruning ratio and the green dotted lines for growth ratio. The detailed training curve demonstrates the flexibility of flying bird+ for dynamically adjusting the sparsity levels.
A4 EXTRA RESULTS AND DISCUSSION
We sincerely appreciate all anonymous reviewers’ and area chairs’ constructive discussions for improving this paper. Extra results and discussions are presented in this section.
Table A8: Evaluation under improved attacks (i.e., Auto-Attack and CW-Attack) on CIFAR-10/100 with ResNet-18 at 80% sparsity. The robust generalization gap is computed under improved attacks.
Dataset   | Settings     | Auto-Attack: Acc. (Best / Final / Diff.), Robust Gen. | CW-Attack: Acc. (Best / Final / Diff.), Robust Gen.
CIFAR-10  | Baseline     | 47.41 / 41.59 / 5.82, 35.30          | 75.76 / 66.13 / 9.63, 30.39
CIFAR-10  | Robust Bird  | 45.90 / 42.45 / 3.45, 21.58 (↓13.72) | 73.95 / 73.52 / 0.43, 17.67 (↓12.72)
CIFAR-10  | Flying Bird  | 47.55 / 43.57 / 3.98, 26.55 (↓8.75)  | 75.30 / 72.08 / 3.22, 21.77 (↓8.62)
CIFAR-10  | Flying Bird+ | 47.06 / 44.09 / 3.17, 21.73 (↓13.57) | 76.00 / 73.83 / 2.17, 17.77 (↓12.62)
CIFAR-100 | Baseline     | 23.16 / 17.68 / 5.48, 49.73          | 45.83 / 36.21 / 9.62, 57.52
CIFAR-100 | Robust Bird  | 21.29 / 18.00 / 3.29, 21.72 (↓28.01) | 43.30 / 42.39 / 0.91, 30.82 (↓26.70)
CIFAR-100 | Flying Bird  | 22.74 / 19.44 / 3.30, 25.18 (↓24.55) | 46.23 / 42.36 / 3.87, 35.50 (↓22.02)
CIFAR-100 | Flying Bird+ | 22.90 / 20.31 / 2.59, 19.05 (↓30.68) | 45.86 / 43.90 / 1.96, 26.76 (↓30.76)
Table A9: More results of different sparcification methods on CIFAR-10 with ResNet-18.
Sparsity (%) | Settings       | Robust Acc. (Best / Final / Diff.) | Standard Acc. (Best / Final / Diff.) | Robust Generalization
0            | Baseline       | 51.10 / 43.61 / 7.49               | 81.15 / 83.38 / −2.23                | 38.82
95           | Small Dense    | 45.99 / 44.55 / 1.44               | 74.26 / 75.64 / −1.38                | 7.87 (↓30.95)
95           | Random Pruning | 45.64 / 44.18 / 1.46               | 75.20 / 75.20 / 0.00                 | 7.96 (↓30.86)
95           | OMP            | 47.08 / 46.23 / 0.85               | 78.77 / 79.36 / −0.59                | 12.01 (↓26.81)
95           | SNIP           | 48.18 / 46.72 / 1.46               | 78.55 / 79.21 / −0.66                | 9.58 (↓29.24)
95           | GraSP          | 48.58 / 47.15 / 1.43               | 78.95 / 79.44 / −0.49                | 10.37 (↓28.45)
95           | SynFlow        | 48.93 / 48.22 / 0.71               | 78.70 / 78.90 / −0.20                | 8.25 (↓30.57)
95           | IGQ            | 48.82 / 47.56 / 1.26               | 79.44 / 79.76 / −0.32                | 9.33 (↓29.49)
95           | Robust Bird    | 47.53 / 46.48 / 1.05               | 78.33 / 78.78 / −0.45                | 9.20 (↓29.62)
95           | Flying Bird    | 49.62 / 48.46 / 1.16               | 78.12 / 81.43 / −3.31                | 13.32 (↓25.52)
95           | Flying Bird+   | 49.37 / 48.84 / 0.53               | 80.33 / 80.28 / 0.05                 | 9.27 (↓29.55)
[Figure A10 plot: layer-wise sparsity (0.0–1.0) across ResNet-18 layers (conv1 through layer4 and the final linear layer) for Uniform, GraSP, SNIP, SynFlow, IGQ, and ERK initial masks.]
Figure A10: Layer-wise sparisty of different initial sparse masks with ResNet-18
A4.1 MORE RESULTS OF DIFFERENT SPARSITY
We report more results of subnetworks with 40%/60% sparsity on CIFAR-10/100 with ResNet-18 and VGG-16. As shown in Tables A13, A14, A15, and A16, our Flying Bird(+) achieves consistent improvements over the baseline unpruned networks, with 2.45% ∼ 19.81% narrower robust generalization gaps and comparable RA and SA performance.
A4.2 MORE RESULTS ON WIDERESNET
We further evaluate our flying bird(+) with WideResNet-34-10 on CIFAR-10 and report the results on Table A17. We can observe that compared with the dense network, our methods significantly shrink the robust generalization gap by up to 13.14% and maintain comparable RA/SA performance.
Table A10: More results of different sparcification methods on CIFAR-10 with VGG-16.
Sparsity (%) | Settings       | Robust Acc. (Best / Final / Diff.) | Standard Acc. (Best / Final / Diff.) | Robust Generalization
0            | Baseline       | 48.33 / 42.73 / 5.60               | 76.84 / 79.73 / −2.89                | 28.00
80           | Random Pruning | 46.14 / 40.33 / 5.81               | 74.42 / 76.68 / −2.26                | 21.01 (↓6.99)
80           | OMP            | 47.90 / 43.19 / 4.71               | 76.60 / 80.02 / −3.42                | 24.97 (↓3.03)
80           | SNIP           | 48.03 / 43.17 / 4.86               | 76.68 / 80.08 / −3.40                | 24.71 (↓3.29)
80           | GraSP          | 47.91 / 42.34 / 5.57               | 75.74 / 78.87 / −3.13                | 23.65 (↓4.35)
80           | SynFlow        | 48.47 / 45.32 / 3.15               | 77.62 / 79.09 / −1.47                | 20.17 (↓7.83)
80           | IGQ            | 48.57 / 44.25 / 4.32               | 77.51 / 80.01 / −2.50                | 22.79 (↓5.21)
80           | Robust Bird    | 47.69 / 41.66 / 6.03               | 75.32 / 78.58 / −3.26                | 23.57 (↓4.43)
80           | Flying Bird    | 48.43 / 44.65 / 3.78               | 77.53 / 79.72 / −2.19                | 21.01 (↓6.99)
80           | Flying Bird+   | 48.25 / 45.24 / 3.01               | 77.48 / 79.55 / −2.07                | 17.75 (↓10.25)
90           | Random Pruning | 44.33 / 40.33 / 4.00               | 71.27 / 74.46 / −3.19                | 15.48 (↓12.52)
90           | OMP            | 47.84 / 43.34 / 4.50               | 75.60 / 79.10 / −3.50                | 18.29 (↓9.71)
90           | SNIP           | 47.76 / 44.27 / 3.49               | 75.92 / 79.62 / −3.70                | 17.85 (↓10.15)
90           | GraSP          | 45.96 / 42.12 / 3.84               | 75.19 / 77.03 / −1.84                | 15.04 (↓12.96)
90           | SynFlow        | 47.54 / 45.79 / 1.75               | 78.43 / 78.70 / −0.27                | 14.40 (↓13.60)
90           | IGQ            | 47.79 / 45.12 / 2.67               | 74.87 / 79.19 / −4.32                | 16.06 (↓11.94)
90           | Robust Bird    | 47.09 / 44.13 / 2.96               | 75.53 / 78.36 / −2.83                | 16.57 (↓11.43)
90           | Flying Bird    | 48.45 / 45.55 / 2.90               | 75.82 / 79.21 / −3.39                | 16.56 (↓11.44)
90           | Flying Bird+   | 48.39 / 46.26 / 2.13               | 78.73 / 79.12 / −0.39                | 12.47 (↓15.53)
A4.3 COMPARISON WITH EFFICIENT ADVERSARIAL TRAINING METHODS
To elaborate more about training efficiency, we compare our methods with two efficient training methods. Shafahi et al. (2019) proposed Free Adversarial Training that improves training efficiency by reusing the gradient information, which is orthogonal to our approaches and can be easily combined with our methods to pursue more efficiency by replacing the PGD-10 training with Free AT.
[Figure panel titles: Sparsity: 80% and Sparsity: 90%.]
Additionally, Li et al. (2020) use magnitude pruning to locate sparse structures, which is similar to the OMP reported in Table 1, except that they use a smaller learning rate. Our methods achieve better performance and efficiency than OMP. Specifically, at 80% sparsity, our Flying Bird+ reaches a 4.49% narrower robust generalization gap and 1.54% higher RA while requiring 87.58% fewer training FLOPs. Also, our methods can be easily combined with Fast AT for further training efficiency.
A4.4 COMPARISON WITH OTHER PRUNING AND SPARSE TRAINING METHODS
Compared with the recent work of Özdenizci & Legenstein (2021), our Flying Bird(+) differs in both goals and methodology. Firstly, Özdenizci & Legenstein (2021) pursue superior adversarial robust testing accuracy for sparsely connected networks, while we aim to investigate the relationship between sparsity and robust generalization, and demonstrate that introducing appropriate sparsity (e.g., LTH-based static sparsity or dynamic sparsity) into adversarial training
substantially alleviates the robust generalization gap and maintains comparable or even better standard/robust accuracies. Secondly, Özdenizci & Legenstein (2021) samples network connectivity from a learned posterior to form a sparse subnetwork. However, our flying bird first removes the parameters with the lowest magnitude, which ensures a small term of the first-order Taylor approximation of the loss and thus limits the impact on the output of networks (Evci et al., 2020a). And then, it allows new connectivity with the largest gradient to grow to reduce the loss quickly (Evci et al., 2020a). Furthermore, we propose an enhanced variant of Flying Bird, i.e., Flying Bird+, which not only learns the sparse topologies but also is capable of adaptively adjusting the network capacity to determine the right parameterization level “on-demand” during training, while Özdenizci & Legenstein (2021) stick to a fixed parameter budget.
Another work, HYDRA (Sehwag et al., 2020) also has several differences from our robust birds. Specifically, HYDRA starts from a robust pre-trained dense network, which requires at least hundreds of epochs for adversarial training. However, our robust bird’s pre-training only needs a few epochs of standard training. Therefore, Sehwag et al. (2020) has significantly higher computational costs, compared to ours. Then, Sehwag et al. (2020) adopt TRADES (Zhang et al., 2019) for adversarial training, which also requires auxiliary inputs of clean images, while our methods follow the classical adversarial training (Madry et al., 2018b) and only take adversarial perturbed samples as input. Moreover, for CIFAR-10 experiments, Sehwag et al. (2020) uses 500k additional pseudolabeled images from the Tiny-ImageNet dataset with a robust semi-supervised training approach. However, all our methods and experiments do not leverage any external data.
Furthermore, one concurrent work (Fu et al., 2021) demonstrates that there exist subnetworks with inborn robustness. Such randomly initialized networks have matching or even superior robust accuracy of adversarially trained networks with similar parameter counts. It’s interesting to utilize this finding for further improvement of robust generalization, and we will investigate it in future works.
| 1. What is the focus of the paper regarding efficient robust training?
2. What are the strengths of the proposed approach, particularly in terms of static and dynamic sparsity?
3. What are the weaknesses of the paper, especially regarding the explanation of sparse mask reuse and the lack of theoretical analysis?
4. Do you have any questions about the experiments and their setup?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The authors proposed to leverage static and dynamic sparsity in efficient robust training. The proposed methods can significantly mitigate the robust generalization gap while retaining competitive performance (standard/robust accuracy) with substantially reduced computation budgets.
Review
Strength: - The motivation for introducing sparsity to adversarial training for both generalization and efficiency gains are valuable and novel. The paper investigates two sparsity forms: static and dynamic. - For the static form, the paper extends the early-bird idea by You et. al. 2020, and show the phenomenon also exists in the adversarial training scheme. So in general the novelty of this paper is only fair, except a surprise finding that even in adversarial training, EB tickets can still be drawn from a cheap standard pre-training stage. - For the dynamic form, the authors presented a vanilla version plus an advanced variant capable of adaptively adjusting the sparsity levels. - Experiments are solid and convincing. Besides PGD, the authors also used Auto-Attack (Croce & Hein, 2020) and CW Attack (Carlini & Wagner, 2017) for a more rigorous evaluation. The authors also carefully excluded obfuscated gradients using transferred unseen attacks - Attention visualization is another interesting and novel angle to compare different pruning methods. - Paper is structured well and easy to follow. I especially like that rationale is always clearly presented in company with the algorithms - All codes are included, and the reproducibility looks good to me.
Weakness: - I would appreciate if the authors could elaborate more on why the sparse mask found from a non-robust model could be reused to training a robust model. If that is true, I wonder whether or not there indeed exists any tight coupling between sparse mask structure and robustness. - Table 1 only have two sparsity levels: 80% and 90%. Why only this two, are they specifically cherry-picked? It would be better if the authors could demonstrate some more sparsity levels. - Table 1 should also have compared with Early Bird (You et al. (2020) ) and existing dynamic sparse training methods (Evci et al. (2020a); Liu et al. (2021b)) - This paper does not provide any theoretical analysis on why the proposed strategy should work for efficient adversarial training. It is unclear which factor is the main performance booster: sparsity regularizing the adversarial overfitting, or sparsity for efficient “lottery”-style training. Despite this work being mainly empirical, some theory probing would have improved it. - Typos: “much more flatter” -> much flatter, and more. Proofreading is required. |
ICLR | Title
Sparsity Winning Twice: Better Robust Generalization from More Efficient Training
Abstract
Recent studies demonstrate that deep networks, even robustified by the state-ofthe-art adversarial training (AT), still suffer from large robust generalization gaps, in addition to the much more expensive training costs than standard training. In this paper, we investigate this intriguing problem from a new perspective, i.e., injecting appropriate forms of sparsity during adversarial training. We introduce two alternatives for sparse adversarial training: (i) static sparsity, by leveraging recent results from the lottery ticket hypothesis to identify critical sparse subnetworks arising from the early training; (ii) dynamic sparsity, by allowing the sparse subnetwork to adaptively adjust its connectivity pattern (while sticking to the same sparsity ratio) throughout training. We find both static and dynamic sparse methods to yield win-win: substantially shrinking the robust generalization gap and alleviating the robust overfitting, meanwhile significantly saving training and inference FLOPs. Extensive experiments validate our proposals with multiple network architectures on diverse datasets, including CIFAR-10/100 and TinyImageNet. For example, our methods reduce robust generalization gap and overfitting by 34.44% and 4.02%, with comparable robust/standard accuracy boosts and 87.83%/87.82% training/inference FLOPs savings on CIFAR-100 with ResNet18. Besides, our approaches can be organically combined with existing regularizers, establishing new state-of-the-art results in AT. Codes are available in https: //github.com/VITA-Group/Sparsity-Win-Robust-Generalization.
N/A
1 INTRODUCTION
Deep neural networks (DNNs) are notoriously vulnerable to maliciously crafted adversarial attacks. To conquer this fragility, numerous adversarial defense mechanisms are proposed to establish robust neural networks (Schmidt et al., 2018; Sun et al., 2019; Nakkiran, 2019; Raghunathan et al., 2019; Hu et al., 2019; Chen et al., 2020c; 2021e; Jiang et al., 2020). Among them, adversarial training (AT) based methods (Madry et al., 2017; Zhang et al., 2019) have maintained the state-of-the-art robustness. However, the AT training process usually comes with order-ofmagnitude higher computational costs than standard training, since multiple attack iterations are needed to construct strong adversarial examples (Madry et al., 2018b). Moreover, AT was recently revealed to incur severe robust generalization gaps (Rice et al., 2020), between its training and testing accuracies, as shown in Figure 1; and to require significantly more training samples (Schmidt et al., 2018) to generalize robustly.
*Equal Contribution.
In response to those challenges, Schmidt et al. (2018); Lee et al. (2020); Song et al. (2019) investigate the possibility of improving generalization by leveraging advanced data augmentation techniques, which further amplifies the training cost of AT. Recent studies (Rice et al., 2020; Chen et al., 2021e) found that early stopping, or several smoothness/flatness-aware regularizations (Chen et al., 2021e; Stutz et al., 2021; Singla et al., 2021), can bring effective mitigation.
In this paper, a new perspective has been explored to tackle the above challenges by enforcing appropriate sparsity patterns during AT. The connection between robust generalization and sparsity is mainly inspired by two facts. On one hand, sparsity can effectively regularize the learning of over-parameterized neural networks, hence potentially benefiting both standard and robust generalization (Balda et al., 2019). As demonstrated in Figure 1, with the increase of sparsity levels, the robust generalization gap is indeed substantially shrunk while the robust overfitting is alleviated. On the other hand, one key design philosophy that facilitates this consideration is the lottery ticket hypothesis (LTH) (Frankle & Carbin, 2019). The LTH advocates the existence of highly sparse and separately trainable subnetworks (a.k.a. winning tickets), which can be trained from the original initialization to match or even surpass the corresponding dense networks’ test accuracies. These facts point out a promising direction that utilizing proper sparsity is capable of boosting robust generalization while maintaining competitive standard and robust accuracy.
Although sparsity is beneficial, the current methods (Frankle & Carbin, 2019; Frankle et al., 2020; Renda et al., 2020) often empirically locate sparse critical subnetworks by Iterative Magnitude Pruning (IMP). It demands excessive computational cost even for standard training due to the iterative train-prune-retrain process. Recently, You et al. (2020) demonstrated that these intriguing subnetworks can be identified at the very early training stage using one-shot pruning, which they term as Early Bird (EB) tickets. We show the phenomenon also exists in the adversarial training scheme. More importantly, we take one leap further to reveal that even in adversarial training, EB tickets can be drawn from a cheap standard training stage, while still achieving solid robustness. In other words, the Early Bird is also a Robust Bird that yields an attractive win-win of efficiency and robustness - we name this finding as Robust Bird (RB) tickets.
Furthermore, we investigate the role of sparsity in a setting where the sparse connections of subnetworks change on the fly. Specifically, we initialize a subnetwork with random sparse connectivity and then optimize its weights and sparse topologies simultaneously, while sticking to a fixed small parameter budget. This training pipeline, called Flying Bird (FB), is motivated by the latest sparse training approaches (Evci et al., 2020b) to further reduce the robust generalization gap in AT while ensuring low training costs. Moreover, an enhanced algorithm, Flying Bird+, is proposed to dynamically adjust the network capacity (or sparsity) to pursue superior robust generalization at a small extra cost in training efficiency. Our contributions can be summarized as follows:
• We perform a thorough investigation to reveal that introducing appropriate sparsity into AT is an appealing win-win, specifically: (1) substantially alleviating the robust generalization gap; (2) maintaining comparable or even better standard/robust accuracies; and (3) enhancing the AT efficiency by training only compact subnetworks.
• We explore two alternatives for sparse adversarial training: (i) the Robust Bird (RB) training that leverages static sparsity, by mining the critical sparse subnetwork at the early training stage, and using only the cheapest standard training; (ii) the Flying Bird (FB) training that allows for dynamic sparsity, which jointly optimizes both network weights and their sparse connectivity during AT, while sticking to the same sparsity level. We also discuss a FB variant called Flying Bird+ that adaptively adjusts the sparsity level on demand during AT.
• Extensive experiments are conducted on CIFAR-10, CIFAR-100, and Tiny-ImageNet with diverse network architectures. Specifically, our proposals obtain 80.16% ∼ 87.83% training FLOPs and 80.16% ∼ 87.83% inference FLOPs savings, shrink robust generalization from 28.00% ∼ 63.18% to 4.43% ∼ 34.44%, and boost the robust accuracy by up to 0.60% and the standard accuracy by up to 0.90%, across multiple datasets and architectures. Meanwhile, combining our sparse adversarial training frameworks with existing regularizations establishes the new state-of-the-art results.
2 RELATED WORK
Adversarial training and robust generalization/overfitting. Deep neural networks present vulnerability to imperceivable adversarial perturbations. To deal with this drawback, numerous defense
approaches have been proposed (Goodfellow et al., 2015; Kurakin et al., 2016; Madry et al., 2018a). Although many methods (Liao et al., 2018; Guo et al., 2018a; Xu et al., 2017; Dziugaite et al., 2016; Dhillon et al., 2018a; Xie et al., 2018; Jiang et al., 2020) were later found to result from obfuscated gradients (Athalye et al., 2018), adversarial training (AT) (Madry et al., 2018a), together with some of its variants (Zhang et al., 2019; Mosbach et al., 2018; Dong et al., 2018), remains as one of the most effective yet costly approaches.
A pitfall of AT, i.e., poor robust generalization, was spotted recently. Schmidt et al. (2018) showed that AT intrinsically demands a larger sample complexity to identify well-generalizable robust solutions; therefore, data augmentation (Lee et al., 2020; Song et al., 2019) is an effective remedy. Stutz et al. (2021); Singla et al. (2021) related the robust generalization gap to the curvature/flatness of loss landscapes, and introduced weight-perturbing approaches and smooth activation functions to reshape the loss geometry and boost robust generalization. Meanwhile, robust overfitting (Rice et al., 2020) in AT usually happens with, or as a result of, inferior generalization. Previous studies (Rice et al., 2020; Chen et al., 2021e) demonstrated that conventional regularization-based methods (e.g., weight decay and simple data augmentation) cannot alleviate robust overfitting. Numerous advanced algorithms (Zhang et al., 2020; 2021b; Zhou et al., 2021; Bunk et al., 2021; Chen et al., 2021a; Dong et al., 2021; Zi et al., 2021; Tack et al., 2021; Zhang et al., 2021a) have since arisen to tackle the overfitting, using data manipulation, smoothed training, and more. Those methods work orthogonally to our proposal, as evidenced in Section 4.
Another group of related literature lies in the field of sparse robust networks (Guo et al., 2018b). These works either treat model compression as a defense mechanism (Wang et al., 2018; Gao et al., 2017; Dhillon et al., 2018b) or pursue robust and efficient sub-models that can be deployed in resource-limited platforms (Gui et al., 2019; Ye et al., 2019; Sehwag et al., 2019). Compared to those inference-focused methods, our goal is fundamentally different: injecting sparsity during training to reduce the robust generalization gap while improving training efficiency.
Static pruning and dynamic sparse training. Pruning (LeCun et al., 1990; Han et al., 2015a) serves as a powerful technique to eliminate the weight redundancy in over-parameterized DNNs, aiming to obtain storage and computational savings with almost undamaged performance. It can be roughly divided into two categories based on how sparse patterns are generated: (i) static pruning, which removes parameters (Han et al., 2015a; LeCun et al., 1990; Han et al., 2015b) or substructures (Liu et al., 2017; Zhou et al., 2016; He et al., 2017) based on optimized importance scores (Zhang et al., 2018; He et al., 2017) or heuristics such as weight magnitude (Han et al., 2015a), gradient (Molchanov et al., 2019), or Hessian (LeCun et al., 1990) statistics. The discarded elements usually do not participate in the next round of training or pruning. Static pruning can be flexibly applied prior to training, such as SNIP (Lee et al., 2019), GraSP (Wang et al., 2020) and SynFlow (Tanaka et al., 2020); during training (Zhang et al., 2018; He et al., 2017); or post training (Han et al., 2015a), for different trade-offs between training cost and pruned models’ quality. (ii) dynamic sparse training, which updates model parameters and sparse connectivity at the same time, starting from a randomly sparsified subnetwork (Molchanov et al., 2017). During training, the removed elements have chances to be grown back if they potentially benefit predictions. Among the large family of sparse training methods (Mocanu et al., 2016; Evci et al., 2019; Mostafa & Wang, 2019; Liu et al., 2021a; Dettmers & Zettlemoyer, 2019; Jayakumar et al., 2021; Raihan & Aamodt, 2020), the recent methods of Evci et al. (2020a); Liu et al. (2021b) lead to state-of-the-art performance.
A special case of static pruning, the lottery ticket hypothesis (LTH) (Frankle & Carbin, 2019), demonstrates the existence of sparse subnetworks in DNNs that can be trained in isolation and reach performance comparable to their dense counterparts. The LTH indicates the great potential of training a sparse network from scratch without sacrificing expressiveness and has recently drawn much attention from diverse fields (Chen et al., 2020b;a; 2021g;f;d;c;b; 2022; Ding et al., 2022; Gan et al., 2021) beyond image recognition (Zhang et al., 2021d; Frankle et al., 2020; Redman et al., 2021).
3 METHODOLOGY
3.1 PRELIMINARIES
Adversarial training (AT). As one of the most widely adopted defense mechanisms, adversarial training (Madry et al., 2018b) effectively tackles the vulnerability to maliciously crafted adversarial samples. As formulated in Equation 1, AT (specifically PGD-AT) replaces the original empirical risk minimization with a min-max optimization problem:
$$\min_{\theta}\ \mathbb{E}_{(x,y)\in\mathcal{D}}\ \mathcal{L}\big(f(x;\theta),\, y\big) \;\Longrightarrow\; \min_{\theta}\ \mathbb{E}_{(x,y)\in\mathcal{D}}\ \max_{\|\delta\|_{p}\le\epsilon}\ \mathcal{L}\big(f(x+\delta;\theta),\, y\big), \qquad (1)$$
where f(x; θ) is a network with parameters θ. Input data x and its associated label y from the training set D are used to first generate adversarial perturbations δ and then minimize the empirical classification loss L. To meet the imperceptibility requirement, the $\ell_p$ norm of δ is constrained by a small constant ε. Projected Gradient Descent (PGD), i.e., $\delta^{t+1} = \mathrm{proj}_{\mathcal{P}}\big[\delta^{t} + \alpha \cdot \mathrm{sgn}\big(\nabla_{x}\mathcal{L}(f(x+\delta^{t};\theta), y)\big)\big]$, is usually utilized to produce the adversarial perturbations with step size α; it works in an iterative manner, leveraging local first-order information about the network (Madry et al., 2018b).
Sparse subnetworks. Following the routine notations in Frankle & Carbin (2019), f(x; m ⊙ θ) denotes a sparse subnetwork with a binary pruning mask m ∈ {0, 1}^{‖θ‖_0}, where ⊙ is the element-wise product. Intuitively, it is a copy of the dense network f(x; θ) with a portion of its weights fixed to zero.
3.2 ROBUST BIRD FOR ADVERSARIAL TRAINING
Introducing Robust Bird. The primary goal of Robust Bird is to find a high-quality sparse subnetwork efficiently. As shown in Figure 2, it locates subnetworks quickly by detecting critical network structures arising in the early training, which later can be robustified with much less computation.
Specifically, for each epoch t during training, Robust Bird creates a sparsity mask mt by “masking out” the p% lowest-magnitude weights; then, Robust Bird tracks the corresponding mask dynamics. The key observation behind Robust Bird is that the sparsity mask mt does not change drastically beyond the early epochs of training (You et al., 2020) because high-level network connectivity patterns are learned during the initial stages (Achille et al., 2019). This indicates that (i) winning tickets emerge at a very early training stage, and (ii) that they can be identified efficiently.
Robust Bird exploits this observation by comparing the Hamming distance between sparsity masks found in consecutive epochs. For each epoch, the last l sparsity masks are stored. If all the stored masks are sufficiently close to each other, then the sparsity masks are not changing drastically over time and network connectivity patterns have emerged; thus, a Robust Bird ticket (RB ticket) is drawn. A detailed algorithmic implementation is provided in Algorithm 1 of Appendix A1. This is the RB ticket used in the second stage of adversarial training.
[Figure 3: three loss-contour panels — Dense, Random Pruning, and Flying Bird+ — plotted over the first two principal components of the training trajectory (1st PC ≈ 36–38%, 2nd PC ≈ 19–20% explained variance); contour-level labels omitted.]
Figure 3: Visualization of loss contours and training trajectories. We compare the dense network, randomly pruned sparse networks, and flying bird+ at 90% sparsity from ResNet-18 robustified on CIFAR-10.
Rationale of Robust Bird. Recent studies (Zhang et al., 2021c) present theoretical analyses showing that identified sparse winning tickets enlarge the convex region near good local minima, leading to improved generalization. A related investigation of ours in Figure A9 shows that, compared with dense models and randomly pruned subnetworks, RB tickets found by standard training have much flatter loss landscapes, serving as a high-quality starting point for further robustification. This matters because flatness of the loss surface is widely believed to indicate better standard generalization. Similarly, as advocated by Wu et al. (2020a); Hein & Andriushchenko (2017), a flatter adversarial loss landscape also effectively shrinks the robust generalization gap. This “flatness preference” of adversarial robustness has been exploited by numerous empirical defense mechanisms, including Hessian/curvature-based regularization (Moosavi-Dezfooli et al., 2019), learned weight and logit smoothening (Chen et al., 2021e), gradient magnitude penalties (Wang & Zhang, 2019), smoothening with random noise (Liu et al., 2018), and entropy regularization (Jagatap et al., 2020).
These observations form the main cornerstone of our proposal and provide possible interpretations of the surprising finding that RB tickets pruned from a non-robust model can be used to obtain well-generalizable robust models in the subsequent robustification. Furthermore, unlike previous costly flatness regularizers (Moosavi-Dezfooli et al., 2019), our methods not only offer a flatter starting point but also obtain substantial computational savings due to the reduced model size.
3.3 FLYING BIRD FOR ADVERSARIAL TRAINING
Introducing Flying Bird(+). Since sparse subnetworks from static pruning cannot recover removed elements, they may be too aggressive and fail to capture the pivotal structural patterns. Thus, we introduce Flying Bird (FB) to conduct a thorough exploration of dynamic sparsity, which allows pruned parameters to be grown back and engaged in the next round of training or pruning, as demonstrated in Figure 2. Specifically, it starts from a sparse subnetwork f(x; m ⊙ θ) with a random binary mask m, and then jointly optimizes model parameters and sparse connectivities. In other words, the subnetwork's topologies are "on the fly", decided dynamically based on the current training status. We update Flying Bird's sparse connectivity every ∆t epochs of adversarial training via two consecutively applied operations: pruning and growing. In the pruning step, p% of the model weights with the lowest magnitude are eliminated, while in the growing step, g% of the weights with the largest gradients are added back. Note that newly added connections were not activated in the last sparse topology and are initialized to zero, since this establishes better performance, as indicated in Evci et al. (2020a); Liu et al. (2021b). Flying Bird keeps the sparsity ratio unchanged throughout training by setting both the pruning and growing ratios p%, g% equal to k%, which decays with a cosine annealing schedule.
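A rough PyTorch-style sketch of a single prune-and-grow update for one weight tensor is shown below; per-layer bookkeeping, the exclusion of just-pruned weights from regrowth, and the exact schedule details are simplified assumptions for illustration.

```python
import math
import torch

def flying_bird_update(weight, grad, mask, k, step, total_steps):
    """Prune the lowest-magnitude active weights, then grow the inactive weights with the
    largest gradient magnitude; newly grown weights are initialized to zero."""
    ratio = 0.5 * k * (1 + math.cos(math.pi * step / total_steps))   # cosine-annealed k
    n_update = int(ratio * mask.sum().item())
    if n_update == 0:
        return mask

    # prune: among active weights, drop the smallest magnitudes
    active = torch.where(mask.bool(), weight.abs(), torch.full_like(weight, float('inf')))
    prune_idx = torch.topk(active.view(-1), n_update, largest=False).indices
    mask.view(-1)[prune_idx] = 0.0

    # grow: among inactive weights, activate the largest gradients
    inactive = torch.where(mask.bool(), torch.full_like(grad, -float('inf')), grad.abs())
    grow_idx = torch.topk(inactive.view(-1), n_update, largest=True).indices
    mask.view(-1)[grow_idx] = 1.0
    weight.data.view(-1)[grow_idx] = 0.0   # zero-initialize regrown connections
    return mask
```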
We further propose Flying Bird+, an enhanced variant of FB, capable of adaptively adjusting the sparsity and learning the right parameterization level "on demand" during training, as shown in Figure 2. Specifically, we first record the robust generalization gap and the robust validation loss at each training epoch. An increasing generalization gap in the later training stage indicates a risk of overfitting, while a plateaued validation loss implies underfitting. We then analyze the fitting status according to the upward/downward trend of these measurements. If most epochs (e.g., more than 3 out of the past 5 epochs in our case) see enlarged robust generalization gaps, we raise the pruning ratio p% to further trim down the network capacity. Similarly, if the majority of epochs present an unchanged validation loss, we increase the growing ratio g% to enrich the subnetwork capacity. Detailed procedures are summarized in Algorithm 2 of Appendix A1.
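The adaptive part of Flying Bird+ can be sketched as a small helper that inspects these recent trends. The default increments (δ_p = 0.4%, δ_g = 0.05%, as stated in Appendix A2) are taken from the paper; the window length, counting rule, and function shape are our own illustrative assumptions.

```python
def adjust_update_ratios(gap_history, val_loss_history, k,
                         delta_p=0.004, delta_g=0.0005, freq=3):
    """Raise the pruning ratio when the robust generalization gap keeps increasing
    (overfitting risk), and raise the growing ratio when the robust validation loss keeps
    failing to decrease (underfitting); otherwise both stay at the base ratio k."""
    def n_increases(history):
        return sum(b > a for a, b in zip(history, history[1:]))

    p = k * (1 + delta_p) if n_increases(gap_history) >= freq else k
    g = k * (1 + delta_g) if n_increases(val_loss_history) >= freq else k
    return p, g
```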
Rationale of Flying Bird(+). As demonstrated in Evci et al. (2020a), allowing new connections to grow yields improved flexibility in navigating the loss surfaces, which creates the opportunity to
escape bad local minima and search for the optimal sparse connectivity (Liu et al., 2021b). Flying Bird follows a similar design philosophy: it excludes the least important connections (Han et al., 2015a) while activating new connections with the highest potential to decrease the training loss fastest. Recent works (Wu et al., 2020c; Liu et al., 2019) have also found that enabling network (re)growth can turn a poor local minimum into a saddle point that facilitates further loss decrease. Flying Bird+ further empowers this flexibility with adaptive sparsity level control.
The flatness of the loss geometry provides another view to dissect the robust generalization gain (Chen et al., 2021e; Stutz et al., 2021; Singla et al., 2021). Figure 3 compares the loss landscapes and training trajectories of dense networks, randomly pruned subnetworks, and Flying Bird+ robustified on CIFAR-10. We observe that Flying Bird+ converges to a wider loss valley with improved flatness, which usually suggests superior robust generalization (Wu et al., 2020a; Hein & Andriushchenko, 2017). Last but not least, our approaches also significantly trim down both the training memory overhead and the computational complexity, enjoying the extra bonus of efficient training and inference.
4 EXPERIMENT RESULTS
Datasets and architectures. Our experiments consider two popular architectures, ResNet-18 (He et al., 2016) and VGG-16 (Simonyan & Zisserman, 2014), on three representative datasets, CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009), and Tiny-ImageNet (Deng et al., 2009). We randomly split one-tenth of the training samples as the validation set, and the performance is reported on the official test set.
Training and evaluation details. We implement our experiments with the original PGD-based adversarial training (Madry et al., 2018b), in which we train the network against an ℓ∞ adversary with a maximum perturbation of 8/255. 10-step PGD for training and 20-step PGD for evaluation are chosen with a step size α of 2/255, following Madry et al. (2018b); Chen et al. (2021e). In addition, we also use Auto-Attack (Croce & Hein, 2020) and the CW Attack (Carlini & Wagner, 2017) for a more rigorous evaluation. More details are provided in Appendix A2. For each experiment, we train the network for 200 epochs with an SGD optimizer, whose momentum and weight decay are kept at 0.9 and 5 × 10−4, respectively. The learning rate starts from 0.1 and decays by a factor of 10 at epochs 100 and 150, and the batch size is 128, following Rice et al. (2020).
For Robust Bird, the threshold τ of the mask distance is set to 0.1. In Flying Bird(+), we calculate the layer-wise sparsity by Ideal Gas Quotas (IGQ) (Vysogorets & Kempe, 2021) and then apply random pruning to initialize the sparse masks. FB updates the sparse connectivity every 2000 iterations of AT, with an update ratio k that starts from 50% and decays by cosine annealing. More details are provided in Appendix A2. Hyperparameters are either tuned by grid search or follow Liu et al. (2021b).
Evaluation metrics. In general, we care about both the accuracy and the efficiency of the obtained sparse networks. To assess accuracy, we consider both Robust Testing Accuracy (RA) and Standard Testing Accuracy (SA), computed on the perturbed and the original test sets, together with the Robust Generalization Gap (RGG) (i.e., the gap in RA between the train and test sets). Meanwhile, we report the floating point operations (FLOPs) of the whole training process and of single-image inference to measure efficiency.
4.1 ROBUST BIRD IS A GOOD BIRD
In this section, we evaluate the effectiveness of static sparsity from diverse representative pruning approaches, including: (i) Random Pruning (RP), which randomly eliminates model parameters to the desired sparsity; (ii) One-shot Magnitude Pruning (OMP), which globally removes a certain ratio of the lowest-magnitude weights; (iii) Pruning-at-Initialization algorithms. Three advanced methods, i.e., SNIP (Lee et al., 2019), GraSP (Wang et al., 2020) and SynFlow (Tanaka et al., 2020), are considered, which identify subnetworks at initialization with respect to certain criteria of gradient flow; (iv) Ideal Gas Quotas (IGQ) (Vysogorets & Kempe, 2021), which adopts random pruning based on pre-calculated layer-wise sparsity that draws intuitive analogies from physics; (v) Robust Bird (RB), which can be regarded as an early-stopped OMP; (vi) Small Dense, an important sanity check that considers smaller dense networks with the same parameter counts as the sparse networks. Comprehensive results of these subnetworks at 80% and 90% sparsity are reported in Table 1, where the chosen sparsity follows routine options (Evci et al., 2020a; Liu et al., 2021b).
As shown in Table 1, we first observe the occurrence of poor robust generalization with a 38.82% RA gap and robust overfitting with a 7.49% RA degradation when training the dense network (Baseline). Fortunately, coinciding with our claims, injecting appropriate sparsity effectively tackles the issue. For instance, RB greatly shrinks the RGG by 15.45%/22.20% at 80%/90% sparsity, while also mitigating robust overfitting by 2.53% ∼ 4.08%. Furthermore, comparing all static pruning methods, we find that (1) Small Dense and RP behave the worst, which suggests that the identified sparse topologies play important roles rather than the reduced network capacity only; (2) RB shows clear advantages over OMP in terms of all measurements, especially with 78.32% ∼ 84.80% training FLOPs savings. This validates our RB proposal that a few epochs of standard training are enough to learn a high-quality sparse structure for further robustification, and thus there is no need to complete the full training in the ticket finding stage like traditional OMP; (3) SynFlow and IGQ have the best RA and SA, while RB obtains the superior robust generalization among static pruning approaches.
Finally, we explore the influence of training regimes during the RB ticket finding on CIFAR-100 with ResNet-18. Table A6 demonstrates that RB tickets perform best when found with the cheapest standard training. Specifically, at 90% and 95% sparsity, SGD RB tickets outperform both Fast AT (Wong et al., 2020) and PGD-10 RB tickets with up to 1.27% higher RA and 1.86% narrower RGG. Figure A7 offers a possible explanation for this phenomenon: the SGD training scheme more quickly develops high-level network connections, during the early epochs of training (Achille et al., 2019). As a result, RB Tickets pruned from the model trained with SGD achieve superior quality.
4.2 FLYING BIRD IS A BETTER BIRD
In this section, we discuss the advantages of dynamic sparsity and show that our Flying Bird(+) is a superior bird. Table 1 examines the effectiveness of FB(+) on CIFAR-10 with ResNet-18, and several consistent observations can be drawn: (1) FB(+) achieve a 9.92% ∼ 23.66% RGG reduction and a 2.24% ∼ 5.88% decrease in robust overfitting compared with the dense network, and FB+ at 80% sparsity even pushes the RA 0.60% higher. (2) Although the smaller dense network shows leading performance w.r.t. improving robust generalization, its robustness is largely sacrificed, with up to 4.29% RA degradation, suggesting that only reducing the model's parameter count is insufficient to keep satisfactory SA/RA. (3) FB and FB+ achieve superior RA for both the best and final checkpoints across all methods, including RB. (4) Setting aside small dense and random pruning due to their poor robustness, FB+ reaches the most impressive robust generalization (rank #1 or #2) with the least training and inference costs. Precisely, FB+ obtains 84.46% ∼ 91.37% training FLOPs and 84.46% ∼ 93.36% inference FLOPs savings, i.e., Flying Bird+ is SUPER light-weight.
Superior performance across datasets and architectures. We further evaluate the performance of FB(+) across various datasets (CIFAR-10, CIFAR-100 and Tiny-ImageNet) and architectures (ResNet-18 and VGG-16). Tables 2 and 3 show that both the static and dynamic sparsity of our proposals serve as effective remedies for improving robust generalization and mitigating robust overfitting, with 4.43% ∼ 15.45%, 14.99% ∼ 34.44% and 21.62% ∼ 23.60% RGG reduction across different architectures on CIFAR-10, CIFAR-100 and Tiny-ImageNet, respectively. Moreover, both RB and FB(+) gain significant efficiency, with up to 87.83% training and inference FLOPs savings.
Superior performance across improved attacks. Additionally, we verify both RB and FB(+) under improved attacks, i.e., Auto-Attack (Croce & Hein, 2020) and the CW Attack (Carlini & Wagner, 2017). As shown in Table A8, our approaches shrink the robust generalization gap by up to 30.76% on CIFAR-10/100 and largely mitigate robust overfitting. This piece of evidence shows that our proposal's effectiveness is sustained across diverse attacks.
Combining FB+ with existing state-of-the-art (SOTA) mitigation. Previous works (Chen et al., 2021e; Zhang et al., 2021a; Wu et al., 2020b) point out that smoothening regularizations (e.g., KD (Hinton et al., 2015) and SWA (Izmailov et al., 2018)) help robust generalization and lead to SOTA robust accuracies. We combine them with our FB+ and collect the robust accuracy on CIFAR-10 with ResNet-18 in Figure 4. The extra robustness gains from FB+ imply that they make complementary contributions.
Excluding obfuscated gradients. A common "counterfeit" of robustness improvements is less effective adversarial examples resulting from obfuscated gradients (Athalye et al., 2018). Table A7 demonstrates that the enhanced robustness is maintained under unseen transfer attacks, which excludes the possibility of gradient masking. More details are provided in Section A3.
4.3 ABLATION STUDY AND VISUALIZATION
Different sparse initialization and update frequency. As the two major components in dynamic sparsity exploration (Evci et al., 2020a), we conduct thorough ablation studies in Tables 4 and 5. We find that the performance of Flying Bird+ is more sensitive to the sparse initialization; using SNIP to produce the initial layer-wise sparsity and updating the connections every 2000 iterations serves as the superior configuration for FB+.
Table 4: Ablation of different sparse initializations in Flying Bird+. Subnetworks at 80% initial sparsity are chosen on CIFAR-10 with ResNet-18.
Table 5: Ablation of different update frequencies in Flying Bird+. Subnetworks at 80% initial sparsity are chosen on CIFAR-10 with ResNet-18.
Final checkpoint loss landscapes. From visualizations in Figure 5, FB and FB+ converge to much flatter loss valleys, which evidences their effectiveness in closing robust generalization gaps.
Attention and saliency maps. To visually inspect the benefits of our proposal, we provide attention and saliency maps generated by Grad-CAM (Selvaraju et al., 2017) and the tools in (Smilkov et al., 2017). Comparing the dense model to our "talented birds" (e.g., FB+), Figure 6 shows that our approaches have enhanced concentration on the main objects and are capable of capturing more local feature information, aligning better with human perception.
Figure 6: (Left) Visualization of attention heatmaps on adversarial images based on Grad-CAM (Selvaraju et al., 2017). (Right) Saliency map visualization on adversarial samples (Smilkov et al., 2017).
5 CONCLUSION
We show that adversarial training of dense DNNs incurs a severe robust generalization gap, which can be effectively and efficiently resolved by injecting appropriate sparsity. Our proposed Robust Bird and Flying Bird(+), with static and dynamic sparsity respectively, significantly mitigate the robust generalization gap while retaining competitive standard/robust accuracy, in addition to substantially reducing computation. In future work, we plan to investigate channel- and block-wise sparse structures.
A1 MORE TECHNIQUE DETAILS
Algorithms of Robust Bird and Flying Bird(+). Here we present the detailed procedures to identify the robust bird and flying bird(+), as summarized in Algorithms 1 and 2. Note that for the increasing frequency on Lines 10 and 11 of Algorithm 2, we compare the measurements stored in the queue between two consecutive epochs and calculate the frequency of increases.
Algorithm 1: Finding a Robust Bird
Input: f(x; θ_0) with initialization θ_0, target sparsity s%, FIFO queue Q with length l, threshold τ
Output: Robust bird f(x; m_{t*} ⊙ θ_T)
1: while t < t_max do
2:   Update network parameters θ_t ← θ_{t−1} via standard training
3:   Apply static pruning towards target sparsity s% and obtain the sparse mask m_t
4:   Calculate the Hamming distance δ_H(m_t, m_{t−1}), append the result to Q
5:   t ← t + 1
6:   if max(Q) < τ then
7:     t* ← t
8:     Rewind f(x; m_{t*} ⊙ θ_{t*}) → f(x; m_{t*} ⊙ θ_0)
9:     Train f(x; m_{t*} ⊙ θ_0) via PGD-AT for T epochs
10:    return f(x; m_{t*} ⊙ θ_T)
11:  end if
12: end while
Algorithm 2: Finding a Flying Bird(+)
Input: Initialization parameters θ_0, sparse mask m of sparsity s%, FIFO queues Q_p and Q_g with length l, pruning and growth increasing ratios δ_p and δ_g, update threshold, optimize interval ∆t, parameter update ratio k%, ratio update starting point t_start
Output: Flying bird(+) f(x; m ⊙ θ_T)
1: while t < T do
2:   Update network parameters θ_t ← θ_{t−1} via PGD-AT
3:   # Record training statistics
4:   Add the robust generalization gap between train and validation sets to Q_p
5:   Add the robust validation loss to Q_g
6:   # Update sparse mask m
7:   if (t mod ∆t) == 0 then
8:     |--- Optional for Flying Bird+ ---|
9:     # Update pruning and growth ratios p%, g%
10:    if t > t_start and the increasing frequency of Q_p ≥ the update threshold: p = (1 + δ_p) × k, else p = k
11:    if t > t_start and the increasing frequency of Q_g ≥ the update threshold: g = (1 + δ_g) × k, else g = k
12:    |--- Optional for Flying Bird+ ---|
13:    Prune p% of parameters with the smallest weight magnitude
14:    Grow g% of parameters with the largest gradient
15:    Update the sparse mask m accordingly
16:  end if
17: end while
A2 MORE IMPLEMENTATION DETAILS
A2.1 OTHER COMMON DETAILS
We select two checkpoints during training: best, which has the best RA values on the validation set, and final, i.e., the last checkpoint. And we report both RA and SA of these two checkpoints on test sets. Apart from the robust generalization gap, we also show the extent of robust overfitting numerically by the difference of RA between best and final. Furthermore, we calculate the FLOPs
at both the training and inference stages to evaluate the costs of obtaining and deploying the subnetworks, respectively, where we approximate the FLOPs of back-propagation to be twice those of forward propagation (Yang et al., 2020).
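A back-of-the-envelope sketch of this accounting is given below; scaling the dense cost by the parameter density of the subnetwork is an assumption we add for unstructured sparsity and is not spelled out in the paper.

```python
def approx_training_flops(dense_forward_flops, density, n_iterations):
    """Forward pass + backward pass (approximated as 2x forward), scaled by the
    fraction of remaining weights and the number of training iterations."""
    return (1 + 2) * dense_forward_flops * density * n_iterations

def approx_inference_flops(dense_forward_flops, density):
    """Single-image inference cost of the sparse subnetwork."""
    return dense_forward_flops * density
```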
A2.2 MORE DETAILS ABOUT ROBUST BIRD
For the experiments on RB ticket finding, we comprehensively study three training regimes: standard training with stochastic gradient descent (SGD), adversarial training with PGD-10 AT (Madry et al., 2018b), and Fast AT (Wong et al., 2020). Following Pang et al. (2021), we train the network with an SGD optimizer with 0.9 momentum and 5 × 10−4 weight decay. We use a batch size of 128. For the PGD-10 AT experiments, we adopt the ℓ∞ PGD attack with a maximum perturbation ε = 8/255 and a step size α = 2/255, and the learning rate starts from 0.1 and decays by a factor of 10 at epochs 50 and 150. For Fast AT, we use a cyclic schedule with a maximum learning rate equal to 0.2.
A2.3 MORE DETAILS ABOUT FLYING BIRD(+)
For the experiments on Flying Bird+, the increasing ratios of pruning and growth, δ_p and δ_g, are set to 0.4% and 0.05% by default, respectively.
A3 MORE EXPERIMENT RESULTS
A3.1 MORE RESULTS ABOUT ROBUST BIRD
Accuracy during RB ticket finding. Figure A7 shows the curve of standard test accuracy during the training phase of RB ticket finding. We can observe that the SGD training scheme develops high-level network connections much faster than the others, which provides a possible explanation for the superior quality of RB tickets from SGD.
Figure A7: Standard accuracy (SA) of PGD-10, SGD, and Fast AT during the RB ticket finding phase.
Mask Similarity Visualization. Figure A8 visualizes the dynamic similarity scores for each epoch among masks found via SGD, Fast AT, and PGD-10. Specifically, the similarity scores (You et al., 2020) reflect the Hamming distance between a pair of masks. We notice that masks found by SGD and PGD-10 share more common structures. A possible reason is that Fast AT usually adopts a cyclic learning rate schedule, while SGD and PGD use a multi-step decay schedule.
Different training regimes for finding RB tickets. We denote the subnetworks identified by standard training with SGD, adversarial training with Fast AT (Wong et al., 2020), and adversarial training with PGD-10 AT as SGD tickets, Fast AT tickets, and PGD-10 tickets, respectively.
Figure A8: Similarity scores by epoch among masks found via Fast AT, SGD, and PGD-10. A brighter color denotes higher similarity.
Table A6: Comparison results of different training regimes for RB ticket finding on CIFAR-100 with ResNet-18. The subnetworks at 90% and 95% sparsity are selected here.

Sparsity(%) | Settings | Robust Accuracy (Best / Final / Diff.) | Standard Accuracy (Best / Final / Diff.) | Robust Generalization
0  | Baseline        | 26.93 / 19.62 / 7.31 | 52.03 / 53.91 / −1.88 | 54.56
90 | SGD tickets     | 25.83 / 23.40 / 2.43 | 49.35 / 53.51 / −4.16 | 18.37 (↓36.19)
90 | Fast AT tickets | 25.15 / 22.88 / 2.27 | 51.00 / 51.75 / −0.75 | 20.23 (↓34.33)
90 | PGD-10 tickets  | 25.34 / 22.96 / 2.38 | 52.01 / 53.27 / −1.26 | 20.03 (↓34.53)
95 | SGD tickets     | 24.77 / 24.12 / 0.65 | 49.88 / 50.89 / −1.01 | 9.18 (↓45.38)
95 | Fast AT tickets | 23.50 / 22.46 / 1.04 | 41.67 / 43.19 / −1.52 | 9.53 (↓45.03)
95 | PGD-10 tickets  | 24.44 / 23.77 / 0.67 | 49.30 / 50.65 / −1.35 | 9.86 (↓44.70)
Table A6 demonstrates that the SGD tickets achieve the best performance.
Loss landscape visualization. We visualize the loss landscapes of the dense network, a randomly pruned subnetwork, and Robust Bird tickets at 30% sparsity in Figure A9. Compared with the dense model and the randomly pruned subnetwork, RB tickets found by standard training show much flatter loss landscapes, which provides a high-quality starting point for further robustification.
A3.2 MORE RESULTS ABOUT FLYING BIRD(+)
Excluding obfuscated gradients. To exclude the possibility of gradient masking, we show that our methods maintain improved robustness under unseen transfer attacks. As shown in Table A7, the left part reports the testing accuracy on perturbed test samples generated from an unseen robust model, and the right part shows the transfer testing performance on an unseen robust model (here we use a separately robustified ResNet-50 with PGD-10 on CIFAR-100).
Performance under improved attacks. We report the performance of both RB and FB(+) under Auto-Attack (Croce & Hein, 2020) and the CW Attack (Carlini & Wagner, 2017). For Auto-Attack, we keep the default setting with ε = 8/255. For the CW Attack, we perform one search step on c with an initial constant of 0.1 and use 100 iterations for each search step with a learning rate of 0.01. As shown in Table A8, both RB and FB(+) outperform the dense counterpart in terms of robust generalization, and FB+ achieves the superior performance.
More Datasets and Architectures We report more results of different sparsification methods across diverse datasets and architectures at Table A9, A10, A11 and A12, from which we observe our approaches are capable of improving robust generalization and mitigating robust overfitting.
Figure A9: Loss landscape visualizations (Engstrom et al., 2018; Chen et al., 2021e) of the dense model (unpruned), a randomly pruned subnetwork at 30% sparsity, and Robust Bird (RB) tickets at 30% sparsity found by standard training. The ResNet-18 backbone with the same original initialization on CIFAR-10 is adopted here. Results demonstrate that RB tickets offer a smoother and flatter starting point for further robustification in the second stage.
Table A7: Transfer attack performance from/on an unseen non-robust model, where the attacks are generated by/applied to the non-robust model. The robust generalization gap is also calculated based on transfer attack accuracies between the train and test sets. We use ResNet-18 on CIFAR-10/100 and sub-networks at 80% sparsity.

Dataset | Settings | Transfer Attack from Unseen Model: Accuracy (Best / Final / Diff.), Robust Generalization | Transfer Attack on Unseen Model: Accuracy (Best / Final / Diff.), Robust Generalization
CIFAR-10  | Baseline     | 79.68 / 82.03 / −2.35, 16.43 | 70.48 / 79.85 / −9.37, 11.84
CIFAR-10  | Robust Bird  | 77.33 / 81.04 / −3.71, 12.18 | 73.17 / 77.03 / −3.86, 11.49
CIFAR-10  | Flying Bird  | 79.13 / 82.17 / −3.04, 13.49 | 71.59 / 77.19 / −5.60, 11.88
CIFAR-10  | Flying Bird+ | 79.47 / 81.90 / −2.43, 11.85 | 70.43 / 76.00 / −5.57, 11.42
CIFAR-100 | Baseline     | 50.51 / 52.15 / −1.64, 45.91 | 48.67 / 54.48 / −5.81, 36.98
CIFAR-100 | Robust Bird  | 47.25 / 51.74 / −4.49, 28.80 | 47.47 / 50.90 / −3.43, 35.82
CIFAR-100 | Flying Bird  | 51.80 / 53.52 / −1.72, 31.98 | 45.56 / 50.61 / −5.05, 35.39
CIFAR-100 | Flying Bird+ | 50.72 / 53.56 / −2.84, 25.09 | 47.04 / 49.43 / −2.39, 35.09
Distributions of adopted sparse initializations. We report the layer-wise sparsity of different initial sparse masks. As shown in Figure A10, we observe that subnetworks generally have better performance when the top layers retain most of their parameters.
Training Curve of Flying Bird+. Figure A11 shows the training curve of Flying Bird+, in which the red dotted lines represent the time for increasing the pruning ratio and the green dotted lines for growth ratio. The detailed training curve demonstrates the flexibility of flying bird+ for dynamically adjusting the sparsity levels.
A4 EXTRA RESULTS AND DISCUSSION
We sincerely appreciate all anonymous reviewers’ and area chairs’ constructive discussions for improving this paper. Extra results and discussions are presented in this section.
Table A8: Evaluation under improved attacks (i.e., Auto-Attack and CW-Attack) on CIFAR-10/100 with ResNet-18 at 80% sparsity. The robust generalization gap is computed under improved attacks.

Dataset | Settings | Auto-Attack: Accuracy (Best / Final / Diff.), Robust Generalization | CW-Attack: Accuracy (Best / Final / Diff.), Robust Generalization
CIFAR-10  | Baseline     | 47.41 / 41.59 / 5.82, 35.30 | 75.76 / 66.13 / 9.63, 30.39
CIFAR-10  | Robust Bird  | 45.90 / 42.45 / 3.45, 21.58 (↓13.72) | 73.95 / 73.52 / 0.43, 17.67 (↓12.72)
CIFAR-10  | Flying Bird  | 47.55 / 43.57 / 3.98, 26.55 (↓8.75) | 75.30 / 72.08 / 3.22, 21.77 (↓8.62)
CIFAR-10  | Flying Bird+ | 47.06 / 44.09 / 3.17, 21.73 (↓13.57) | 76.00 / 73.83 / 2.17, 17.77 (↓12.62)
CIFAR-100 | Baseline     | 23.16 / 17.68 / 5.48, 49.73 | 45.83 / 36.21 / 9.62, 57.52
CIFAR-100 | Robust Bird  | 21.29 / 18.00 / 3.29, 21.72 (↓28.01) | 43.30 / 42.39 / 0.91, 30.82 (↓26.70)
CIFAR-100 | Flying Bird  | 22.74 / 19.44 / 3.30, 25.18 (↓24.55) | 46.23 / 42.36 / 3.87, 35.50 (↓22.02)
CIFAR-100 | Flying Bird+ | 22.90 / 20.31 / 2.59, 19.05 (↓30.68) | 45.86 / 43.90 / 1.96, 26.76 (↓30.76)
Table A9: More results of different sparsification methods on CIFAR-10 with ResNet-18.

Sparsity(%) | Settings | Robust Accuracy (Best / Final / Diff.) | Standard Accuracy (Best / Final / Diff.) | Robust Generalization
0  | Baseline       | 51.10 / 43.61 / 7.49 | 81.15 / 83.38 / −2.23 | 38.82
95 | Small Dense    | 45.99 / 44.55 / 1.44 | 74.26 / 75.64 / −1.38 | 7.87 (↓30.95)
95 | Random Pruning | 45.64 / 44.18 / 1.46 | 75.20 / 75.20 / 0.00  | 7.96 (↓30.86)
95 | OMP            | 47.08 / 46.23 / 0.85 | 78.77 / 79.36 / −0.59 | 12.01 (↓26.81)
95 | SNIP           | 48.18 / 46.72 / 1.46 | 78.55 / 79.21 / −0.66 | 9.58 (↓29.24)
95 | GraSP          | 48.58 / 47.15 / 1.43 | 78.95 / 79.44 / −0.49 | 10.37 (↓28.45)
95 | SynFlow        | 48.93 / 48.22 / 0.71 | 78.70 / 78.90 / −0.20 | 8.25 (↓30.57)
95 | IGQ            | 48.82 / 47.56 / 1.26 | 79.44 / 79.76 / −0.32 | 9.33 (↓29.49)
95 | Robust Bird    | 47.53 / 46.48 / 1.05 | 78.33 / 78.78 / −0.45 | 9.20 (↓29.62)
95 | Flying Bird    | 49.62 / 48.46 / 1.16 | 78.12 / 81.43 / −3.31 | 13.32 (↓25.52)
95 | Flying Bird+   | 49.37 / 48.84 / 0.53 | 80.33 / 80.28 / 0.05  | 9.27 (↓29.55)
Figure A10: Layer-wise sparsity of different initial sparse masks (Uniform, GraSP, SNIP, SynFlow, IGQ, ERK) with ResNet-18.
A4.1 MORE RESULTS OF DIFFERENT SPARSITY
We report more results of subnetworks with 40%/60% sparsity on CIFAR-10/100 with ResNet-18 and VGG-16. As shown in Tables A13, A14, A15 and A16, our Flying Bird(+) achieves consistent improvements over the baseline unpruned networks, with 2.45% ∼ 19.81% narrower robust generalization gaps and comparable RA and SA performance.
A4.2 MORE RESULTS ON WIDERESNET
We further evaluate our flying bird(+) with WideResNet-34-10 on CIFAR-10 and report the results on Table A17. We can observe that compared with the dense network, our methods significantly shrink the robust generalization gap by up to 13.14% and maintain comparable RA/SA performance.
Table A10: More results of different sparsification methods on CIFAR-10 with VGG-16.

Sparsity(%) | Settings | Robust Accuracy (Best / Final / Diff.) | Standard Accuracy (Best / Final / Diff.) | Robust Generalization
0  | Baseline       | 48.33 / 42.73 / 5.60 | 76.84 / 79.73 / −2.89 | 28.00
80 | Random Pruning | 46.14 / 40.33 / 5.81 | 74.42 / 76.68 / −2.26 | 21.01 (↓6.99)
80 | OMP            | 47.90 / 43.19 / 4.71 | 76.60 / 80.02 / −3.42 | 24.97 (↓3.03)
80 | SNIP           | 48.03 / 43.17 / 4.86 | 76.68 / 80.08 / −3.40 | 24.71 (↓3.29)
80 | GraSP          | 47.91 / 42.34 / 5.57 | 75.74 / 78.87 / −3.13 | 23.65 (↓4.35)
80 | SynFlow        | 48.47 / 45.32 / 3.15 | 77.62 / 79.09 / −1.47 | 20.17 (↓7.83)
80 | IGQ            | 48.57 / 44.25 / 4.32 | 77.51 / 80.01 / −2.50 | 22.79 (↓5.21)
80 | Robust Bird    | 47.69 / 41.66 / 6.03 | 75.32 / 78.58 / −3.26 | 23.57 (↓4.43)
80 | Flying Bird    | 48.43 / 44.65 / 3.78 | 77.53 / 79.72 / −2.19 | 21.01 (↓6.99)
80 | Flying Bird+   | 48.25 / 45.24 / 3.01 | 77.48 / 79.55 / −2.07 | 17.75 (↓10.25)
90 | Random Pruning | 44.33 / 40.33 / 4.00 | 71.27 / 74.46 / −3.19 | 15.48 (↓12.52)
90 | OMP            | 47.84 / 43.34 / 4.50 | 75.60 / 79.10 / −3.50 | 18.29 (↓9.71)
90 | SNIP           | 47.76 / 44.27 / 3.49 | 75.92 / 79.62 / −3.70 | 17.85 (↓10.15)
90 | GraSP          | 45.96 / 42.12 / 3.84 | 75.19 / 77.03 / −1.84 | 15.04 (↓12.96)
90 | SynFlow        | 47.54 / 45.79 / 1.75 | 78.43 / 78.70 / −0.27 | 14.40 (↓13.60)
90 | IGQ            | 47.79 / 45.12 / 2.67 | 74.87 / 79.19 / −4.32 | 16.06 (↓11.94)
90 | Robust Bird    | 47.09 / 44.13 / 2.96 | 75.53 / 78.36 / −2.83 | 16.57 (↓11.43)
90 | Flying Bird    | 48.45 / 45.55 / 2.90 | 75.82 / 79.21 / −3.39 | 16.56 (↓11.44)
90 | Flying Bird+   | 48.39 / 46.26 / 2.13 | 78.73 / 79.12 / −0.39 | 12.47 (↓15.53)
A4.3 COMPARISON WITH EFFICIENT ADVERSARIAL TRAINING METHODS
To elaborate more about training efficiency, we compare our methods with two efficient training methods. Shafahi et al. (2019) proposed Free Adversarial Training that improves training efficiency by reusing the gradient information, which is orthogonal to our approaches and can be easily combined with our methods to pursue more efficiency by replacing the PGD-10 training with Free AT.
Additionally, Li et al. (2020) use magnitude pruning to locate sparse structures, which is similar to the OMP reported in Table 1, except that they use a smaller learning rate. Our methods achieve better performance and efficiency than OMP. Specifically, at 80% sparsity, our Flying Bird+ reaches a 4.49% narrower robust generalization gap and 1.54% higher RA while requiring 87.58% fewer training FLOPs. Also, our methods can be easily combined with Fast AT for further training efficiency.
A4.4 COMPARISON WITH OTHER PRUNING AND SPARSE TRAINING METHODS
Compared with the recent work of Özdenizci & Legenstein (2021), our Flying Bird(+) differs in both goals and methodologies. Firstly, Özdenizci & Legenstein (2021) pursue superior adversarial robust testing accuracy for sparsely connected networks, while we aim to investigate the relationship between sparsity and robust generalization, and demonstrate that introducing appropriate sparsity (e.g., LTH-based static sparsity or dynamic sparsity) into adversarial training
substantially alleviates the robust generalization gap and maintains comparable or even better standard/robust accuracies. Secondly, Özdenizci & Legenstein (2021) samples network connectivity from a learned posterior to form a sparse subnetwork. However, our flying bird first removes the parameters with the lowest magnitude, which ensures a small term of the first-order Taylor approximation of the loss and thus limits the impact on the output of networks (Evci et al., 2020a). And then, it allows new connectivity with the largest gradient to grow to reduce the loss quickly (Evci et al., 2020a). Furthermore, we propose an enhanced variant of Flying Bird, i.e., Flying Bird+, which not only learns the sparse topologies but also is capable of adaptively adjusting the network capacity to determine the right parameterization level “on-demand” during training, while Özdenizci & Legenstein (2021) stick to a fixed parameter budget.
Another work, HYDRA (Sehwag et al., 2020) also has several differences from our robust birds. Specifically, HYDRA starts from a robust pre-trained dense network, which requires at least hundreds of epochs for adversarial training. However, our robust bird’s pre-training only needs a few epochs of standard training. Therefore, Sehwag et al. (2020) has significantly higher computational costs, compared to ours. Then, Sehwag et al. (2020) adopt TRADES (Zhang et al., 2019) for adversarial training, which also requires auxiliary inputs of clean images, while our methods follow the classical adversarial training (Madry et al., 2018b) and only take adversarial perturbed samples as input. Moreover, for CIFAR-10 experiments, Sehwag et al. (2020) uses 500k additional pseudolabeled images from the Tiny-ImageNet dataset with a robust semi-supervised training approach. However, all our methods and experiments do not leverage any external data.
Furthermore, one concurrent work (Fu et al., 2021) demonstrates that there exist subnetworks with inborn robustness. Such randomly initialized networks have matching or even superior robust accuracy of adversarially trained networks with similar parameter counts. It’s interesting to utilize this finding for further improvement of robust generalization, and we will investigate it in future works.
A25 | 1. What is the focus of the paper regarding adversarial training?
2. What are the strengths of the proposed approach, particularly in terms of improving efficiency and generalization?
3. Are there any limitations or concerns regarding the experimental design and comparisons with other works?
4. How does the reviewer assess the novelty and significance of the paper's contributions?
5. Do you have any questions regarding the effectiveness and applicability of the proposed method?
Summary Of The Paper
This paper studies an important topic of improving adversarial training with different sparsity forms, and the authors made positive discoveries that injecting sparsity properly would make a win-win between efficiency and generalization.
All findings are not too surprising, but the authors’ effort of presenting a particularly comprehensive empirical study is acknowledged.
Review
(+) The idea is very interesting and promising. It is reasonable to find sparsity helps both reduce overfitting and improve training efficiency. For adversarial training, that could lead to very significant cost reduction.
(+) The authors’ study on how to inject sparsity is very comprehensive. They considered two alternatives for sparse adversarial training: a static Robust Bird (RB) training, and a dynamic Flying Bird (FB) training. The former identifies critical mask structure at early training stage, while the latter continues to optimize the mask throughout the entire training.
(+) It is great the authors show that their proposed methods can be combined to boost previous SOTAs. That definitely amplifies their work’s value.
(+/-) Experiments are a bit limited, using only two models (VGG and Res18). However, the aspects being evaluated as well as the ablations are very thorough, and I kinda agree the results mostly suffice to validate their points.
(-) However, the biggest issue I have with experiments is: they did not compare with latest efficient adversarial training methods, such as “Adversarial training for free!”, NeurIPS 2019; and “Towards practical lottery ticket hypothesis for adversarial training” in arXiv 2020. The latter one is also based on sparsity and has certain overlap with Robust Bird. The authors did compare with Fast AT, but that was placed in the Appendix only.
(-) Furthermore, even standard adversarial training or adversarial pruning methods could have been made cheaper, by either early-stopping with fewer epochs, or using fast/free-AT to replace their more expensive AT sub-modules. The authors need to convincingly discuss whether those methods could serve as competitive baselines, rather than ignoring them.
(-) It is unclear to me whether the final robustness gain comes from the good mask structure or just sparsity itself. In particular, random pruning in Table 1 performs better than I would expect, inviting the aforementioned question.
(-) More sparsity levels should have been tried.
ICLR | Title
Multi-Objective Online Learning
Abstract
This paper presents a systematic study of multi-objective online learning. We first formulate the framework of Multi-Objective Online Convex Optimization, which encompasses a novel multi-objective regret. This regret is built upon a sequence-wise extension of the commonly used discrepancy metric Pareto suboptimality gap in zero-order multi-objective bandits. We then derive an equivalent form of the regret, making it amenable to be optimized via first-order iterative methods. To motivate the algorithm design, we give an explicit example in which equipping OMD with the vanilla min-norm solver for gradient composition incurs a linear regret, which shows that merely regularizing the iterates, as in single-objective online learning, is not enough to guarantee sublinear regrets in the multi-objective setting. To resolve this issue, we propose a novel min-regularized-norm solver that regularizes the composite weights. Combining min-regularized-norm with OMD results in the Doubly Regularized Online Mirror Multiple Descent algorithm. We further derive the multi-objective regret bound for the proposed algorithm, which matches the optimal bound in the single-objective setting. Extensive experiments on several real-world datasets verify the effectiveness of the proposed algorithm.
1 INTRODUCTION
Traditional optimization methods for machine learning are usually designed to optimize a single objective. However, in many real-world applications, we are often required to optimize multiple correlated objectives concurrently. For example, in autonomous driving (Huang et al., 2019; Lu et al., 2019b), self-driving vehicles need to solve multiple tasks such as self-localization and object identification at the same time. In online advertising (Ma et al., 2018a;b), advertising systems need to decide on the exposure of items to different users to maximize both the Click-Through Rate (CTR) and the Post-Click Conversion Rate (CVR). In most multi-objective scenarios, the objectives may conflict with each other (Kendall et al., 2018). Hence, there may not exist any single solution that can optimize all the objectives simultaneously. For example, merely optimizing CTR or CVR will degrade the performance of the other (Ma et al., 2018a;b).
Multi-objective optimization (MOO) (Marler & Arora, 2004; Deb, 2014) is concerned with optimizing multiple conflicting objectives simultaneously. It seeks Pareto optimality, where no single objective can be improved without hurting the performance of others. Many different methods for MOO have been proposed, including evolutionary methods (Murata et al., 1995; Zitzler & Thiele, 1999), scalarization methods (Fliege & Svaiter, 2000), and gradient-based iterative methods (Désidéri, 2012). Recently, the Multiple Gradient Descent Algorithm (MGDA) and its variants have been introduced to the training of multi-task deep neural networks and achieved great empirical success (Sener & Koltun, 2018), making them regain a significant amount of research interest (Lin et al., 2019; Yu et al., 2020; Liu et al., 2021). These methods compute a composite gradient based on
the gradient information of all the individual objectives and then apply the composite gradient to update the model parameters. The composite weights are determined by a min-norm solver (Désidéri, 2012) which yields a common descent direction of all the objectives.
∗Equal contributions. †Corresponding author.
However, compared to their increasingly wide application prospects, gradient-based iterative algorithms are relatively understudied, especially in the online learning setting. Multi-objective online learning is of essential importance for two reasons. First, due to the data explosion in many real-world scenarios such as web applications, making in-time predictions requires performing online learning. Second, the theoretical investigation of multi-objective online learning will lay a solid foundation for the design of new optimizers for multi-task deep learning. This is analogous to the single-objective setting, where nearly all the optimizers for training DNNs were initially analyzed in the online setting, such as AdaGrad (Duchi et al., 2011), Adam (Kingma & Ba, 2015), and AMSGrad (Reddi et al., 2018).
In this paper, we give a systematic study of multi-objective online learning. To begin with, we formulate the framework of Multi-Objective Online Convex Optimization (MO-OCO). One major challenge in deriving MO-OCO is the lack of a proper regret definition. In the multi-objective setting, in general, no single decision can optimize all the objectives simultaneously. Thus, to devise the multi-objective regret, we need to first extend the single fixed comparator used in the singleobjective regret, i.e., the fixed optimal decision, to the entire Pareto optimal set. Then we need an appropriate discrepancy metric to evaluate the gap between vector-valued losses. Intuitively, the Pareto suboptimality gap (PSG) metric, which is frequently used in zero-order multi-objective bandits (Turgay et al., 2018; Lu et al., 2019a), is a very promising candidate. PSG can yield scalarized measurements from any vector-valued loss to a given comparator set. However, we find that vanilla PSG is unsuitable for our setting since it always yields non-negative values and may be too loose. In a concrete example, we show that the naive PSG-based regret RI(T ) can even be linear w.r.t. T when the decisions are already optimal, which disqualifies it as a regret metric. To overcome the failure of vanilla PSG, we propose its sequence-wise variant termed S-PSG, which measures the suboptimality of the whole decision sequence to the Pareto optimal set of the cumulative loss function. Optimizing the resulting regret RII(T ) will drive the cumulative loss to approach the Pareto front. However, as a zero-order metric motivated geometrically, designing appropriate first-order algorithms to directly optimize it is too difficult. To resolve the issue, we derive a more intuitive equivalent form of RII(T ) via a highly non-trivial transformation.
Based on the MO-OCO framework, we develop a novel multi-objective online algorithm termed Doubly Regularized Online Mirror Multiple Descent. The key module of the algorithm is the gradient composition scheme, which calculates a composite gradient in the form of a convex combination of the gradients of all objectives. Intuitively, the most direct way to determine the composite weights is to apply the min-norm solver (Désidéri, 2012) commonly used in offline multi-objective optimization. However, directly applying min-norm is not workable in the online setting. Specifically, the composite weights in min-norm are merely determined by the gradients at the current round. In the online setting, since the gradients are adversarial, they may result in undesired composite weights, which further produce a composite gradient that reversely optimizes the loss. To rigorously verify this point, we give an example where equipping OMD with vanilla min-norm incurs a linear regret, showing that only regularizing the iterate, as in OMD, is not enough to guarantee sublinear regrets in our setting. To fix the issue, we devise a novel min-regularized-norm solver with an explicit regularization on composite weights. Equipping it with OMD results in our proposed algorithm. In theory, we derive a regret bound of O( √ T ) for DR-OMMD, which matches the optimal bound in the single-objective setting (Hazan et al., 2016) and is tight w.r.t. the number of objectives. Our analysis also shows that DR-OMMD attains a smaller regret bound than that of linearization with fixed composite weights. We show that, in the two-objective setting with linear losses, the margin between the regret bounds depends on the difference between the composite weights yielded by the two algorithms and the difference between the gradients of the two underlying objectives.
To evaluate the effectiveness of DR-OMMD, we conduct extensive experiments on several largescale real-world datasets. We first realize adaptive regularization via multi-objective optimization, and find that adaptive regularization with DR-OMMD significantly outperforms fixed regularization with linearization, which verifies the effectiveness of DR-OMMD over linearization in the convex setting. Then we apply DR-OMMD to deep online multi-task learning. The results show that DROMMD is also effective in the non-convex setting.
2 PRELIMINARIES
In this section, we briefly review the necessary background knowledge of two related fields.
2.1 MULTI-OBJECTIVE OPTIMIZATION
Multi-objective optimization (MOO) is concerned with solving problems that optimize multiple objectives simultaneously (Fliege & Svaiter, 2000; Deb, 2014). In general, since different objectives may conflict with each other, there is no single solution that can optimize all the objectives at the same time; hence the conventional concept of optimality used in the single-objective setting is no longer suitable. Instead, MOO seeks to achieve Pareto optimality. In the following, we give the relevant definitions more formally. We use a vector-valued loss F = (f^1, . . . , f^m) to denote the objectives, where m ≥ 2 and f^i : X → R, i ∈ {1, . . . ,m}, X ⊂ R^n, is the i-th loss function. Definition 1 (Pareto optimality). (a) For any two solutions x, x′ ∈ X, we say that x dominates x′, denoted as x ≺ x′ or x′ ≻ x, if f^i(x) ≤ f^i(x′) for all i, and there exists one i such that f^i(x) < f^i(x′); otherwise, we say that x does not dominate x′, denoted as x ⊀ x′ or x′ ⊁ x. (b) A solution x∗ ∈ X is called Pareto optimal if it is not dominated by any other solution in X.
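As a small illustration of Definition 1(a), the following NumPy snippet (our own example, not from the paper) checks whether one loss vector dominates another.

```python
import numpy as np

def dominates(F_x, F_xp):
    """True iff x dominates x': no objective is worse and at least one is strictly better."""
    F_x, F_xp = np.asarray(F_x), np.asarray(F_xp)
    return bool(np.all(F_x <= F_xp) and np.any(F_x < F_xp))

# e.g., dominates([1.0, 2.0], [1.0, 3.0]) -> True; dominates([1.0, 2.0], [2.0, 1.0]) -> False
```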
Note that there may exist multiple Pareto optimal solutions. For example, it is easy to show that the optimizer of any single objective, i.e., x∗i ∈ argminx∈X f i(x), i ∈ {1, . . . ,m}, is Pareto optimal. Different Pareto optimal solutions reflect different trade-offs among the objectives (Lin et al., 2019). Definition 2 (Pareto front). (a) All Pareto optimal solutions form the Pareto set PX (F ). (b) The image of PX (F ) constitutes the Pareto front, denoted as P(H) = {F (x) | x ∈ PX (F )}.
Now that we have established the notion of optimality in MOO, we proceed to introduce the metrics that measure the discrepancy of an arbitrary solution x ∈ X from being optimal. Recall that, in the single-objective setting with merely one loss function f : Z → R, for any z ∈ Z , the loss difference f(z) − minz′′∈Z f(z′′) is directly qualified for the discrepancy measure. However, in MOO with more than one loss, for any x ∈ X , the loss difference F (x) − F (x′′), where x′′ ∈ PX (F ), is a vector. Intuitionally, the desired discrepancy metric shall scalarize the vector-valued loss difference and yield 0 for any Pareto optimal solution. In general, in MOO, there are two commonly used discrepancy metrics, i.e., Pareto suboptimality gap (PSG) (Turgay et al., 2018) and Hypervolume (HV) (Bradstreet, 2011). As HV is a complex volume-based metric, it is more difficult to optimize via gradient-based algorithms (Zhang & Golovin, 2020). Hence in this paper, we adopt PSG, which has already been extensively used in multi-objective bandits (Turgay et al., 2018; Lu et al., 2019a). Definition 3 (Pareto suboptimality gap1). For any x ∈ X , the Pareto suboptimality gap to a given comparator set Z ⊂ X , denoted as ∆(x;Z, F ), is defined as the minimal scalar ϵ ≥ 0 that needs to be subtracted from all entries of F (x), such that F (x)− ϵ1 is not dominated by any point in Z , where 1 denotes the all-one vector in Rm, i.e.,
\Delta(x; \mathcal{Z}, F) = \inf_{\epsilon \ge 0} \epsilon, \quad \text{s.t. } \forall x'' \in \mathcal{Z},\ \exists\, i \in \{1, \dots, m\},\ f^i(x) - \epsilon < f^i(x'').
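For a finite comparator set, the infimum above has a simple closed form: the largest, over comparators, of the smallest per-objective loss gap, floored at zero. The NumPy sketch below (our own illustration) computes it this way.

```python
import numpy as np

def pareto_suboptimality_gap(F_x, F_comparators):
    """PSG of a point with loss vector F_x (shape [m]) against a finite comparator set
    whose losses are stacked in F_comparators (shape [k, m])."""
    gaps = np.asarray(F_x)[None, :] - np.asarray(F_comparators)   # [k, m]
    return float(max(0.0, gaps.min(axis=1).max()))
```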
Clearly, PSG is a distance-based discrepancy metric motivated from a purely geometric viewpoint. In practice, the comparator set Z is often set to be the Pareto set X∗ = P_X(F) (Turgay et al., 2018); therein, for any x ∈ X, its PSG is always non-negative and equals zero if and only if x ∈ P_X(F). The Multiple Gradient Descent Algorithm (MGDA) is an offline first-order MOO algorithm (Fliege & Svaiter, 2000; Désidéri, 2012). At each iteration l ∈ {1, . . . , L} (L is the number of iterations), it first computes the gradient ∇f^i(x_l) of each objective, then derives the composite gradient g^comp_l = Σ_{i=1}^m λ^i_l ∇f^i(x_l) as a convex combination of these gradients, and finally applies g^comp_l to execute a gradient descent step to update the decision, i.e., x_{l+1} = x_l − η g^comp_l (η is the step size). The core part of MGDA is the module that determines the composite weights λ_l = (λ^1_l, . . . , λ^m_l), given by
\lambda_l = \operatorname*{argmin}_{\lambda_l \in \mathcal{S}_m} \Big\| \sum_{i=1}^{m} \lambda_l^i \nabla f^i(x_l) \Big\|_2^2,
where S_m = {λ ∈ R^m | Σ_{i=1}^m λ^i = 1, λ^i ≥ 0, i ∈ {1, . . . ,m}} is the probabilistic simplex in R^m. This is a min-norm solver, which finds the weights in the simplex that yield the minimum L2-norm of the composite gradient. Thus MGDA is also called the min-norm method. Previous works (Désidéri, 2012; Sener & Koltun, 2018) showed that when all f^i are convex functions, MGDA is guaranteed to decrease all the objectives simultaneously until it reaches a Pareto optimal decision.
1Our definition looks a bit different from (Turgay et al., 2018). In Appendix B, we show they are equivalent.
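For two objectives, this min-norm problem has a closed-form solution; the PyTorch sketch below (our own illustration, assuming flattened gradient vectors g1 and g2) computes it.

```python
import torch

def min_norm_weights_2obj(g1, g2):
    """argmin_{lam in [0,1]} || lam*g1 + (1-lam)*g2 ||_2^2 for flattened gradients g1, g2."""
    diff = g1 - g2
    denom = torch.dot(diff, diff)
    if denom.item() < 1e-12:                    # gradients (nearly) identical
        return 0.5, 0.5
    lam = torch.dot(g2 - g1, g2) / denom        # stationary point of the quadratic
    lam = float(lam.clamp(0.0, 1.0))            # keep the weights on the simplex
    return lam, 1.0 - lam
```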
2.2 ONLINE CONVEX OPTIMIZATION
Online Convex Optimization (OCO) (Zinkevich, 2003; Hazan et al., 2016) is the most commonly adopted framework for designing online learning algorithms. It can be viewed as a structured repeated game between a learner and an adversary. At each round t ∈ {1, . . . , T}, the learner is required to generate a decision xt from a convex compact set X ⊂ Rn. Then the adversary replies the learner with a convex function ft : X → R and the learner suffers the loss ft(xt). The goal of the learner is to minimize the regret with respect to the best fixed decision in hindsight, i.e.,
R(T) = \sum_{t=1}^{T} f_t(x_t) - \min_{x^* \in \mathcal{X}} \sum_{t=1}^{T} f_t(x^*).
A meaningful regret is required to be sublinear in T , i.e., limT→∞ R(T )/T = 0, which implies that when T is large enough, the learner can perform as well as the best fixed decision in hindsight.
Online Mirror Descent (OMD) (Hazan et al., 2016) is a classic first-order online learning algorithm. At each round t ∈ {1, . . . , T}, OMD yields its decision via
x_{t+1} = \operatorname*{argmin}_{x \in \mathcal{X}} \; \eta \langle \nabla f_t(x_t), x \rangle + B_R(x, x_t),
where η is the step size, R : X → R is the regularization function, and BR(x,x′) = R(x)−R(x′)− ⟨∇R(x′),x − x′⟩ is the Bregman divergence induced by R. As a meta-algorithm, by instantiating different regularization functions, OMD can induce two important algorithms, i.e., Online Gradient Descent (Zinkevich, 2003) and Online Exponentiated Gradient (Hazan et al., 2016).
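As a concrete special case, instantiating the regularization with R(x) = ½‖x‖²₂ turns the update above into projected Online Gradient Descent; the sketch below (our own illustration, assuming the decision set is an L2 ball) shows a single step.

```python
import torch

def ogd_step(x, grad, eta, radius=1.0):
    """OMD with R(x) = 0.5 * ||x||_2^2: a gradient step followed by Euclidean projection
    back onto the decision set (assumed here to be an L2 ball of the given radius)."""
    x_new = x - eta * grad
    norm = x_new.norm()
    return x_new * (radius / norm) if norm > radius else x_new
```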
3 MULTI-OBJECTIVE ONLINE CONVEX OPTIMIZATION
In this section, we formally formulate the MO-OCO framework.
Framework overview. Analogously to single-objective OCO, MO-OCO can be viewed as a repeated game between an online learner and the adversarial environment. The main difference is that in MO-OCO, the feedback is vector-valued. The general framework of MO-OCO is given as follows. At each round t ∈ {1, . . . , T}, the learner generates a decision xt from a given convex compact decision set X ⊂ Rn. Then the adversary replies the decision with a vector-valued loss function Ft : X → Rm, whose i-th component f it : X → R is a convex function corresponding to the i-th objective, and the learner suffers the vector-valued loss Ft(xt). The goal of the learner is to generate a sequence of decisions {xt}Tt=1 to minimize a certain kind of multi-objective regret. The remaining work in framework formulation is to give an appropriate regret definition, which is the most challenging part. Recall that the single-objective regret R(T ) = ∑T t=1 ft(xt)− ∑T t=1 ft(x
∗) is defined as the difference between the cumulative loss of the actual decisions {x_t}_{t=1}^T and that of the fixed optimal decision in hindsight x∗ ∈ argmin_{x∈X} Σ_{t=1}^T f_t(x). When defining the multi-objective analogy to R(T), we encounter two issues. First, in the multi-objective setting, no single decision can optimize all the objectives simultaneously in general, hence we cannot compare the cumulative loss with that of any single decision. Instead, we use the Pareto optimal set X∗ of the cumulative loss function Σ_{t=1}^T F_t, i.e., X∗ = P_X(Σ_{t=1}^T F_t), which naturally aligns with the optimality concept in MOO. Second, to compare {x_t}_{t=1}^T and X∗ in the loss space, we need a discrepancy metric to measure the gap between vector losses. Intuitively, we can adopt the commonly used PSG metric (Turgay et al., 2018). But we find that vanilla PSG is not appropriate for OCO, which is largely different from the bandits setting. We explicate the reason in the following.
3.1 THE NAIVE REGRET BASED ON VANILLA PSG FAILS IN MO-OCO
By definition, at each round t, the difference between the decision xt and the Pareto optimal set can be evaluated by PSG ∆(xt;X ∗, Ft). Naturally, we can formulate the multi-objective regret by accumulating ∆(xt;X ∗, Ft) over all rounds, i.e.,
R^{I}(T) := \sum_{t=1}^{T} \Delta(x_t; \mathcal{X}^*, F_t).
Recall that the single-objective regret can also be expressed as R(T) = Σ_{t=1}^T (f_t(x_t) − f_t(x∗)). Hence, R^I(T) essentially extends the scalar discrepancy f_t(x_t) − f_t(x∗) to the PSG metric ∆(x_t; X∗, F_t). However, these two discrepancy metrics have a major difference, i.e., f_t(x_t) − f_t(x∗) can be negative, whereas ∆(x_t; X∗, F_t) is always non-negative. In previous bandits settings (Turgay et al., 2018), the discrepancy is intrinsically non-negative, since the comparator set is exactly the Pareto optimal set of the evaluated loss function. However, the non-negative property of PSG can be problematic in our setting, where the comparator set X∗ is the Pareto set of the cumulative loss function, rather than the instantaneous loss F_t that is used for evaluation. Specifically, at some round t, the decision x_t may Pareto dominate all points in X∗ w.r.t. F_t, which corresponds to the single-objective setting where it is possible that f_t(x_t) < f_t(x∗) at some specific round. In this case, we would expect the discrepancy metric at this round to be negative. However, PSG can only yield 0 in this case, making the regret much looser than we expect. In the following, we provide an example in which the naive regret R^I(T) is linear w.r.t. T even when the decisions x_t are already optimal.
Problem instance. Set X = [−2, 2]. Let the loss function be identical among all objectives, i.e., f1t (x) = ... = f m t (x), and alternate between x and −x. Suppose the time horizon T is an even number, then the Pareto optimal set X ∗ = X . Now consider the decisions xt = 1, t ∈ {1, ..., T}. In this case, it can easily be checked that the single-objective regret of each objective is zero, indicating that these decisions are optimal for each objective. To calculate RI(T ), notice that when all the objectives are identical, PSG reduces to ∆(xt;X ∗, f1t ) = supx∗∈X max{f1t (xt) − f1t (x∗), 0} at each round t. Hence, in this case we have RI(T ) = ∑ 1≤k≤T/2(supx∗∈[−2,2] max{1 − x∗, 0} + supx∗∈[−2,2] max{x∗ − 1, 0}) = 3T , which is linear w.r.t. T . Therefore, RI(T ) is too loose to measure the suboptimality of decisions, which is unqualified as a regret metric.
3.2 THE ALTERNATIVE REGRET BASED ON SEQUENCE-WISE PSG
In light of the failure of the naive regret, we need to modify the discrepancy metric in our setting. Recall that the single-objective regret can be interpreted as the gap between the actual cumulative loss ∑_{t=1}^T ft(xt) and its optimal value min_{x∈X} ∑_{t=1}^T ft(x). In analogy, we can measure the gap between ∑_{t=1}^T Ft(xt) and the Pareto front P∗ = P_X(∑_{t=1}^T Ft). However, vanilla PSG is a point-wise metric, i.e., it can only measure the suboptimality of a single decision point. To evaluate the decision sequence {xt}_{t=1}^T, we modify its definition and propose a sequence-wise variant of PSG.
Definition 4 (Sequence-wise PSG). For any decision sequence {xt}_{t=1}^T, the sequence-wise PSG (S-PSG) to a given comparator set² X∗ w.r.t. the loss sequence {Ft}_{t=1}^T is defined as
∆({xt}_{t=1}^T; X∗, {Ft}_{t=1}^T) = inf_{ϵ≥0} ϵ, s.t. ∀x′′ ∈ X∗, ∃ i ∈ {1, . . . ,m}, ∑_{t=1}^T f^i_t(xt) − ϵ < ∑_{t=1}^T f^i_t(x′′).
Since X∗ is the Pareto set of ∑_{t=1}^T Ft, S-PSG measures the discrepancy from the cumulative loss of the decision sequence to the Pareto front P∗. Now the regret can be directly given as
RII(T) := ∆({xt}_{t=1}^T; X∗, {Ft}_{t=1}^T).
RII(T) has a clear physical meaning: optimizing it drives the cumulative loss toward the Pareto front P∗. However, PSG (and S-PSG) is a zero-order metric motivated in a purely geometric sense: its calculation requires solving a constrained optimization problem with an unknown boundary {Ft(x′′) | x′′ ∈ X∗}. This makes it difficult to design a first-order algorithm that directly optimizes PSG-based regrets, let alone to analyze one. To resolve this issue, we derive an equivalent form via a highly non-trivial transformation, which is more intuitive than the original form.
Proposition 1. The multi-objective regret RII(T) based on S-PSG has an equivalent form, i.e.,
RII(T) = max{ sup_{x∗∈X∗} inf_{λ∗∈Sm} ∑_{t=1}^T λ∗⊤(Ft(xt) − Ft(x∗)), 0 }.
Remark. (i) The above form is closely related to the single-objective regret R(T). Specifically, when m = 1, we can prove that RII(T) = max{∑_{t=1}^T Ft(xt) − min_{x∗∈X∗} ∑_{t=1}^T Ft(x∗), 0} = max{R(T), 0}.
²It is equivalent to use either X∗ or X as the comparator set. See Appendix C for the detailed proof.
Algorithm 1 Doubly Regularized Online Mirror Multiple Descent (DR-OMMD)
1: Input: Convex set X, time horizon T, regularization parameter αt, learning rate ηt, regularization function R, user preference λ0.
2: Initialize: x1 ∈ X.
3: for t = 1, . . . , T do
4:   Predict xt and receive a loss function Ft : X → Rm.
5:   Compute the multiple gradients ∇Ft(xt) = [∇f^1_t(xt), . . . , ∇f^m_t(xt)] ∈ R^{n×m}.
6:   Determine the weights for the gradient composition via min-regularized-norm: λt = argmin_{λ∈Sm} ∥∇Ft(xt)λ∥²₂ + αt∥λ − λ0∥₁.
7:   Compute the composite gradient gt = ∇Ft(xt)λt.
8:   Perform online mirror descent using gt: xt+1 = argmin_{x∈X} ηt⟨gt, x⟩ + B_R(x, xt).
9: end for
Note that in the regret analysis, we are more interested in the case of R(T) ≥ 0 (where RII(T) = R(T)), since when R(T) < 0 it is trivially bounded by any sublinear regret bound. Hence, RII(T) is essentially aligned with R(T) in the single-objective setting. (ii) At first glance, RII(T) can be optimized via linearization with fixed weights λ0 ∈ Sm, or alternatively, by optimizing a single objective i ∈ {1, ...,m}. We remark that this is not a flaw of our regret definition, but an intrinsic consequence of Pareto optimality. Specifically, Pareto optimality characterizes the status where no objective can be improved without hurting others; hence merely optimizing a single objective naturally achieves Pareto optimality. Please refer to Proposition 8 in (Emmerich & Deutz, 2018) for a rigorous proof. As a general performance metric, our regret should incorporate this special case. Later, we will design a novel algorithm based on the concept of common descent, which outperforms linearization in both theory and experiment.
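To make the equivalent form in Proposition 1 concrete, the sketch below evaluates it directly when the cumulative losses are known and the Pareto set is approximated by a finite grid of candidates; the helper names and the grid approximation are assumptions made for illustration only.

```python
import numpy as np

def mo_regret_equivalent_form(cum_loss_of_decisions, pareto_candidates, cum_loss_fn):
    """Evaluate R_II(T) via the equivalent form in Proposition 1.

    cum_loss_of_decisions: length-m vector  sum_t F_t(x_t)
    pareto_candidates    : finite approximation of the Pareto set X*
    cum_loss_fn          : callable x -> sum_t F_t(x), a length-m vector
    """
    best = -np.inf
    for x_star in pareto_candidates:
        d = np.asarray(cum_loss_of_decisions) - np.asarray(cum_loss_fn(x_star))
        # the inner infimum of a linear function over the simplex is attained
        # at a vertex, i.e. at the smallest coordinate of d
        best = max(best, float(d.min()))
    return max(best, 0.0)

# On the instance of Section 3.1 (identical objectives alternating between x and -x,
# decisions x_t = 1, even T), both cumulative losses are identically zero,
# so R_II(T) = 0 while the naive regret R_I(T) = 2T.
m = 2
grid = np.linspace(-2.0, 2.0, 401)
print(mo_regret_equivalent_form(np.zeros(m), grid, lambda x: np.zeros(m)))   # 0.0
```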
4 DOUBLY REGULARIZED ONLINE MIRROR MULTIPLE DESCENT
In this section, we present the Doubly Regularized Online Mirror Multiple Descent (DR-OMMD) algorithm, whose protocol is given in Algorithm 1. At each round t, the learner first computes the gradient of the loss for each objective, then determines the composite weights of these gradients, and finally applies the composite gradient in an online mirror descent step.
4.1 VANILLA MIN-NORM MAY INCUR LINEAR REGRETS
The core module of DR-OMMD is the composition of gradients. For simplicity, denote the gradients at round t in matrix form as ∇Ft(xt) = [∇f^1_t(xt), . . . , ∇f^m_t(xt)] ∈ R^{n×m}. Then the composite gradient is gt = ∇Ft(xt)λt, where λt denotes the composite weights. As illustrated in the preliminaries, in the offline setting the min-norm method (Désidéri, 2012; Sener & Koltun, 2018) is the classic way to determine the composite weights; it produces a common descent direction that can decrease all the losses simultaneously. Thus, it is tempting to apply it to the online setting.
However, directly applying min-norm in the online setting does not work and may even incur linear regret. In vanilla min-norm, the composite weights λt are determined solely by the gradients ∇Ft(xt) at the current round t, which are very sensitive to the instantaneous loss Ft. In the online setting, the losses at each round can be adversarially chosen, and thus the corresponding gradients can be adversarial. These adversarial gradients may result in undesired composite weights, which may in turn produce a composite gradient that even deteriorates the next prediction. In the following, we provide an example in which min-norm incurs a linear regret. We extend OMD (Hazan et al., 2016) to the multi-objective setting, where the composite weights are directly yielded by min-norm.
Problem instance. We consider a two-objective problem. The decision domain is X = {(u, v) | u + v ≤ 1/2, v − u ≤ 1/2, v ≥ 0} and the loss function at each round is
Ft(x) = (∥x − a∥²₂, ∥x − b∥²₂) for t = 2k − 1, k = 1, 2, ..., and Ft(x) = (∥x − b∥²₂, ∥x − c∥²₂) for t = 2k, k = 1, 2, ...,
where a = (−2,−1), b = (0, 1), c = (2,−1). For simplicity, we first analyze the case where the total time horizon T is an even number. Then we can compute the Pareto set of the cumulative loss ∑_{t=1}^T Ft, i.e., X∗ = {(u, 0) | −1/2 ≤ u ≤ 1/2}, which lies on the x-axis. For conciseness of analysis, we instantiate OMD with L2-regularization, which results in the simple OGD algorithm (McMahan, 2011). We start at an arbitrary point x1 = (u1, v1) ∈ X satisfying v1 > 0. At each round t, suppose the decision is xt = (ut, vt); then the gradient of each objective w.r.t. xt is
g^1_t = (2ut + 4, 2vt + 2) for t = 2k − 1, and g^1_t = (2ut, 2vt − 2) for t = 2k;
g^2_t = (2ut, 2vt − 2) for t = 2k − 1, and g^2_t = (2ut − 4, 2vt + 2) for t = 2k.
Since 0 ≤ vt ≤ 1/2, we observe that the second entry of either gradient alternates between positive and negative. By using min-norm, the composite weights λt can be computed as
λt = ((1 − ut − vt)/4, (3 + ut + vt)/4) for t = 2k − 1, and λt = ((3 − ut + vt)/4, (1 + ut − vt)/4) for t = 2k.
We observe that both entries of the composite weights alternate between values above 1/2 and below 1/2, with ∥λt+1 − λt∥₁ ≥ 1. Recall that ∥λt∥₁ = 1, hence the composite weights at two consecutive rounds change radically. The resulting composite gradient is
g^comp_t = (ut − vt + 1, −ut + vt − 1) for t = 2k − 1, and g^comp_t = (ut + vt − 1, ut + vt − 1) for t = 2k.
The fluctuating composite weights mix with the positive and negative second entries of the gradients, making the second entry of g^comp_t always negative, i.e., −ut + vt − 1 < 0 and ut + vt − 1 < 0 (both hold since vt − ut ≤ 1/2 and ut + vt ≤ 1/2 on X). Hence g^comp_t always drives xt away from the Pareto set X∗ that coincides with the x-axis. This essentially optimizes the loss in the reverse direction and thereby increases the regret. In fact, we can prove that it even incurs a linear regret. Due to the lack of space, we defer the proof of the linear regret, as well as the case where T is an odd number, to Appendix H. The above results of the problem instance are summarized as follows.
Proposition 2. For OMD equipped with vanilla min-norm, there exists a multi-objective online convex optimization problem, in which the resulting algorithm incurs a linear regret.
Remark. Stability is a basic requirement to ensure meaningful regrets in online learning (McMahan, 2017). In the single-objective setting, directly regularizing the iterate xt (e.g., OMD) is enough. However, as shown in the above analysis, merely regularizing xt is not enough to attain sublinear regrets in the multi-objective setting, since there is another source of instability, i.e., the composite weights, that affects the direction of composite gradients. Therefore, in multi-objective online learning, besides regularizing the iterates, we also need to explicitly regularize the composite weights.
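The instability can also be seen numerically. The sketch below is illustrative only (the evaluation point is arbitrary); it computes the exact min-norm weights for the two round types of the instance above and shows that the v-entry of the composite gradient is negative in both cases, so a plain gradient step keeps pushing the iterate away from the Pareto set on the x-axis while the weights swing widely between rounds.

```python
import numpy as np

def min_norm_weights_2(g1, g2):
    # Exact min-norm weight for two gradients: gamma minimizing
    # ||gamma*g1 + (1-gamma)*g2||_2^2 over [0, 1].
    denom = float(np.dot(g1 - g2, g1 - g2))
    gamma = 0.5 if denom == 0.0 else float(np.dot(g2 - g1, g2)) / denom
    return min(max(gamma, 0.0), 1.0)

def gradients(u, v, odd_round):
    if odd_round:   # F_t = (||x - a||^2, ||x - b||^2) with a = (-2, -1), b = (0, 1)
        return np.array([2*u + 4, 2*v + 2.0]), np.array([2*u, 2*v - 2.0])
    else:           # F_t = (||x - b||^2, ||x - c||^2) with c = (2, -1)
        return np.array([2*u, 2*v - 2.0]), np.array([2*u - 4, 2*v + 2.0])

u, v = 0.1, 0.2                       # any point of X with v > 0
for odd_round in (True, False):
    g1, g2 = gradients(u, v, odd_round)
    gamma = min_norm_weights_2(g1, g2)
    composite = gamma * g1 + (1 - gamma) * g2
    print(odd_round, round(gamma, 3), composite)
# gamma jumps from 0.175 to 0.775 between the two round types, and the second
# (v-)entry of the composite gradient is negative in both cases, so the OGD step
# x - eta * composite increases v and drifts away from the x-axis.
```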
4.2 THE ALGORITHM
Inspired by the design of regularization in FTRL (McMahan, 2017), we consider the regularizer r(λ, λ0), where λ0 is a pre-defined set of composite weights that may reflect the user preference. This results in a new solver called min-regularized-norm, i.e.,
λt = argmin_{λ∈Sm} ∥∇Ft(xt)λ∥²₂ + αt r(λ, λ0),
where αt is the regularization strength. Equipping OMD with the new solver, we derive the proposed algorithm. Note that beyond the regularization on the iterate xt that is intrinsic in online learning, there is another regularization on the composite weights λt in min-regularized-norm. Both regularizations are fundamental, and they together ensure stability in the multi-objective online setting. Hence we call the algorithm Doubly Regularized Online Mirror Multiple Descent (DR-OMMD).
In principle, r can take various forms such as the L1-norm, the L2-norm, etc. Here we adopt the L1-norm since it aligns well with the simplex constraint on λ. Min-regularized-norm can be computed very efficiently. When m = 2, it has a closed-form solution. Specifically, suppose the gradients at round t are g1 and g2. Set γ_L = (g2⊤(g2 − g1) − αt)/∥g2 − g1∥²₂ and γ_R = (g2⊤(g2 − g1) + αt)/∥g2 − g1∥²₂. Given any λ0 = (γ0, 1 − γ0) ∈ S2, we can compute the composite weights λt as (γt, 1 − γt), where
γt = max{min{γ′′_t, 1}, 0}, with γ′′_t = max{min{γ0, γ_R}, γ_L}.
When m > 2, since the constraint Sm is a simplex, we can introduce a Frank-Wolfe solver (Jaggi, 2013) (see detailed protocol in Appendix E.1). We also discuss the L2-norm case in Appendix E.2.
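A minimal sketch of DR-OMMD for two objectives is given below, assuming the squared-Euclidean regularizer R(x) = ½∥x∥²₂ (so the mirror step becomes projected online gradient descent on an L2 ball) and using the closed-form min-regularized-norm solver above. All function names, the ball constraint, and the gradient-oracle signature are illustrative choices rather than part of the paper.

```python
import numpy as np

def min_regularized_norm_2(g1, g2, gamma0, alpha):
    # Closed-form min-regularized-norm for m = 2 with L1 regularization:
    # returns (gamma_t, 1 - gamma_t) minimizing
    # ||gamma*g1 + (1-gamma)*g2||_2^2 + alpha * ||lambda - lambda0||_1.
    denom = float(np.dot(g2 - g1, g2 - g1))
    if denom == 0.0:                          # identical gradients: keep the preference
        return np.array([gamma0, 1.0 - gamma0])
    gamma_L = (float(np.dot(g2, g2 - g1)) - alpha) / denom
    gamma_R = (float(np.dot(g2, g2 - g1)) + alpha) / denom
    gamma = max(min(gamma0, gamma_R), gamma_L)    # pull gamma0 into [gamma_L, gamma_R]
    gamma = max(min(gamma, 1.0), 0.0)             # clip to the feasible range [0, 1]
    return np.array([gamma, 1.0 - gamma])

def dr_ommd_two_objectives(grad_fn, T, x0, radius, eta, alpha, gamma0=0.5):
    # DR-OMMD with R(x) = 0.5*||x||_2^2: the mirror step is a gradient step
    # followed by Euclidean projection onto an L2 ball of the given radius.
    x = np.array(x0, dtype=float)
    for t in range(T):
        g1, g2 = grad_fn(t, x)                          # per-objective gradients at x_t
        lam = min_regularized_norm_2(g1, g2, gamma0, alpha)
        x = x - eta * (lam[0] * g1 + lam[1] * g2)       # composite gradient step
        norm = np.linalg.norm(x)
        if norm > radius:
            x = x * (radius / norm)                     # Euclidean projection
    return x
```

For large αt the composite weights stay close to λ0; with λ0 = (0.5, 0.5) on the instance of Section 4.1, the v-entry of the resulting composite gradient is +2vt, which pushes the iterate back toward the Pareto set on the x-axis.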
Compared to vanilla min-norm, the composite weights in min-regularized-norm are not fully determined by the adversarial gradients. The resulting relative stability of composite weights makes the composite gradients more robust to the adversarial environment. In the following, we give a general analysis and prove that DR-OMMD indeed guarantees sublinear regrets.
4.3 THEORETICAL ANALYSIS
Our analysis is based on two conventional assumptions (Jadbabaie et al., 2015; Hazan et al., 2016).
Assumption 1. The regularization function R is 1-strongly convex. In addition, the Bregman divergence is γ-Lipschitz continuous, i.e., B_R(x, z) − B_R(y, z) ≤ γ∥x − y∥ for all x, y, z ∈ dom R, where dom R is the domain of R and satisfies X ⊂ dom R ⊂ Rn.
Assumption 2. There exists some finite G > 0 such that for each i ∈ {1, . . . ,m}, the i-th loss f^i_t at each round t ∈ {1, . . . , T} is differentiable and G-Lipschitz continuous w.r.t. ∥ · ∥₂, i.e., |f^i_t(x) − f^i_t(x′)| ≤ G∥x − x′∥₂. Note that in the convex setting, this assumption implies bounded gradients, i.e., ∥∇f^i_t(x)∥₂ ≤ G for any t ∈ {1, . . . , T}, i ∈ {1, . . . ,m}, x ∈ X.
Theorem 1. Suppose the diameter of X is D. Assume Ft is bounded, i.e., |f^i_t(x)| ≤ F for all x ∈ X, t ∈ {1, . . . , T}, i ∈ {1, . . . ,m}. For any λ0 ∈ Sm, DR-OMMD attains
RII(T) ≤ γD/η_T + ∑_{t=1}^T (ηt/2)(∥∇Ft(xt)λt∥²₂ + 4Fηt∥λt − λ0∥₁).
Remark. When ηt = √(2γD)/(G√T) or ηt = √(2γD)/(G√t), and αt = 4Fηt, the bound attains O(√T). It matches the optimal single-objective bound w.r.t. T (Hazan et al., 2016) and is tight w.r.t. m (justified in Appendix F.2).
Comparison with linearization. Linearization with fixed weights λ0 ∈ Sm essentially optimizes the scalar loss λ0⊤Ft with gradient gt = ∇Ft(xt)λ0. From OMD's tight bound (Theorem 6.8 in (Orabona, 2019)), we can derive a bound γD/η_T + ∑_{t=1}^T (ηt/2)∥∇Ft(xt)λ0∥²₂ for linearization. In comparison, when αt = 4Fηt, DR-OMMD attains a regret bound γD/η_T + ∑_{t=1}^T (ηt/2) min_{λ∈Sm}{∥∇Ft(xt)λ∥²₂ + αt∥λ − λ0∥₁}, which is smaller than that of linearization. Note that although the bound of linearization refers to the single-objective regret R(T), the comparison is reasonable due to the consistency of the two regret metrics, i.e., RII(T) = max{R(T), 0} when m = 1, as proved in Proposition 1. In the following, we further investigate the margin in the two-objective setting with linear losses. Suppose the loss functions are f^1_t(x) = x⊤g^1_t and f^2_t(x) = x⊤g^2_t for some vectors g^1_t, g^2_t ∈ Rn at each round. Then we can show that the margin is at least (see Appendix F.3 for the detailed proof)
M ≥ ∑_{t=1}^T (ηt/4)∥λt − λ0∥²₂ · ∥g^1_t − g^2_t∥²₂,
which indicates the benefit of DR-OMMD. Specifically, while linearization requires an adequate choice of λ0, DR-OMMD selects a more appropriate λt adaptively; the advantage is more pronounced as the gradients of different objectives vary widely. This matches the intuition that linearization suffers from conflicting gradients (Yu et al., 2020), while DR-OMMD can alleviate the conflict by pursuing common descent.
5 EXPERIMENTS
In this section, we conduct experiments to compare DR-OMMD with two baselines: (i) linearization performs single-objective online learning on scalar losses λ⊤0 Ft with pre-defined fixed λ0 ∈ Sm; (ii) min-norm equips OMD with vanilla min-norm (Désidéri, 2012) for gradient composition.
5.1 CONVEX EXPERIMENTS: ADAPTIVE REGULARIZATION
Many real-world online scenarios adopt regularization to avoid overfitting. A standard scheme is to add a term r(x) to the loss ft(x) at each round and optimize the regularized loss ft(x) + σr(x) (McMahan, 2011), where σ is a pre-defined fixed hyperparameter. The formalism of multi-objective online learning provides a novel way of regularization. As r(x) measures model complexity, it can
(a) Effect of Preference (b) Learning Curve [plots of average loss versus the value of λ¹0 in panel (a) and versus the number of rounds in panel (b), comparing linearization / lin-opt with DR-OMMD]
Figure 1: Results to verify the effectiveness of adaptive regularization on protein. (a) Performance of DR-OMMD and linearization under varying λ0 = (λ¹0, 1 − λ¹0). (b) Performance using the optimal weights λ0 = (0.1, 0.9).
(a) Task L (b) Task R [plots of average loss versus the number of rounds for DR-OMMD, min-norm, and linearization with weights (0.25, 0.75), (0.5, 0.5), and (0.75, 0.25)]
Figure 2: Results to verify the effectiveness of DR-OMMD in the non-convex setting. The two plots show the performance of DR-OMMD and various baselines on both tasks (Task L and Task R) of MultiMNIST.
be regarded as a second objective alongside the primary goal ft(x). We can augment the loss to Ft(x) = (ft(x), r(x)) and thereby cast regularized online learning as a two-objective problem. Compared to the standard scheme, our approach chooses σt = λ²_t/λ¹_t in an adaptive way.
We use two large-scale online benchmark datasets. (i) protein is a bioinformatics dataset for protein type classification (Wang, 2002), which has 17 thousand instances with 357 features. (ii) covtype is a biological dataset collected from a non-stationary environment for forest cover type prediction (Blackard & Dean, 1999), which has 50 thousand instances with 54 features. We set the logistic classification loss as the first objective, and the squared L2-norm of model parameters as the second objective. Since the ultimate goal of regularization is to lift predictive performance, we measure the average loss, i.e., ∑ t≤T lt(xt)/T , where lt(xt) is the classification loss at round t.
We adopt an L2-norm ball centered at the origin with diameter K = 100 as the decision set. The learning rates are decided by a grid search over {0.1, 0.2, . . . , 3.0}. For DR-OMMD, the parameter αt is simply set to 0.1. For fixed regularization, the strength σ = (1 − λ¹0)/λ¹0 is determined by some λ¹0 ∈ [0, 1], which is exactly linearization with weights λ0 = (λ¹0, 1 − λ¹0). We run both algorithms with varying λ¹0 ∈ {0, 0.1, ..., 1}. In Figure 1, we plot (a) their final performance w.r.t. the choice of λ0 and (b) their learning curves with a desirable λ0 (e.g., (0.1, 0.9) on protein). Other results are deferred to the appendix due to the lack of space. The results show that DR-OMMD consistently outperforms fixed regularization; the gap becomes more significant when λ0 is not properly set.
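For concreteness, one round of the adaptive-regularization scheme can be sketched as follows, assuming labels y ∈ {−1, +1}, the two-objective closed-form solver from Section 4.2, and an L2 ball of radius 50 (matching the diameter K = 100); all names and the exact update below are illustrative rather than the paper's implementation.

```python
import numpy as np

def min_regularized_norm_2(g1, g2, gamma0, alpha):
    # Two-objective closed-form solver from Section 4.2 (repeated here for completeness).
    denom = float(np.dot(g2 - g1, g2 - g1))
    if denom == 0.0:
        return np.array([gamma0, 1.0 - gamma0])
    gamma_L = (float(np.dot(g2, g2 - g1)) - alpha) / denom
    gamma_R = (float(np.dot(g2, g2 - g1)) + alpha) / denom
    gamma = max(min(max(min(gamma0, gamma_R), gamma_L), 1.0), 0.0)
    return np.array([gamma, 1.0 - gamma])

def adaptive_regularization_round(w, x, y, eta, alpha, gamma0=0.5, radius=50.0):
    # One online round for F_t(w) = (logistic loss, ||w||_2^2): the squared norm acts
    # as the second objective, and the effective regularization strength
    # sigma_t = lam[1] / lam[0] is chosen adaptively by the solver.
    z = y * float(np.dot(w, x))
    g_loss = (-y / (1.0 + np.exp(z))) * x      # gradient of log(1 + exp(-y * w.x))
    g_reg = 2.0 * w                            # gradient of ||w||_2^2
    lam = min_regularized_norm_2(g_loss, g_reg, gamma0, alpha)
    w = w - eta * (lam[0] * g_loss + lam[1] * g_reg)
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)
```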
5.2 NON-CONVEX EXPERIMENTS: DEEP MULTI-TASK LEARNING
We use MultiMNIST (Sabour et al., 2017), which is a multi-task version of the MNIST dataset for image classification and commonly used in deep multi-task learning (Sener & Koltun, 2018; Lin et al., 2019). In MultiMNIST, each sample is composed of a random digit image from MNIST at the top-left and another image at the bottom-right. The goal is to classify the digit at the top-left (task L) and that at the bottom-right (task R) at the same time.
We follow (Sener & Koltun, 2018)’s setup with LeNet. Learning rates in all methods are selected via grid search over {0.0001, 0.001, 0.01, 0.1}. For linearization, we examine different weights (0.25, 0.75), (0.5, 0.5), and (0.75, 0.25). For DR-OMMD, αt is set according to Theorem 1, and the initial weights are simply set as λ0 = (0.5, 0.5). Note that in the online setting, samples arrive in a sequential manner, which is different from offline experiments where sample batches are randomly sampled from the training set. Figure 2 compares the average cumulative loss of all the examined methods. We also measure two conventional metrics in offline experiments, i.e., the training loss and test loss (Reddi et al., 2018); the results are similar and deferred to the appendix due to the lack of space. The results show that DR-OMMD outperforms counterpart algorithms using min-norm or linearization in all metrics on both tasks, validating its effectiveness in the non-convex setting.
6 CONCLUSIONS
In this paper, we give a systematic study of multi-objective online learning, encompassing a novel framework, a new algorithm, and corresponding non-trivial theoretical analysis. We believe that this work paves the way for future research on more advanced multi-objective optimization algorithms, which may inspire the design of new optimizers for multi-task deep learning.
ACKNOWLEDGMENTS
This work was supported in part by the National Key Research and Development Program of China No. 2020AAA0106300 and National Natural Science Foundation of China No. 62250008. This work was also supported by Ant Group through Ant Research Intern Program. We would like to thank Wenliang Zhong, Jinjie Gu, Guannan Zhang and Jiaxin Liu for generous support on this project.
APPENDIX
The appendix is organized as follows. Appendix A reviews related work. Appendix B validates the correctness of our definition of PSG. Appendix C discusses the domain of the comparator in S-PSG, indicating that it makes no difference whether the comparator is selected from the Pareto optimal set or from the whole domain. Appendix D provides the detailed derivation of the equivalent form of RII(T ). Appendix E discusses how to efficiently compute the composition weights for the minregularized-norm solver. Appendix F discusses the order of DR-OMMD’s regret bound with fixed or adaptive learning rate, shows the tightness of the derived bound, and provides more details on the regret comparison between DR-OMMD and linearization. Appendix G supplements more details in the experimental setup and empirical results. Appendix H and I provide detailed proofs of the remaining theoretical claims in the main paper. Finally, Appendix J supplements regret analysis of DR-OMMD in the strongly convex setting.
A RELATED WORK
In this section, we review previous work in some related fields, i.e., online learning, multi-objective optimization, multi-objective multi-armed bandits, and multi-objective Bayesian optimization.
A.1 ONLINE LEARNING
Online learning aims to make sequential predictions for streaming data. Please refer to the introductory books (Hazan et al., 2016; Orabona, 2019) for more background knowledge.
Most of the previous works on online learning are conducted in the single-objective setting. To the best of our knowledge, there are only two lines of work concerning multi-objective online learning. The first line of works provides a multi-objective perspective of the prediction-with-expert-advice (PEA) problem (Koolen, 2013; Koolen & Van Erven, 2015). Specifically, they view each individual expert as a multi-objective criterion, and characterize the Pareto optimal trade-offs among different experts. These works have two main distinctions from our proposed MO-OCO. First, they are still built upon the original PEA problem where the payoff of each expert (or decision) is a scalar, while we focus on vector-valued payoffs. Second, their framework is restricted to an absolute loss game, whereas our framework is general and can be applied to any coordinate-wise convex loss functions.
The second line of work studies online learning with vector-valued payoffs via Blackwell approachability (Blackwell, 1956; Mannor et al., 2014; Abernethy et al., 2011). In their framework, the learner is given a target set T ⊂ Rm and its goal is to generate decisions {xt}_{t=1}^T to minimize the distance between the average loss (1/T)∑_{t=1}^T lt(xt) and the target set T. There are two major differences between Blackwell approachability and our proposed MO-OCO: previous works on Blackwell approachability are zero-order methods and the target set T is often known beforehand (also see the discussion in (Busa-Fekete et al., 2017)), while in MO-OCO we intend to develop a first-order method to reach the unknown Pareto front.
A.2 MULTI-OBJECTIVE OPTIMIZATION
Multi-objective optimization aims to optimize multiple objectives concurrently. Most of the previous works on multi-objective optimization are conducted in the offline setting, including the batch optimization setting (Désidéri, 2012; Liu et al., 2021) and the stochastic optimization setting (Sener & Koltun, 2018; Lin et al., 2019; Yu et al., 2020; Chen et al., 2020; Javaloy & Valera, 2021). These methods are based on gradient composition, and have shown very promising results in multi-task learning applications.
Despite the existence of previous works on multi-objective optimization, as the first work on multi-objective optimization in the OCO setting, our work differs from them in three aspects. First, we contribute the first formal framework of multi-objective online convex optimization. In particular, our framework is based on a novel equivalent transformation of the PSG metric, which is intrinsically different from previous offline optimization frameworks. Second, we provide a showcase in which a commonly used method in the offline setting, namely min-norm (Désidéri, 2012; Sener & Koltun, 2018), fails to attain sublinear regret in the online setting. Our proposed min-regularized-norm is a novel design when tailoring offline methods to the online setting. Third, the regret analysis of multi-objective online learning is intrinsically different from the convergence analysis in the offline setting (Yu et al., 2020).
A.3 MULTI-OBJECTIVE MULTI-ARMED BANDITS
Another branch of related works study multi-objective optimization in the multi-armed bandits setting (Busa-Fekete et al., 2017; Tekin & Turğay, 2018; Turgay et al., 2018; Lu et al., 2019a; Degenne et al., 2019). Among these works, the most relevant one to ours is (Turgay et al., 2018), which introduces the Pareto suboptimality gap (PSG) metric to characterize the multi-objective regret in the bandits setting, and proposes a zero-order zooming algorithm to minimize the regret.
In this work, our regret definition also utilizes the PSG metric (Turgay et al., 2018). However, as the first study of multi-objective optimization in the OCO setting, our work is intrinsically different from these previous works in the following aspects. First, as PSG is a zero-order metric, we perform a novel equivalent transformation, making it amenable to the OCO setting. Second, our proposed algorithm is a first-order multiple gradient algorithm, whose design principles are completely distinct from zero-order algorithms. For example, the concept of the stability of composite weights does not even exist in the design of previous zero-order methods for multi-objective bandits (Turgay et al., 2018; Lu et al., 2019a). Third, the regret analysis of MO-OCO is intrinsically different from that in the bandits setting.
A.4 MULTI-OBJECTIVE BAYESIAN OPTIMIZATION
The final area related to our work is multi-objective Bayesian optimization (Zhang & Golovin, 2020; Konakovic Lukovic et al., 2020; Chowdhury & Gopalan, 2021; Maddox et al., 2021; Daulton et al., 2022), which studies Bayesian optimization with vector-valued feedback. There are two branches of works in this area, using different notions of regret. The first branch is based on scalarization, which adopts the expectation of the gap between scalarized losses over some given distribution (Chowdhury & Gopalan, 2021) as the regret. In this approach, the distribution of scalarization can be understood as a set of preference, which needs to be known beforehand. The second branch is based on Pareto optimality (Zhang & Golovin, 2020), which uses hypervolume as the discrepancy metric and adopt the gap between the true Pareto front and the estimated Pareto front as the regret.
As the first work on multi-objective optimization in the OCO setting, our work is largely different from these works in the following aspects. First, the regret definitions are different. Specifically, compared to the first branch based on scalarization, our regret definition is purely motivated by Pareto optimality, which does not need any preference in advance; compared to the second branch using hypervolume, we note that hypervolume is mainly used for Pareto front approximation, which is unsuitable to our adversarial setting where the goal is to impose the cumulative loss to reach the Pareto front. Second, multi-objective Bayesian optimization is conducted in a stochastic setting, which typically assumes that the losses follow some Gaussian distribution, whereas our work is conducted in the adversarial setting where the losses can be generated arbitrarily.
B AN EQUIVALENT DEFINITION OF PSG
Recall that in Definition 3, we formulate the PSG metric as a constrained optimization problem. We note that, since the PSG metric is based on the notion of “non-dominance” (Turgay et al., 2018), its most direct form is actually
∆′(x; K∗, F) = inf_{ϵ≥0} ϵ, s.t. ∀x′′ ∈ K∗, ∃ i ∈ {1, . . . ,m}, f^i(x) − ϵ < f^i(x′′), or ∀ i ∈ {1, . . . ,m}, f^i(x) − ϵ = f^i(x′′).
At first glance, the above definition seems quite different from Definition 3, since it has an extra condition "∀i ∈ {1, . . . ,m}, f^i(x) − ϵ = f^i(x′′)". In the following, we prove that both definitions actually yield the same value due to the infimum operation on ϵ.
Specifically, for any possible pair (x,K∗, F ), we denote ∆′(x;K∗, F ) = ϵ′0 and ∆(x;K∗, F ) = ϵ0. By comparing the constraints of both definitions, it is obvious that ϵ0 must satisfy the constraint
of ∆′(x;K∗, F ), hence the infimum operation guarantees that ϵ′0 ≤ ϵ0. It remains to prove that ϵ′0 ≥ ϵ0. To this end, we only need to show that ϵ′0 + ξ satisfies the constraint of ∆(x;K∗, F ) for any ξ > 0. Consider an arbitrary x′′ ∈ K∗. From the definition of ∆′(x;K∗, F ), we know that either ∃i ∈ {1, . . . ,m}, f i(x) − ϵ′0 < f i(x′′) or ∀i ∈ {1, . . . ,m}, f i(x) − ϵ′0 = f i(x′′). Whichever condition holds, we must have ∃i ∈ {1, . . . ,m}, f i(x)−ϵ′0−ξ < f i(x′′) for any ξ > 0. Since it holds for any x′′ ∈ K∗, ϵ′0 + ξ lies in the feasible region of ∆(x;K∗, F ), hence we have ϵ0 ≤ ϵ′0 + ξ,∀ξ > 0 and thus ϵ0 ≤ ϵ′0. In summary, we have ∆′(x;K∗, F ) = ∆(x;K∗, F ) for any pair (x,K∗, F ).
C DISCUSSION ON THE DOMAIN OF THE COMPARATOR IN S-PSG
Recall that in Definition 4, the comparator x′ in S-PSG is selected from the Pareto optimal set X ∗ of the cumulative loss ∑T t=1 Ft. This actually stems from the original definition of PSG (Turgay et al., 2018), which uses the Pareto optimal set as the comparator set. In fact, comparing with Pareto optimal decisions in X ∗ is already enough to measure the suboptimality of any decision sequence {xt}Tt=1. The reason is that, for any non-optimal decision x′ ∈ X − X ∗, there must exist some Pareto optimal decision x′′ ∈ X ∗ that dominates x′, hence the suboptimality metric does not need to compare with this non-optimal decision x′. In other words, even if we extend the comparator set in S-PSG to the whole domain X , the modified form will be equivalent to the original form based on the Pareto optimal set X ∗. In the following, we strictly prove this equivalence ∆({xt}Tt=1;X , {Ft}Tt=1) = ∆({xt}Tt=1;X ∗, {Ft}Tt=1). Specifically, we modify the definition of S-PSG and let the comparator domain X ′ be any subset of the decision domain X , i.e.,
∆({xt}_{t=1}^T; X′, {Ft}_{t=1}^T) = inf_{ϵ≥0} ϵ, s.t. ∀x′′ ∈ X′, ∃ i ∈ {1, . . . ,m}, ∑_{t=1}^T f^i_t(xt) − ϵ < ∑_{t=1}^T f^i_t(x′′).
Then the modified regret based on the whole domain X takes R′II(T ) = ∆({xt}Tt=1;X , {Ft}Tt=1). Now we begin to prove the equivalence ∆({xt}Tt=1;X , {Ft}Tt=1) = ∆({xt}Tt=1;X ∗, {Ft}Tt=1). For any X ′ ⊂ X , let E(X ′) denote the constraint of ∆({xt}Tt=1;X ′, {Ft}Tt=1), i.e.,
E(X′) = {ϵ ≥ 0 | ∀x′′ ∈ X′, ∃ i ∈ {1, . . . ,m}, ∑_{t=1}^T f^i_t(xt) − ϵ < ∑_{t=1}^T f^i_t(x′′)},
then ∆({xt}_{t=1}^T; X′, {Ft}_{t=1}^T) = inf E(X′). Hence, we just need to prove inf E(X) = inf E(X∗). On the one hand, since X∗ ⊂ X, from the above definition of S-PSG it is easy to check that any ϵ ∈ E(X) must satisfy ϵ ∈ E(X∗). Hence, we have E(X) ⊂ E(X∗). On the other hand, given any ϵ ∈ E(X∗), we now check that ϵ ∈ E(X). To this end, we consider an arbitrary point x′′ ∈ X in two cases. (i) If x′′ ∈ X∗, since ϵ ∈ E(X∗), we naturally have ∑_{t=1}^T f^{i0}_t(xt) − ϵ < ∑_{t=1}^T f^{i0}_t(x′′) for some i0. (ii) If x′′ ∉ X∗, since X∗ is the Pareto optimal set of ∑_{t=1}^T Ft, there must exist some Pareto optimal decision x̂ ∈ X∗ that dominates x′′ w.r.t. ∑_{t=1}^T Ft, which means that ∑_{t=1}^T f^i_t(x̂) ≤ ∑_{t=1}^T f^i_t(x′′) for all i ∈ {1, ...,m}. Notice that ϵ ∈ E(X∗) gives ∑_{t=1}^T f^{i0}_t(xt) − ϵ < ∑_{t=1}^T f^{i0}_t(x̂) for some i0, hence in this case we also have ∑_{t=1}^T f^{i0}_t(xt) − ϵ < ∑_{t=1}^T f^{i0}_t(x′′). Combining the above two cases, we prove that ϵ ∈ E(X), and consequently E(X∗) ⊂ E(X). In summary, we have E(X) = E(X∗), hence ∆({xt}_{t=1}^T; X, {Ft}_{t=1}^T) = inf E(X) = inf E(X∗) = ∆({xt}_{t=1}^T; X∗, {Ft}_{t=1}^T). Therefore, it makes no difference whether the comparator in RII(T) is generated from the Pareto optimal set X∗ or from the whole domain X.
D DERIVATION OF THE EQUIVALENT MULTI-OBJECTIVE REGRET FORM
In this section, we rigorously derive the equivalent form of RII(T) in Proposition 1, which is highly non-trivial and forms the basis of the subsequent algorithm design and theoretical analysis.
Proof of Proposition 1. Recall that the PSG metric used in RII(T) is an extension of vanilla PSG that evaluates an entire decision sequence. To motivate the analysis, we first investigate vanilla PSG ∆(x; X∗, F), which deals with a single decision x, and derive a useful lemma as follows.
Lemma 1. Vanilla PSG has an equivalent form, i.e.,
∆(x; X∗, F) = sup_{x∗∈X∗} inf_{λ∈Sm} λ⊤(F(x) − F(x∗))₊,
where for any vector l = (l¹, ..., lᵐ) ∈ Rm, the truncation (l)₊ produces a vector whose i-th entry equals max{lⁱ, 0} for all i ∈ {1, ...,m}.
Proof. In the definition of PSG, the evaluated decision x is compared to all Pareto optimal points x′ ∈ X∗. For any fixed comparator x′ ∈ X∗, we define the pair-wise suboptimality gap w.r.t. F between decisions x and x′ as
δ(x; x′, F) = inf_{ϵ≥0} {ϵ | F(x) − ϵ1 ⊁ F(x′)}.
Hence, PSG can be expressed as
∆(x; X∗, F) = sup_{x′∈X∗} δ(x; x′, F).
To proceed, we analyze the pair-wise gap δ(x;x′, F ). From its definition, we know that δ(x;x′, F ) measures the minimal non-negative value that needs to be subtracted from each entry of F (x) until it is not dominated by x′. Now we consider two cases.
(i) If F (x) ⊁ F (x′), i.e., fk0(x) ≤ fk0(x′) for some k0 ∈ {1, ...,m}, nothing needs to be subtracted from F (x) and we directly have δ(x;x′, F ) = 0.
(ii) If F(x) ≻ F(x′), we have f^k(x) ≥ f^k(x′) for all k ∈ {1, ...,m}, which obviously violates the condition F(x) − ϵ1 ⊁ F(x′) when ϵ = 0. Now let us gradually increase ϵ from zero. Notice that the condition holds only once there exists some k0 satisfying f^{k0}(x) − ϵ ≤ f^{k0}(x′), or equivalently ϵ ≥ f^{k0}(x) − f^{k0}(x′). Hence, in this case, we have δ(x; x′, F) = min_{k∈{1,...,m}}{f^k(x) − f^k(x′)}. Combining the above two cases, we derive an equivalent form of the pair-wise suboptimality gap. Specifically, one can easily check that the following form holds in both cases:
δ(x; x′, F) = min_{k∈{1,...,m}} max{f^k(x) − f^k(x′), 0}.
To relate the above form to F, denote U_m = {e_k | 1 ≤ k ≤ m} as the set of all unit vectors in Rm; then we equivalently have
δ(x; x′, F) = min_{λ∈U_m} λ⊤(F(x) − F(x′))₊.
Now the calculation of δ(x; x′, F) is transformed into a minimization problem over λ ∈ U_m. Since U_m is a discrete set, we can apply a linear relaxation trick. Specifically, we now turn to minimizing the scalar p(λ) = λ⊤(F(x) − F(x′))₊ over the convex hull of U_m, which is exactly the probability simplex Sm = {λ ∈ Rm | λ ⪰ 0, ∥λ∥₁ = 1}. Note that U_m contains all the vertices of Sm. Since inf_{λ∈Sm} p(λ) is a linear optimization problem, the minimum is attained at a vertex of the simplex, i.e., at some λ∗ ∈ U_m. Hence, the relaxed problem is equivalent to the original one, namely,
δ(x; x′, F) = min_{λ∈U_m} λ⊤(F(x) − F(x′))₊ = inf_{λ∈Sm} λ⊤(F(x) − F(x′))₊.
Taking the supremum of both sides over x′ ∈ X ∗, we prove the lemma. ■
The above lemma can be naturally extended to the sequence-wise variant S-PSG. Specifically, we can extend the pair-wise suboptimality gap δ(x; x′, F) to measure any decision sequence, which now becomes
δ({xt}_{t=1}^T; x′, {Ft}_{t=1}^T) = inf_{ϵ≥0} {ϵ | ∑_{t=1}^T Ft(xt) − ϵ1 ⊁ ∑_{t=1}^T Ft(x′)}.
Then S-PSG can be expressed as
∆({xt}_{t=1}^T; X∗, {Ft}_{t=1}^T) = sup_{x∗∈X∗} δ({xt}_{t=1}^T; x∗, {Ft}_{t=1}^T).
Similar to the derivation of the above lemma, by investigating the relation between ∑_{t=1}^T Ft(xt) and ∑_{t=1}^T Ft(x′), we can derive an equivalent form of δ({xt}_{t=1}^T; x′, {Ft}_{t=1}^T) as
δ({xt}_{t=1}^T; x′, {Ft}_{t=1}^T) = min_{k∈{1,...,m}} max{∑_{t=1}^T f^k_t(xt) − ∑_{t=1}^T f^k_t(x′), 0},
and further
δ({xt}_{t=1}^T; x′, {Ft}_{t=1}^T) = inf_{λ∈Sm} λ⊤(∑_{t=1}^T Ft(xt) − ∑_{t=1}^T Ft(x′))₊.
Hence, the S-PSG-based regret can be expressed as
RII(T) = sup_{x∗∈X∗} inf_{λ∈Sm} λ⊤(∑_{t=1}^T Ft(xt) − ∑_{t=1}^T Ft(x∗))₊.
The max-min form of RII(T ) has a truncation operation (·)+, which brings irregularity to the regret form. To handle the truncation operation, we utilize the following lemma:
Lemma 2. (a) For any l ∈ Rm, we have infλ∈Sm λ⊤(l)+ = max{infλ∈Sm λ⊤l, 0}. (b) For any h : X → R, we have supx∈X max{h(x), 0} = max{supx∈X h(x), 0}.
Proof. To prove the first statement, we consider the following two cases. (i) If l ≻ 0, then (l)₊ = l. For any λ ∈ Sm, we have λ⊤(l)₊ = λ⊤l ≥ 0. Taking the infimum over λ ∈ Sm on both sides, we have inf_{λ∈Sm} λ⊤(l)₊ = inf_{λ∈Sm} λ⊤l ≥ 0. Moreover, from the last inequality we have max{inf_{λ∈Sm} λ⊤l, 0} = inf_{λ∈Sm} λ⊤l, which proves the statement in this case. (ii) If l ⊁ 0, then lⁱ ≤ 0 for some i ∈ {1, ...,m}. Set e_i as the i-th unit vector in Rm; then we have e_i⊤l ≤ 0. On the one hand, since e_i ∈ Sm, we have inf_{λ∈Sm} λ⊤l ≤ e_i⊤l ≤ 0, and further max{inf_{λ∈Sm} λ⊤l, 0} = 0. On the other hand, notice that e_i⊤(l)₊ = 0 and λ⊤(l)₊ ≥ 0 for any λ ∈ Sm, so inf_{λ∈Sm} λ⊤(l)₊ = e_i⊤(l)₊ = 0. Hence, the statement also holds in this case. To prove the second statement, we also consider two cases. (i) If h(x0) > 0 for some x0 ∈ X, then sup_{x∈X} h(x) ≥ h(x0) > 0, and max{sup_{x∈X} h(x), 0} = sup_{x∈X} h(x). Since we also have sup_{x∈X} max{h(x), 0} = sup_{x∈X} h(x), the statement holds in this case. (ii) If h(x) ≤ 0 for all x ∈ X, then sup_{x∈X} h(x) ≤ 0, and thus max{sup_{x∈X} h(x), 0} = 0. Meanwhile, for any x ∈ X, we have max{h(x), 0} = 0, which validates the statement in this case.
■
From the above lemma, we directly have
RII(T) = sup_{x∗∈X∗} max{ inf_{λ∈Sm} λ⊤(∑_{t=1}^T Ft(xt) − ∑_{t=1}^T Ft(x∗)), 0 } = max{ sup_{x∗∈X∗} inf_{λ∈Sm} λ⊤(∑_{t=1}^T Ft(xt) − ∑_{t=1}^T Ft(x∗)), 0 },
which derives the desired equivalent form. ■
E CALCULATION OF MIN-REGULARIZED-NORM
In this section, we discuss how to efficiently calculate the solutions to min-regularized-norm with L1-norm and L2-norm.
Algorithm 2 Frank-Wolfe Solver for Min-Regularized-Norm with L1-Norm
1: Initialize: λt = (γ¹_t, . . . , γᵐ_t) = (1/m, . . . , 1/m).
2: Compute the matrix U = ∇Ft(xt)⊤∇Ft(xt), i.e., U_{ij} = ∇f^i_t(xt)⊤∇f^j_t(xt), ∀i, j ∈ {1, . . . ,m}.
3: repeat
4:   Select an index k ∈ argmax_{i∈{1,...,m}} {∑_{j=1}^m γ^j_t U_{ij} + α sgn(γ^i_t − γ^i_0)}.
5:   Compute δ ∈ argmin_{0≤δ≤1} ∥δ∇f^k_t(xt) + (1 − δ)∇Ft(xt)λt∥²₂ + α∥δ(e_k − λt) + λt − λ0∥₁.
6:   Update λt = (1 − δ)λt + δ e_k.
7: until δ ∼ 0 or the iteration limit is reached
8: return λt.
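When implementing the Frank-Wolfe solver above, a brute-force reference over a discretized simplex is convenient for validating the returned weights for small m. The sketch below is such a reference and is not part of the algorithm itself; the function name and grid resolution are illustrative.

```python
import itertools
import numpy as np

def min_reg_norm_bruteforce(G, lambda0, alpha, steps=100):
    # Brute-force grid search over the probability simplex for
    #   min_{lambda in S_m} ||G @ lambda||_2^2 + alpha * ||lambda - lambda0||_1,
    # usable as a correctness check of the Frank-Wolfe solver for small m.
    G = np.asarray(G, dtype=float)            # shape (n, m): columns are the gradients
    m = G.shape[1]
    lambda0 = np.asarray(lambda0, dtype=float)
    best_value, best_lambda = np.inf, None
    for ks in itertools.product(range(steps + 1), repeat=m - 1):
        if sum(ks) > steps:
            continue
        lam = np.array(list(ks) + [steps - sum(ks)], dtype=float) / steps
        value = float(np.dot(G @ lam, G @ lam)) + alpha * float(np.abs(lam - lambda0).sum())
        if value < best_value:
            best_value, best_lambda = value, lam
    return best_lambda, best_value
```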
E.1 L1-NORM
Similar to (Sener & Koltun, 2018), we first consider the setting of two objectives, namely m = 2. In this case, for any λ = (γ, 1 − γ), λ0 = (γ0, 1 − γ0) ∈ S2, the L1-regularization ∥λ − λ0∥₁ equals 2|γ − γ0|. Hence min-regularized-norm with the L1-norm at round t reduces to λt = (γt, 1 − γt), where
γt ∈ argmin_{0≤γ≤1} ∥γ g1 + (1 − γ) g2∥²₂ + 2α|γ − γ0|.
Interestingly, the above problem has a closed-form solution.
Proposition 3. Set γ_L = (g2⊤(g2 − g1) − α)/∥g2 − g1∥²₂ and γ_R = (g2⊤(g2 − g1) + α)/∥g2 − g1∥²₂. Then min-regularized-norm with the L1-norm produces weights λt = (γt, 1 − γt), where
γt = max{min{γ′′_t, 1}, 0}, with γ′′_t = max{min{γ0, γ_R}, γ_L}.
Proof. We solve the following two quadratic sub-problems, i.e.,
min_{0≤γ≤γ0} h1(γ) = ∥γ g1 + (1 − γ) g2∥²₂ + 2α(γ0 − γ),
as well as
min_{γ0≤γ≤1} h2(γ) = ∥γ g1 + (1 − γ) g2∥²₂ + 2α(γ − γ0).
It can be checked that in the former sub-problem, h1 monotonously decreases on (−∞, γR] and increases on [γR,+∞); in the latter sub-problem, h2 monotonously decreases on (−∞, γL] and increases on [γL,+∞). Since each sub-problem has its constraint ([0, γ0] or [γ0, 1]), the solution to the original optimization problem can then be derived by comparing the optimal values of the two sub-problems with their constraints. Specifically, notice that γL ≤ γR and 0 ≤ γ0 ≤ 1, and we can consider the following three cases.
(i) When 0 ≤ γ0 ≤ γL ≤ γR, then h1 monotonously decreases on [0, γ0] and its minimum on [0, γ0] is h1(γ0). Notice that h1(γ0) = h2(γ0). For the sub-problem of h2, we further consider two situations: (i-a) If γL ≤ 1, then γL ∈ [γ0, 1], hence the minimum of h2 on [γ0, 1] is h2(γL). Since h2(γL) ≤ h2(γ0) = h1(γ0), the minimal point of the original problem is γL, and hence γt = γL. (i-b) If γL > 1, then h2 monotonously decreases on [γ0, 1], and we surely have h2(1) ≤ h2(γ0) = h1(γ0). Hence γt = 1 in this situation. Combining the above two situations, we have γt = min{γL, 1} in this case. (ii) When γL ≤ γR ≤ γ0 ≤ 1, then h2 monotonously increases on [γ0, 1] and its minimum on [γ0, 1] is h2(γ0). Notice that h1(γ0) = h2(γ0). For the sub-problem of h1, similar to the first case, we also consider two situations: (ii-a) If γR ≥ 0, then γR ∈ [0, γ0], hence the minimum of h1 on [0, γ0] is h1(γR). Since h1(γR) ≤ h1(γ0) = h2(γ0), the minimal point of the original problem is γR, and hence γt = γR. (ii-b) If γR < 0, then h1 monotonously increases on [0, γ0]. Hence we have h1(0) ≤ h1(γ0) = h2(γ0). Hence the solution to the original problem γt = 0. Combining the above two situations, we have γt = max{γR, 0} in this case.
Algorithm 3 Frank-Wolfe Solver for Min-Regularized-Norm with L2-Norm
1: Initialize: λt = (γ¹_t, . . . , γᵐ_t) = (1/m, . . . , 1/m).
2: Compute the matrix U = ∇Ft(xt)⊤∇Ft(xt), i.e., U_{ij} = ∇f^i_t(xt)⊤∇f^j_t(xt), ∀i, j ∈ {1, . . . ,m}.
3: repeat
4:   Select an index k ∈ argmax_{i∈{1,...,m}} {∑_{j=1}^m γ^j_t U_{ij} + α(γ^i_t − γ^i_0)}.
5:   Compute δ ∈ argmin_{0≤δ≤1} ∥δ∇f^k_t(xt) + (1 − δ)∇Ft(xt)λt∥²₂ + α∥δ(e_k − λt) + λt − λ0∥²₂, which has the analytical form
     δ = max{min{ ((∇Ft(xt)λt − ∇f^k_t(xt))⊤∇Ft(xt)λt + α(e_k − λt)⊤(λ0 − λt)) / (∥∇Ft(xt)λt − ∇f^k_t(xt)∥²₂ + α∥e_k − λt∥²₂), 1}, 0}.
6:   Update λt = (1 − δ)λt + δ e_k.
7: until δ ∼ 0 or the iteration limit is reached
8: return λt.
(iii) When γL < γ0 < γR, then h1 monotonously decreases on [0, γ0] and h2 monotonously increases on [γ0, 1]. Hence each sub-problem attains its minimum at γ0, and thus γt = γ0.
Summarizing the above three cases gives
γt = min{γ_L, 1} if γ0 ≤ γ_L; γt = max{γ_R, 0} if γ0 ≥ γ_R; and γt = γ0 otherwise.
We can further rewrite the above formula in a compact form, which can be checked case by case:
γt = max{min{γ′′_t, 1}, 0}, where γ′′_t = max{min{γ0, γ_R}, γ_L}.
This gives the closed-form solution of min-regularized-norm when m = 2. ■
Now that we have derived the closed-form solution to the min-regularized-norm | 1. What is the focus of the paper regarding online convex optimization?
2. What are the strengths of the proposed approach, particularly in tackling the challenges of multi-dimensional optimization?
3. Do you have any concerns or questions about the effectiveness of the algorithm, such as its ability to handle non-convex objectives?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper considers a multi-dimensional version of the online convex optimization. Concretely, the problem studies a sequential interaction between a predictor and an adversarial environment -- at each time, the predictor picks a point in a convex set and the environment subsequently picks M convex functions from the domain to the reals. The goal of the predictor is to make sequential predictions so as to minimize regret with respect to the Pareto frontier of the sum of the M dimensional functions over the time horizon.
The paper contributes a rigorous definition of regret and provides a variational formula for it, which then aids the analysis. The first observation the paper makes is that directly plugging the one-dimensional OMD into the standard iterative multi-objective descent algorithm leads to linear regret. The key observation here is that the offline algorithm optimizes the mixing weights for the gradients of the M functions at the current iterate. However, in the online case, since the environment is adversarial and the regret is with respect to hindsight, naively converting the offline iterative algorithm to an online one yields linear regret. The paper then shows that the simple idea of solving two regularized optimizations -- one to choose the mixing weights and a separate one to choose the next iterate -- yields low regret.
Strengths And Weaknesses
Strengths -- Very well written paper. The style was nice -- presenting the problem, showing through detailed examples why simple approaches fail and then using those examples to build up to the final algorithm.
Weakness -- The intuition and the effect of \lambda_0 in the algorithm is not clear. This is however a nit-pick and I believe can easily be fixed in the write-up.
Clarity, Quality, Novelty And Reproducibility
Clarity -- Very well written
Novelty -- Original insights and crisp communication of those insights
Reproducibility -- Yes |
ICLR | Title
Multi-Objective Online Learning
Abstract
This paper presents a systematic study of multi-objective online learning. We first formulate the framework of Multi-Objective Online Convex Optimization, which encompasses a novel multi-objective regret. This regret is built upon a sequence-wise extension of the commonly used discrepancy metric Pareto suboptimality gap in zero-order multi-objective bandits. We then derive an equivalent form of the regret, making it amenable to optimization via first-order iterative methods. To motivate the algorithm design, we give an explicit example in which equipping OMD with the vanilla min-norm solver for gradient composition will incur a linear regret, which shows that merely regularizing the iterates, as in single-objective online learning, is not enough to guarantee sublinear regrets in the multi-objective setting. To resolve this issue, we propose a novel min-regularized-norm solver that regularizes the composite weights. Combining min-regularized-norm with OMD results in the Doubly Regularized Online Mirror Multiple Descent algorithm. We further derive the multi-objective regret bound for the proposed algorithm, which matches the optimal bound in the single-objective setting. Extensive experiments on several real-world datasets verify the effectiveness of the proposed algorithm.
1 INTRODUCTION
Traditional optimization methods for machine learning are usually designed to optimize a single objective. However, in many real-world applications, we are often required to optimize multiple correlated objectives concurrently. For example, in autonomous driving (Huang et al., 2019; Lu et al., 2019b), self-driving vehicles need to solve multiple tasks such as self-localization and object identification at the same time. In online advertising (Ma et al., 2018a;b), advertising systems need to decide on the exposure of items to different users to maximize both the Click-Through Rate (CTR) and the Post-Click Conversion Rate (CVR). In most multi-objective scenarios, the objectives may conflict with each other (Kendall et al., 2018). Hence, there may not exist any single solution that can optimize all the objectives simultaneously. For example, merely optimizing CTR or CVR will degrade the performance of the other (Ma et al., 2018a;b).
Multi-objective optimization (MOO) (Marler & Arora, 2004; Deb, 2014) is concerned with optimizing multiple conflicting objectives simultaneously. It seeks Pareto optimality, where no single objective can be improved without hurting the performance of others. Many different methods for MOO have been proposed, including evolutionary methods (Murata et al., 1995; Zitzler & Thiele, 1999), scalarization methods (Fliege & Svaiter, 2000), and gradient-based iterative methods (Désidéri, 2012). Recently, the Multiple Gradient Descent Algorithm (MGDA) and its variants have been introduced to the training of multi-task deep neural networks and achieved great empirical success (Sener & Koltun, 2018), making them regain a significant amount of research interest (Lin et al., 2019; Yu et al., 2020; Liu et al., 2021). These methods compute a composite gradient based on the gradient information of all the individual objectives and then apply the composite gradient to update the model parameters. The composite weights are determined by a min-norm solver (Désidéri, 2012) which yields a common descent direction for all the objectives.
∗Equal contributions. †Corresponding author.
However, compared to the increasingly wide application prospect, the gradient-based iterative algorithms are relatively understudied, especially in the online learning setting. Multi-objective online learning is of essential importance for reasons in two folds. First, due to the data explosion in many real-world scenarios such as web applications, making in-time predictions requires performing online learning. Second, the theoretical investigation of multi-objective online learning will lay a solid foundation for the design of new optimizers for multi-task deep learning. This is analogous to the single-objective setting, where nearly all the optimizers for training DNNs are initially analyzed in the online setting, such as AdaGrad (Duchi et al., 2011), Adam (Kingma & Ba, 2015), and AMSGrad (Reddi et al., 2018).
In this paper, we give a systematic study of multi-objective online learning. To begin with, we formulate the framework of Multi-Objective Online Convex Optimization (MO-OCO). One major challenge in deriving MO-OCO is the lack of a proper regret definition. In the multi-objective setting, in general, no single decision can optimize all the objectives simultaneously. Thus, to devise the multi-objective regret, we need to first extend the single fixed comparator used in the singleobjective regret, i.e., the fixed optimal decision, to the entire Pareto optimal set. Then we need an appropriate discrepancy metric to evaluate the gap between vector-valued losses. Intuitively, the Pareto suboptimality gap (PSG) metric, which is frequently used in zero-order multi-objective bandits (Turgay et al., 2018; Lu et al., 2019a), is a very promising candidate. PSG can yield scalarized measurements from any vector-valued loss to a given comparator set. However, we find that vanilla PSG is unsuitable for our setting since it always yields non-negative values and may be too loose. In a concrete example, we show that the naive PSG-based regret RI(T ) can even be linear w.r.t. T when the decisions are already optimal, which disqualifies it as a regret metric. To overcome the failure of vanilla PSG, we propose its sequence-wise variant termed S-PSG, which measures the suboptimality of the whole decision sequence to the Pareto optimal set of the cumulative loss function. Optimizing the resulting regret RII(T ) will drive the cumulative loss to approach the Pareto front. However, as a zero-order metric motivated geometrically, designing appropriate first-order algorithms to directly optimize it is too difficult. To resolve the issue, we derive a more intuitive equivalent form of RII(T ) via a highly non-trivial transformation.
Based on the MO-OCO framework, we develop a novel multi-objective online algorithm termed Doubly Regularized Online Mirror Multiple Descent. The key module of the algorithm is the gradient composition scheme, which calculates a composite gradient in the form of a convex combination of the gradients of all objectives. Intuitively, the most direct way to determine the composite weights is to apply the min-norm solver (Désidéri, 2012) commonly used in offline multi-objective optimization. However, directly applying min-norm is not workable in the online setting. Specifically, the composite weights in min-norm are merely determined by the gradients at the current round. In the online setting, since the gradients are adversarial, they may result in undesired composite weights, which further produce a composite gradient that reversely optimizes the loss. To rigorously verify this point, we give an example where equipping OMD with vanilla min-norm incurs a linear regret, showing that only regularizing the iterate, as in OMD, is not enough to guarantee sublinear regrets in our setting. To fix the issue, we devise a novel min-regularized-norm solver with an explicit regularization on composite weights. Equipping it with OMD results in our proposed algorithm. In theory, we derive a regret bound of O( √ T ) for DR-OMMD, which matches the optimal bound in the single-objective setting (Hazan et al., 2016) and is tight w.r.t. the number of objectives. Our analysis also shows that DR-OMMD attains a smaller regret bound than that of linearization with fixed composite weights. We show that, in the two-objective setting with linear losses, the margin between the regret bounds depends on the difference between the composite weights yielded by the two algorithms and the difference between the gradients of the two underlying objectives.
To evaluate the effectiveness of DR-OMMD, we conduct extensive experiments on several largescale real-world datasets. We first realize adaptive regularization via multi-objective optimization, and find that adaptive regularization with DR-OMMD significantly outperforms fixed regularization with linearization, which verifies the effectiveness of DR-OMMD over linearization in the convex setting. Then we apply DR-OMMD to deep online multi-task learning. The results show that DROMMD is also effective in the non-convex setting.
2 PRELIMINARIES
In this section, we briefly review the necessary background knowledge of two related fields.
2.1 MULTI-OBJECTIVE OPTIMIZATION
Multiple-objective optimization (MOO) is concerned with solving the problems of optimizing multiple objectives simultaneously (Fliege & Svaiter, 2000; Deb, 2014). In general, since different objectives may conflict with each other, there is no single solution that can optimize all the objectives at the same time, hence the conventional concept of optimality used in the single-objective setting is no longer suitable. Instead, MOO seeks to achieve Pareto optimality. In the following, we give the relevant definitions more formally. We use a vector-valued loss F = (f1, . . . , fm) to denote the objectives, where m ≥ 2 and f i : X → R, i ∈ {1, . . . ,m}, X ⊂ R, is the i-th loss function. Definition 1 (Pareto optimality). (a) For any two solutions x,x′ ∈ X , we say that x dominates x′, denoted as x ≺ x′ or x′ ≻ x, if f i(x) ≤ f i(x′) for all i, and there exists one i such that f i(x) < f i(x′); otherwise, we say that x does not dominate x′, denoted as x ⊀ x′ or x′ ⊁ x. (b) A solution x∗ ∈ X is called Pareto optimal if it is not dominated by any other solution in X .
Note that there may exist multiple Pareto optimal solutions. For example, it is easy to show that the optimizer of any single objective, i.e., x∗i ∈ argminx∈X f i(x), i ∈ {1, . . . ,m}, is Pareto optimal. Different Pareto optimal solutions reflect different trade-offs among the objectives (Lin et al., 2019). Definition 2 (Pareto front). (a) All Pareto optimal solutions form the Pareto set PX (F ). (b) The image of PX (F ) constitutes the Pareto front, denoted as P(H) = {F (x) | x ∈ PX (F )}.
Now that we have established the notion of optimality in MOO, we proceed to introduce the metrics that measure the discrepancy of an arbitrary solution x ∈ X from being optimal. Recall that, in the single-objective setting with merely one loss function f : Z → R, for any z ∈ Z, the loss difference f(z) − min_{z′′∈Z} f(z′′) directly serves as the discrepancy measure. However, in MOO with more than one loss, for any x ∈ X, the loss difference F(x) − F(x′′), where x′′ ∈ P_X(F), is a vector. Intuitively, the desired discrepancy metric should scalarize the vector-valued loss difference and yield 0 for any Pareto optimal solution. In general, in MOO, there are two commonly used discrepancy metrics, i.e., Pareto suboptimality gap (PSG) (Turgay et al., 2018) and Hypervolume (HV) (Bradstreet, 2011). As HV is a complex volume-based metric, it is more difficult to optimize via gradient-based algorithms (Zhang & Golovin, 2020). Hence in this paper we adopt PSG, which has already been extensively used in multi-objective bandits (Turgay et al., 2018; Lu et al., 2019a). Definition 3 (Pareto suboptimality gap¹). For any x ∈ X, the Pareto suboptimality gap to a given comparator set Z ⊂ X, denoted as ∆(x; Z, F), is defined as the minimal scalar ϵ ≥ 0 that needs to be subtracted from all entries of F(x) such that F(x) − ϵ1 is not dominated by any point in Z, where 1 denotes the all-one vector in Rm, i.e.,
∆(x; Z, F) = inf_{ϵ≥0} ϵ, s.t. ∀x′′ ∈ Z, ∃ i ∈ {1, . . . ,m}, f^i(x) − ϵ < f^i(x′′).
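For a finite comparator set, Definition 3 can be evaluated directly: for each comparator x′′, the required ϵ must exceed the smallest coordinate of F(x) − F(x′′), and the PSG is the largest such value over the comparators, clipped at zero. A minimal sketch (the function name and example values are illustrative):

```python
import numpy as np

def pareto_suboptimality_gap(loss_of_x, comparator_losses):
    # PSG of a point with loss vector F(x) relative to a finite comparator set Z,
    # following Definition 3: the smallest eps >= 0 such that F(x) - eps*1 is not
    # dominated by any comparator loss vector.
    diffs = np.asarray(loss_of_x, dtype=float)[None, :] - np.asarray(comparator_losses, dtype=float)
    per_comparator = diffs.min(axis=1)     # eps must exceed this value for each comparator
    return max(float(per_comparator.max()), 0.0)

print(pareto_suboptimality_gap([3.0, 1.0], [[1.0, 2.0], [2.0, 0.0]]))   # 1.0
```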
Clearly, PSG is a distance-based discrepancy metric motivated from a purely geometric viewpoint. In practice, the comparator set Z is often set to be the Pareto set X∗ = P_X(F) (Turgay et al., 2018); then, for any x ∈ X, its PSG is always non-negative and equals zero if and only if x ∈ P_X(F).
Multiple Gradient Descent Algorithm (MGDA) is an offline first-order MOO algorithm (Fliege & Svaiter, 2000; Désidéri, 2012). At each iteration l ∈ {1, . . . , L} (L is the number of iterations), it first computes the gradient ∇f^i(x_l) of each objective, then derives the composite gradient g^comp_l = ∑_{i=1}^m λ^i_l ∇f^i(x_l) as a convex combination of these gradients, and finally applies g^comp_l to execute a gradient descent step that updates the decision, i.e., x_{l+1} = x_l − η g^comp_l (η is the step size). The core part of MGDA is the module that determines the composite weights λ_l = (λ¹_l, . . . , λᵐ_l), given by
λ_l = argmin_{λ_l∈Sm} ∥∑_{i=1}^m λ^i_l ∇f^i(x_l)∥²₂,
where Sm = {λ ∈ Rm | ∑_{i=1}^m λ^i = 1, λ^i ≥ 0, i ∈ {1, . . . ,m}} is the probability simplex in Rm. This is a min-norm solver, which finds the weights in the simplex that yield the minimum L2-norm of the composite gradient. Thus MGDA is also called the min-norm method. Previous works (Désidéri, 2012; Sener & Koltun, 2018) showed that when all f^i are convex functions, MGDA is guaranteed to decrease all the objectives simultaneously until it reaches a Pareto optimal decision.
¹Our definition looks a bit different from (Turgay et al., 2018). In Appendix B, we show they are equivalent.
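A small numerical sketch of MGDA on two convex quadratics illustrates this behavior: the iterate converges to a point on the segment between the two minimizers, which is exactly the Pareto set of this problem (all values are chosen for illustration only).

```python
import numpy as np

# MGDA on two convex quadratics f1(x) = ||x - a||^2 and f2(x) = ||x - b||^2.
# Every point on the segment [a, b] is Pareto optimal, and the iterate converges to it.
a, b = np.array([0.0, 0.0]), np.array([2.0, 0.0])
x = np.array([1.0, 3.0])
eta = 0.1
for _ in range(200):
    g1, g2 = 2 * (x - a), 2 * (x - b)
    denom = float(np.dot(g1 - g2, g1 - g2))
    # closed-form min-norm weight for two gradients, clipped to [0, 1]
    gamma = 0.5 if denom == 0.0 else min(max(float(np.dot(g2 - g1, g2)) / denom, 0.0), 1.0)
    x = x - eta * (gamma * g1 + (1 - gamma) * g2)
print(x)   # approximately (1.0, 0.0), a point on the Pareto set between a and b
```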
2.2 ONLINE CONVEX OPTIMIZATION
Online Convex Optimization (OCO) (Zinkevich, 2003; Hazan et al., 2016) is the most commonly adopted framework for designing online learning algorithms. It can be viewed as a structured repeated game between a learner and an adversary. At each round t ∈ {1, . . . , T}, the learner is required to generate a decision xt from a convex compact set X ⊂ Rn. Then the adversary replies the learner with a convex function ft : X → R and the learner suffers the loss ft(xt). The goal of the learner is to minimize the regret with respect to the best fixed decision in hindsight, i.e.,
R(T) = Σ^T_{t=1} f_t(x_t) − min_{x*∈X} Σ^T_{t=1} f_t(x*).
A meaningful regret is required to be sublinear in T , i.e., limT→∞ R(T )/T = 0, which implies that when T is large enough, the learner can perform as well as the best fixed decision in hindsight.
Online Mirror Descent (OMD) (Hazan et al., 2016) is a classic first-order online learning algorithm. At each round t ∈ {1, . . . , T}, OMD yields its decision via
x_{t+1} = argmin_{x∈X} η⟨∇f_t(x_t), x⟩ + B_R(x, x_t),
where η is the step size, R : X → R is the regularization function, and BR(x,x′) = R(x)−R(x′)− ⟨∇R(x′),x − x′⟩ is the Bregman divergence induced by R. As a meta-algorithm, by instantiating different regularization functions, OMD can induce two important algorithms, i.e., Online Gradient Descent (Zinkevich, 2003) and Online Exponentiated Gradient (Hazan et al., 2016).
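As an illustration, here is a small sketch (illustrative code, not from the paper) of the two OMD instantiations mentioned above: the Euclidean regularizer gives projected Online Gradient Descent, and the negative-entropy regularizer over the simplex gives Online Exponentiated Gradient.

```python
import numpy as np

def ogd_step(x, grad, eta, radius):
    """OMD with R(x) = 0.5*||x||^2: gradient step + projection onto an L2 ball."""
    y = x - eta * grad
    norm = np.linalg.norm(y)
    return y if norm <= radius else y * (radius / norm)

def eg_step(p, grad, eta):
    """OMD with negative entropy on the simplex: multiplicative-weights update."""
    w = p * np.exp(-eta * grad)
    return w / w.sum()
```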
3 MULTI-OBJECTIVE ONLINE CONVEX OPTIMIZATION
In this section, we formally formulate the MO-OCO framework.
Framework overview. Analogously to single-objective OCO, MO-OCO can be viewed as a repeated game between an online learner and the adversarial environment. The main difference is that in MO-OCO the feedback is vector-valued. The general framework of MO-OCO is as follows. At each round t ∈ {1, ..., T}, the learner generates a decision x_t from a given convex compact decision set X ⊂ R^n. Then the adversary replies with a vector-valued loss function F_t : X → R^m, whose i-th component f^i_t : X → R is a convex function corresponding to the i-th objective, and the learner suffers the vector-valued loss F_t(x_t). The goal of the learner is to generate a sequence of decisions {x_t}^T_{t=1} that minimizes a suitable multi-objective regret. The remaining work in formulating the framework is to give an appropriate regret definition, which is the most challenging part. Recall that the single-objective regret R(T) = Σ^T_{t=1} f_t(x_t) − Σ^T_{t=1} f_t(x*) is defined as the difference between the cumulative loss of the actual decisions {x_t}^T_{t=1} and that of the fixed optimal decision in hindsight x* ∈ argmin_{x∈X} Σ^T_{t=1} f_t(x). When defining the multi-objective analogue of R(T), we encounter two issues. First, in the multi-objective setting, no single decision can in general optimize all the objectives simultaneously, hence we cannot compare the cumulative loss with that of any single decision. Instead, we use the Pareto optimal set X* of the cumulative loss function Σ^T_{t=1} F_t, i.e., X* = P_X(Σ^T_{t=1} F_t), which naturally aligns with the optimality concept in MOO. Second, to compare {x_t}^T_{t=1} and X* in the loss space, we need a discrepancy metric to measure the gap between vector losses. Intuitively, we can adopt the commonly used PSG metric (Turgay et al., 2018). But we find that vanilla PSG is not appropriate for OCO, which differs substantially from the bandits setting. We explicate the reason in the following.
3.1 THE NAIVE REGRET BASED ON VANILLA PSG FAILS IN MO-OCO
By definition, at each round t, the difference between the decision xt and the Pareto optimal set can be evaluated by PSG ∆(xt;X ∗, Ft). Naturally, we can formulate the multi-objective regret by accumulating ∆(xt;X ∗, Ft) over all rounds, i.e.,
R_I(T) := Σ^T_{t=1} Δ(x_t; X*, F_t).

Recall that the single-objective regret can also be expressed as R(T) = Σ^T_{t=1}(f_t(x_t) − f_t(x*)). Hence, R_I(T) essentially extends the scalar discrepancy f_t(x_t) − f_t(x*) to the PSG metric Δ(x_t; X*, F_t). However, these two discrepancy metrics have a major difference: f_t(x_t) − f_t(x*) can be negative, whereas Δ(x_t; X*, F_t) is always non-negative. In previous bandits settings (Turgay et al., 2018), the discrepancy is intrinsically non-negative, since the comparator set is exactly the Pareto optimal set of the evaluated loss function. However, the non-negativity of PSG can be problematic in our setting, where the comparator set X* is the Pareto set of the cumulative loss function rather than of the instantaneous loss F_t used for evaluation. Specifically, at some round t, the decision x_t may Pareto dominate all points in X* w.r.t. F_t, which corresponds to the single-objective setting where it is possible that f_t(x_t) < f_t(x*) at some specific round. In this case, we would expect the discrepancy at this round to be negative; however, PSG can only yield 0, making the regret much looser than we expect. In the following, we provide an example in which the naive regret R_I(T) is linear in T even when the decisions x_t are already optimal.
Problem instance. Set X = [−2, 2]. Let the loss function be identical among all objectives, i.e., f^1_t(x) = ... = f^m_t(x), and alternate between x and −x. Suppose the time horizon T is an even number; then the Pareto optimal set is X* = X. Now consider the decisions x_t = 1, t ∈ {1, ..., T}. In this case, it can easily be checked that the single-objective regret of each objective is zero, indicating that these decisions are optimal for each objective. To calculate R_I(T), notice that when all the objectives are identical, PSG reduces to Δ(x_t; X*, f^1_t) = sup_{x*∈X} max{f^1_t(x_t) − f^1_t(x*), 0} at each round t. Hence, in this case we have R_I(T) = Σ_{1≤k≤T/2}(sup_{x*∈[−2,2]} max{1 − x*, 0} + sup_{x*∈[−2,2]} max{x* − 1, 0}) = 2T, which is linear in T. Therefore, R_I(T) is too loose to measure the suboptimality of decisions, which disqualifies it as a regret metric.
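The calculation above can be checked numerically; the sketch below (illustrative code, not from the paper) evaluates the per-round PSG on a grid of comparators for the alternating losses x and −x with x_t = 1.

```python
import numpy as np

T = 1000
xs_star = np.linspace(-2.0, 2.0, 4001)   # grid over X* = [-2, 2]
x_t = 1.0
R_I = 0.0
for t in range(1, T + 1):
    f = (lambda x: x) if t % 2 == 1 else (lambda x: -x)
    # with identical objectives, PSG(x_t) = sup_{x*} max(f(x_t) - f(x*), 0)
    R_I += np.max(np.maximum(f(x_t) - f(xs_star), 0.0))
print(R_I / T)   # ~2.0, i.e. R_I(T) grows linearly although x_t is optimal
```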
3.2 THE ALTERNATIVE REGRET BASED ON SEQUENCE-WISE PSG
In light of the failure of the naive regret, we need to modify the discrepancy metric in our setting. Recall that the single-objective regret can be interpreted as the gap between the actual cumulative loss ∑T t=1 ft(xt) and its optimal value minx∈X ∑T t=1 ft(x). In analogy, we can measure the gap
between Σ^T_{t=1} F_t(x_t) and the Pareto front P* of the cumulative loss Σ^T_{t=1} F_t. However, vanilla PSG is a point-wise metric, i.e., it can only measure the suboptimality of a single decision point. To evaluate the decision sequence {x_t}^T_{t=1}, we modify its definition and propose a sequence-wise variant of PSG.
Definition 4 (Sequence-wise PSG). For any decision sequence {x_t}^T_{t=1}, the sequence-wise PSG (S-PSG) to a given comparator set² X* w.r.t. the loss sequence {F_t}^T_{t=1} is defined as

Δ({x_t}^T_{t=1}; X*, {F_t}^T_{t=1}) = inf_{ε≥0} ε,  s.t.  ∀x″ ∈ X*, ∃ i ∈ {1, ..., m}, Σ^T_{t=1} f^i_t(x_t) − ε < Σ^T_{t=1} f^i_t(x″).

Since X* is the Pareto set of Σ^T_{t=1} F_t, S-PSG measures the discrepancy from the cumulative loss of the decision sequence to the Pareto front P*. Now the regret can be directly given as

R_II(T) := Δ({x_t}^T_{t=1}; X*, {F_t}^T_{t=1}).
R_II(T) has a clear physical meaning: optimizing it drives the cumulative loss toward the Pareto front P*. However, since PSG (or S-PSG) is a zero-order metric motivated in a purely geometric sense, i.e., its calculation requires solving a constrained optimization problem with an unknown boundary {F_t(x″) | x″ ∈ X*}, it is difficult to design a first-order algorithm that directly optimizes PSG-based regrets, let alone to analyze it. To resolve this issue, we derive an equivalent form via a highly non-trivial transformation, which is more intuitive than the original form. Proposition 1. The multi-objective regret R_II(T) based on S-PSG has an equivalent form, i.e.,
R_II(T) = max{ sup_{x*∈X*} inf_{λ*∈S_m} Σ^T_{t=1} λ*ᵀ(F_t(x_t) − F_t(x*)), 0 }.

²It is equivalent to use either X* or X as the comparator set. See Appendix C for the detailed proof.

Remark. (i) The above form is closely related to the single-objective regret R(T). Specifically, when m = 1, we can prove that R_II(T) = max{Σ^T_{t=1} F_t(x_t) − min_{x*∈X*} Σ^T_{t=1} F_t(x*), 0} = max{R(T), 0}. Note that in the regret analysis we are more interested in the case R(T) ≥ 0 (where R_II(T) = R(T)), since when R(T) < 0 it is naturally bounded by any sublinear regret bound. Hence, R_II(T) is essentially aligned with R(T) in the single-objective setting. (ii) At first glance, R_II(T) can be optimized via linearization with fixed weights λ_0 ∈ S_m, or alternatively, by optimizing a single objective i ∈ {1, ..., m}. We remark that this is not a problem of our regret definition, but an intrinsic requirement of Pareto optimality. Specifically, Pareto optimality characterizes the status where no objective can be improved without hurting others, hence merely optimizing a single objective naturally achieves Pareto optimality; please refer to Proposition 8 in (Emmerich & Deutz, 2018) for a rigorous proof. As a general performance metric, our regret should incorporate this special case. Later, we will design a novel algorithm based on the concept of common descent, which outperforms linearization in both theory and experiment.

Algorithm 1 Doubly Regularized Online Mirror Multiple Descent (DR-OMMD)
1: Input: convex set X, time horizon T, regularization parameter α_t, learning rate η_t, regularization function R, user preference λ_0.
2: Initialize: x_1 ∈ X.
3: for t = 1, ..., T do
4:   Predict x_t and receive a loss function F_t : X → R^m.
5:   Compute the multiple gradients ∇F_t(x_t) = [∇f^1_t(x_t), ..., ∇f^m_t(x_t)] ∈ R^{n×m}.
6:   Determine the weights for the gradient composition via min-regularized-norm
       λ_t = argmin_{λ∈S_m} ‖∇F_t(x_t)λ‖₂² + α_t‖λ − λ_0‖₁.
7:   Compute the composite gradient g_t = ∇F_t(x_t)λ_t.
8:   Perform online mirror descent using g_t:
       x_{t+1} = argmin_{x∈X} η_t⟨g_t, x⟩ + B_R(x, x_t).
9: end for
4 DOUBLY REGULARIZED ONLINE MIRROR MULTIPLE DESCENT
In this section, we present the Doubly Regularized Online Mirror Multiple Descent (DR-OMMD) algorithm, the protocol of which is given in Algorithm 1. At each round t, the learner first computes the gradient of the loss regarding each objective, then determines the composite weights of all these gradients, and finally applies the composite gradient in the online mirror descent step.
4.1 VANILLA MIN-NORM MAY INCUR LINEAR REGRETS
The core module of DR-OMMD is the composition of gradients. For simplicity, denote the gradients at round t in a matrix form ∇Ft(xt) = [∇f1t (xt), . . . ,∇fmt (xt)] ∈ Rn×m. Then the composite gradient is gt = ∇Ft(xt)λt, where λt is the composite weights. As illustrated in the preliminary, in the offline setting, the min-norm method (Désidéri, 2012; Sener & Koltun, 2018) is a classic method to determine the composite weights, which produces a common descent direction that can descend all the losses simultaneously. Thus, it is tempting to consider applying it to the online setting.
However, directly applying min-norm to the online setting is not workable, which may even incur linear regrets. In vanilla min-norm, the composite weights λt are determined solely by the gradients ∇Ft(xt) at the current round t, which are very sensitive to the instantaneous loss Ft. In the online setting, the losses at each round can be adversarially chosen, and thus the corresponding gradients can be adversarial. These adversarial gradients may result in undesired composite weights, which may further produce a composite gradient that even deteriorates the next prediction. In the following, we provide an example in which min-norm incurs a linear regret. We extend OMD (Hazan et al., 2016) to the multi-objective setting, where the composite weights are directly yielded by min-norm.
Problem instance. We consider a two-objective problem. The decision domain is X = {(u, v) | u + v ≤ 1/2, v − u ≤ 1/2, v ≥ 0} and the loss function at each round is

F_t(x) = (‖x − a‖₂², ‖x − b‖₂²) for t = 2k − 1, and F_t(x) = (‖x − b‖₂², ‖x − c‖₂²) for t = 2k, k = 1, 2, ...,

where a = (−2, −1), b = (0, 1), c = (2, −1). For simplicity, we first analyze the case where the total time horizon T is an even number. Then the Pareto set of the cumulative loss Σ^T_{t=1} F_t is X* = {(u, 0) | −1/2 ≤ u ≤ 1/2}, which lies on the x-axis. For conciseness of analysis, we instantiate OMD with L2-regularization, which results in the simple OGD algorithm (McMahan, 2011). We start at an arbitrary point x_1 = (u_1, v_1) ∈ X satisfying v_1 > 0. At each round t, writing the decision as x_t = (u_t, v_t), the gradient of each objective w.r.t. x_t takes

g^1_t = (2u_t + 4, 2v_t + 2) for t = 2k − 1, and g^1_t = (2u_t, 2v_t − 2) for t = 2k;
g^2_t = (2u_t, 2v_t − 2) for t = 2k − 1, and g^2_t = (2u_t − 4, 2v_t + 2) for t = 2k.

Since 0 ≤ v_t ≤ 1/2, we observe that the second entry of either gradient alternates between positive and negative. By using min-norm, the composite weights λ_t can be computed as

λ_t = ((1 − u_t − v_t)/4, (3 + u_t + v_t)/4) for t = 2k − 1, and λ_t = ((3 − u_t + v_t)/4, (1 + u_t − v_t)/4) for t = 2k.

We observe that both entries of the composite weights alternate between above 1/2 and below 1/2, and ‖λ_{t+1} − λ_t‖₁ ≥ 1. Recall that ‖λ_t‖₁ = 1, hence the composite weights change radically between two consecutive rounds. The resulting composite gradient takes

g^comp_t = (u_t − v_t + 1, −u_t + v_t − 1) for t = 2k − 1, and g^comp_t = (u_t + v_t − 1, u_t + v_t − 1) for t = 2k.

The fluctuating composite weights mix with the positive and negative second entries of the gradients, making the second entry of g^comp_t always negative, i.e., −u_t + v_t − 1 < 0 and u_t + v_t − 1 < 0. Hence the update along −g^comp_t always drives x_t away from the Pareto set X* that coincides with the x-axis. This essentially optimizes the loss in the reverse direction, hence increasing the regret; in fact, we can prove that it incurs a linear regret. Due to space limitations, we leave the proof of the linear regret when T is an odd number to Appendix H. The above results of the problem instance are summarized as follows.
Proposition 2. For OMD equipped with vanilla min-norm, there exists a multi-objective online convex optimization problem, in which the resulting algorithm incurs a linear regret.
Remark. Stability is a basic requirement to ensure meaningful regrets in online learning (McMahan, 2017). In the single-objective setting, directly regularizing the iterate xt (e.g., OMD) is enough. However, as shown in the above analysis, merely regularizing xt is not enough to attain sublinear regrets in the multi-objective setting, since there is another source of instability, i.e., the composite weights, that affects the direction of composite gradients. Therefore, in multi-objective online learning, besides regularizing the iterates, we also need to explicitly regularize the composite weights.
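To make the failure mode above concrete, the following small numerical check (illustrative code, not from the paper) computes the vanilla min-norm weights and the composite gradient at a feasible point for the two round types of the problem instance.

```python
import numpy as np

a, b, c = np.array([-2.0, -1.0]), np.array([0.0, 1.0]), np.array([2.0, -1.0])
x = np.array([0.1, 0.2])                     # a feasible point with v > 0

for name, (p, q) in [("odd", (a, b)), ("even", (b, c))]:
    g1, g2 = 2 * (x - p), 2 * (x - q)        # gradients of the two squared distances
    lam = np.clip(np.dot(g2, g2 - g1) / np.dot(g1 - g2, g1 - g2), 0.0, 1.0)
    g_comp = lam * g1 + (1 - lam) * g2       # vanilla min-norm composite gradient
    print(name, lam, g_comp)
# In both round types the v-component of g_comp is negative, so the OGD update
# x - eta * g_comp pushes v upward, away from the Pareto set {(u, 0)}.
```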
4.2 THE ALGORITHM
Enlightened by the design of regularization in FTRL (McMahan, 2017), we consider the regularizer r(λ,λ0), where λ0 is the pre-defined composite weights that may reflect the user preference. This results in a new solver called min-regularized-norm, i.e.,
λ_t = argmin_{λ∈S_m} ‖∇F_t(x_t)λ‖₂² + α_t r(λ, λ_0),
where αt is the regularization strength. Equipping OMD with the new solver, we derive the proposed algorithm. Note that beyond the regularization on the iterate xt that is intrinsic in online learning, there is another regularization on the composite weights λt in min-regularized-norm. Both regularizations are fundamental, and they together ensure stability in the multi-objective online setting. Hence we call the algorithm Doubly Regularized Online Mirror Multiple Descent (DR-OMMD).
In principle, r can take various forms such as the L1-norm, the L2-norm, etc. Here we adopt the L1-norm since it aligns well with the simplex constraint on λ. Min-regularized-norm can be computed very efficiently. When m = 2, it has a closed-form solution. Specifically, suppose the gradients at round t are g_1 and g_2. Set γ_L = (g_2ᵀ(g_2 − g_1) − α_t)/‖g_2 − g_1‖₂² and γ_R = (g_2ᵀ(g_2 − g_1) + α_t)/‖g_2 − g_1‖₂². Given any λ_0 = (γ_0, 1 − γ_0) ∈ S_2, the composite weights are λ_t = (γ_t, 1 − γ_t) where

γ_t = max{min{γ″_t, 1}, 0},  where  γ″_t = max{min{γ_0, γ_R}, γ_L}.
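A minimal sketch of this closed form (illustrative code, not from the paper); `g1` and `g2` are the two per-objective gradients and `gamma0` is the first coordinate of the preference λ_0.

```python
import numpy as np

def min_reg_norm_weights_2(g1, g2, gamma0, alpha):
    """Closed-form solution of min_{gamma in [0,1]}
    ||gamma*g1 + (1-gamma)*g2||^2 + 2*alpha*|gamma - gamma0|."""
    d = g2 - g1
    denom = np.dot(d, d)
    if denom < 1e-12:                         # identical gradients: keep the preference
        return gamma0
    gamma_L = (np.dot(g2, d) - alpha) / denom
    gamma_R = (np.dot(g2, d) + alpha) / denom
    gamma = max(min(gamma0, gamma_R), gamma_L)  # clip gamma0 into [gamma_L, gamma_R]
    return float(np.clip(gamma, 0.0, 1.0))
```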
When m > 2, since the constraint Sm is a simplex, we can introduce a Frank-Wolfe solver (Jaggi, 2013) (see detailed protocol in Appendix E.1). We also discuss the L2-norm case in Appendix E.2.
Compared to vanilla min-norm, the composite weights in min-regularized-norm are not fully determined by the adversarial gradients. The resulting relative stability of composite weights makes the composite gradients more robust to the adversarial environment. In the following, we give a general analysis and prove that DR-OMMD indeed guarantees sublinear regrets.
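Putting the two regularizations together, here is a minimal sketch of one DR-OMMD round for m = 2 with the Euclidean mirror map, so that the mirror step reduces to a projected gradient step; it reuses `min_reg_norm_weights_2` from the sketch above and is illustrative rather than the authors' implementation.

```python
import numpy as np

def dr_ommd_round(x, g1, g2, lam0, eta, alpha, radius):
    """One DR-OMMD round (m = 2, Euclidean mirror map)."""
    gamma = min_reg_norm_weights_2(g1, g2, lam0[0], alpha)  # regularized composite weights
    g_comp = gamma * g1 + (1.0 - gamma) * g2                # composite gradient
    y = x - eta * g_comp                                    # mirror (here: gradient) step
    norm = np.linalg.norm(y)                                # projection onto the L2 ball X
    return y if norm <= radius else y * (radius / norm)
```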
4.3 THEORETICAL ANALYSIS
Our analysis is based on two conventional assumptions (Jadbabaie et al., 2015; Hazan et al., 2016). Assumption 1. The regularization function R is 1-strongly convex. In addition, the Bregman divergence is γ-Lipschitz continuous, i.e., BR(x, z)−BR(y, z) ≤ γ∥x−y∥,∀x,y, z ∈ domR, where domR is the domain of R and satisfies X ⊂ domR ⊂ Rn. Assumption 2. There exists some finite G > 0 such that for each i ∈ {1, . . . ,m}, the i-th loss f it at each round t ∈ {1, . . . , T} is differentiable and G-Lipschitz continuous w.r.t. ∥ · ∥2, i.e., |f it (x)− f it (x′)| ≤ G∥x− x′∥2. Note that in the convex setting, this assumption leads to bounded gradients, i.e., ∥∇f it (x)∥2 ≤ G for any t ∈ {1, . . . , T}, i ∈ {1, . . . ,m},x ∈ X . Theorem 1. Suppose the diameter of X is D. Assume Ft is bounded, i.e., |f it (x)| ≤ F,∀x ∈ X , t ∈ {1, . . . , T}, i ∈ {1, . . . ,m}. For any λ0 ∈ Sm, DR-OMMD attains
R_II(T) ≤ γD/η_T + Σ^T_{t=1} (η_t/2)(‖∇F_t(x_t)λ_t‖₂² + (4F/η_t)‖λ_t − λ_0‖₁).
Remark. When η_t = √(2γD)/(G√T) or η_t = √(2γD)/(G√t), and α_t = 4F/η_t, the bound attains O(√T). It matches the optimal single-objective bound w.r.t. T (Hazan et al., 2016) and is tight w.r.t. m (justified in Appendix F.2).
Comparison with linearization. Linearization with fixed weights λ_0 ∈ S_m essentially optimizes the scalar loss λ_0ᵀF_t with gradient g_t = ∇F_t(x_t)λ_0. From OMD's tight bound (Theorem 6.8 in (Orabona, 2019)), we can derive a bound of γD/η_T + Σ^T_{t=1} (η_t/2)‖∇F_t(x_t)λ_0‖₂² for linearization. In comparison, when α_t = 4F/η_t, DR-OMMD attains a regret bound of γD/η_T + Σ^T_{t=1} (η_t/2) min_{λ∈S_m}{‖∇F_t(x_t)λ‖₂² + α_t‖λ − λ_0‖₁}, which is no larger than that of linearization. Note that although the bound of linearization refers to the single-objective regret R(T), the comparison is reasonable due to the consistency of the two regret metrics, i.e., R_II(T) = max{R(T), 0} when m = 1, as proved in Proposition 1. In the following, we further investigate the margin in the two-objective setting with linear losses. Suppose the loss functions are f^1_t(x) = xᵀg^1_t and f^2_t(x) = xᵀg^2_t for some vectors g^1_t, g^2_t ∈ R^n at each round. Then we can show that the margin is at least (see Appendix F.3 for the detailed proof)

M ≥ Σ^T_{t=1} (η_t/4) ‖λ_t − λ_0‖₂² · ‖g^1_t − g^2_t‖₂²,
which indicates the benefit of DR-OMMD. Specifically, while linearization requires an adequately chosen λ_0, DR-OMMD selects more appropriate λ_t adaptively; the advantage becomes more pronounced as the gradients of the different objectives vary wildly. This matches our intuition that linearization suffers from conflicting gradients (Yu et al., 2020), while DR-OMMD can alleviate the conflict by pursuing common descent.
5 EXPERIMENTS
In this section, we conduct experiments to compare DR-OMMD with two baselines: (i) linearization performs single-objective online learning on scalar losses λ⊤0 Ft with pre-defined fixed λ0 ∈ Sm; (ii) min-norm equips OMD with vanilla min-norm (Désidéri, 2012) for gradient composition.
5.1 CONVEX EXPERIMENTS: ADAPTIVE REGULARIZATION
Many real-world online scenarios adopt regularization to avoid overfitting. A standard scheme is to add a term r(x) to the loss ft(x) at each round and optimize the regularized loss ft(x) + σr(x) (McMahan, 2011), where σ is a pre-defined fixed hyperparameter. The formalism of multi-objective online learning provides a novel way of regularization. As r(x) measures model complexity, it can
[Figure 1 plots: (a) Effect of Preference, average loss vs. the value of λ_0^1 for linearization and DR-OMMD; (b) Learning Curve, average loss vs. number of rounds for lin-opt and DR-OMMD.]
Figure 1: Results to verify the effectiveness of adaptive regularization on protein. (a) Performance of DR-OMMD and linearization under varying λ0 = (λ10, 1−λ10). (b) Performance using the optimal weights λ0 = (0.1, 0.9).
[Figure 2 plots: (a) Task L and (b) Task R, average loss vs. number of rounds for DR-OMMD, min-norm, and linearization with weights (.25, .75), (.5, .5), (.75, .25).]
Figure 2: Results to verify the effectiveness of DR-OMMD in the non-convex setting. The two plots show the performance of DR-OMMD and various baselines on both tasks (Task L and Task R) of MultiMNIST.
be regarded as the second objective alongside the primary goal f_t(x). We can augment the loss to F_t(x) = (f_t(x), r(x)) and thereby cast regularized online learning into a two-objective problem. Compared to the standard scheme, our approach chooses σ_t = λ_t^2/λ_t^1 in an adaptive way.
We use two large-scale online benchmark datasets. (i) protein is a bioinformatics dataset for protein type classification (Wang, 2002), which has 17 thousand instances with 357 features. (ii) covtype is a biological dataset collected from a non-stationary environment for forest cover type prediction (Blackard & Dean, 1999), which has 50 thousand instances with 54 features. We set the logistic classification loss as the first objective, and the squared L2-norm of model parameters as the second objective. Since the ultimate goal of regularization is to lift predictive performance, we measure the average loss, i.e., ∑ t≤T lt(xt)/T , where lt(xt) is the classification loss at round t.
We adopt an L2-norm ball centered at the origin with diameter K = 100 as the decision set. The learning rates are decided by a grid search over {0.1, 0.2, ..., 3.0}. For DR-OMMD, the parameter α_t is simply set to 0.1. For fixed regularization, the strength σ = (1 − λ_0^1)/λ_0^1 is determined by some λ_0^1 ∈ [0, 1], which is exactly linearization with weights λ_0 = (λ_0^1, 1 − λ_0^1). We run both algorithms with varying λ_0^1 ∈ {0, 0.1, ..., 1}. In Figure 1, we plot (a) their final performance w.r.t. the choice of λ_0 and (b) their learning curves with a desirable λ_0 (e.g., (0.1, 0.9) on protein). Other results are deferred to the appendix due to the lack of space. The results show that DR-OMMD consistently outperforms fixed regularization; the gap becomes more significant when λ_0 is not properly set.
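As a rough sketch of how this two-objective casting can be wired up for online logistic regression, the code below is illustrative only: the data stream, feature dimension, and step sizes are placeholders, not the paper's exact configuration.

```python
import numpy as np

def logistic_grad(w, feat, label):              # label in {-1, +1}
    z = label * feat.dot(w)
    return -label * feat / (1.0 + np.exp(z))    # gradient of log(1 + exp(-z))

def min_reg_norm_gamma(g1, g2, gamma0, alpha):
    d = g2 - g1
    denom = d.dot(d)
    if denom < 1e-12:
        return gamma0
    gamma_L, gamma_R = (g2.dot(d) - alpha) / denom, (g2.dot(d) + alpha) / denom
    return float(np.clip(max(min(gamma0, gamma_R), gamma_L), 0.0, 1.0))

w = np.zeros(357)                                # e.g. protein has 357 features
eta, alpha, gamma0, radius = 0.5, 0.1, 0.5, 50.0
for feat, label in stream:                       # `stream` is a placeholder data source
    g1 = logistic_grad(w, feat, label)           # objective 1: classification loss
    g2 = 2.0 * w                                 # objective 2: r(w) = ||w||_2^2
    gamma = min_reg_norm_gamma(g1, g2, gamma0, alpha)
    # implied adaptive regularization strength: sigma_t = (1 - gamma) / gamma
    w -= eta * (gamma * g1 + (1.0 - gamma) * g2)
    norm = np.linalg.norm(w)
    if norm > radius:                            # projection onto the L2 ball (diameter 100)
        w *= radius / norm
```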
5.2 NON-CONVEX EXPERIMENTS: DEEP MULTI-TASK LEARNING
We use MultiMNIST (Sabour et al., 2017), which is a multi-task version of the MNIST dataset for image classification and commonly used in deep multi-task learning (Sener & Koltun, 2018; Lin et al., 2019). In MultiMNIST, each sample is composed of a random digit image from MNIST at the top-left and another image at the bottom-right. The goal is to classify the digit at the top-left (task L) and that at the bottom-right (task R) at the same time.
We follow (Sener & Koltun, 2018)’s setup with LeNet. Learning rates in all methods are selected via grid search over {0.0001, 0.001, 0.01, 0.1}. For linearization, we examine different weights (0.25, 0.75), (0.5, 0.5), and (0.75, 0.25). For DR-OMMD, αt is set according to Theorem 1, and the initial weights are simply set as λ0 = (0.5, 0.5). Note that in the online setting, samples arrive in a sequential manner, which is different from offline experiments where sample batches are randomly sampled from the training set. Figure 2 compares the average cumulative loss of all the examined methods. We also measure two conventional metrics in offline experiments, i.e., the training loss and test loss (Reddi et al., 2018); the results are similar and deferred to the appendix due to the lack of space. The results show that DR-OMMD outperforms counterpart algorithms using min-norm or linearization in all metrics on both tasks, validating its effectiveness in the non-convex setting.
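For the deep multi-task setting, one online round can be sketched as follows in PyTorch-style code. This is an illustrative sketch, not the authors' implementation: the model, its `shared_parameters()` helper, and the data names are assumptions, and the exact LeNet architecture follows (Sener & Koltun, 2018).

```python
import torch
import torch.nn.functional as F

def dr_ommd_deep_step(model, optimizer, x_batch, y_left, y_right, gamma0=0.5, alpha=0.1):
    """One online round on MultiMNIST: per-task gradients of the shared parameters
    are combined via the m = 2 closed form of min-regularized-norm, then the
    weighted loss is backpropagated."""
    optimizer.zero_grad()
    out_left, out_right = model(x_batch)                  # two task heads
    loss_l = F.cross_entropy(out_left, y_left)
    loss_r = F.cross_entropy(out_right, y_right)
    shared = list(model.shared_parameters())              # assumed helper on the model
    g_l = torch.cat([g.flatten() for g in torch.autograd.grad(loss_l, shared, retain_graph=True)])
    g_r = torch.cat([g.flatten() for g in torch.autograd.grad(loss_r, shared, retain_graph=True)])
    d = g_r - g_l
    denom = torch.dot(d, d).clamp_min(1e-12)
    gamma_L = ((torch.dot(g_r, d) - alpha) / denom).item()
    gamma_R = ((torch.dot(g_r, d) + alpha) / denom).item()
    gamma = min(max(max(min(gamma0, gamma_R), gamma_L), 0.0), 1.0)
    (gamma * loss_l + (1.0 - gamma) * loss_r).backward()  # composite descent direction
    optimizer.step()
    return loss_l.item(), loss_r.item()
```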
6 CONCLUSIONS
In this paper, we give a systematic study of multi-objective online learning, encompassing a novel framework, a new algorithm, and corresponding non-trivial theoretical analysis. We believe that this work paves the way for future research on more advanced multi-objective optimization algorithms, which may inspire the design of new optimizers for multi-task deep learning.
ACKNOWLEDGMENTS
This work was supported in part by the National Key Research and Development Program of China No. 2020AAA0106300 and National Natural Science Foundation of China No. 62250008. This work was also supported by Ant Group through Ant Research Intern Program. We would like to thank Wenliang Zhong, Jinjie Gu, Guannan Zhang and Jiaxin Liu for generous support on this project.
APPENDIX
The appendix is organized as follows. Appendix A reviews related work. Appendix B validates the correctness of our definition of PSG. Appendix C discusses the domain of the comparator in S-PSG, indicating that it makes no difference whether the comparator is selected from the Pareto optimal set or from the whole domain. Appendix D provides the detailed derivation of the equivalent form of RII(T ). Appendix E discusses how to efficiently compute the composition weights for the minregularized-norm solver. Appendix F discusses the order of DR-OMMD’s regret bound with fixed or adaptive learning rate, shows the tightness of the derived bound, and provides more details on the regret comparison between DR-OMMD and linearization. Appendix G supplements more details in the experimental setup and empirical results. Appendix H and I provide detailed proofs of the remaining theoretical claims in the main paper. Finally, Appendix J supplements regret analysis of DR-OMMD in the strongly convex setting.
A RELATED WORK
In this section, we review previous work in some related fields, i.e., online learning, multi-objective optimization, multi-objective multi-armed bandits, and multi-objective Bayesian optimization.
A.1 ONLINE LEARNING
Online learning aims to make sequential predictions for streaming data. Please refer to the introductory books (Hazan et al., 2016; Orabona, 2019) for more background knowledge.
Most of the previous works on online learning are conducted in the single-objective setting. As far as we are concerned, there are only two lines of work concerning multi-objective learning. The first line of works provides a multi-objective perspective of the prediction-with-expert-advice (PEA) problem (Koolen, 2013; Koolen & Van Erven, 2015). Specifically, they view each individual expert as a multi-objective criterion, and characterize the Pareto optimal trade-offs among different experts. These works have two main distinctions from our proposed MO-OCO. First, they are still built upon the original PEA problem where the payoff of each expert (or decision) is a scalar, while we focus on vectoral payoffs. Second, their framework is restricted to an absolute loss game, whereas our framework is general and can be applied to any coordinate-wise convex loss functions.
The second line of work studies online learning with vectoral payoffs via Blackwell approachability (Blackwell, 1956; Mannor et al., 2014; Abernethy et al., 2011). In their framework, the learner is given a target set T ⊂ Rm and its goal is to generate decisions {xt}Tt=1 to minimize the distance between the average loss ∑T t=1 lt(xt)/T and the target set T . There are two major differences between Blackwell approachability and our proposed MO-OCO: previous works on Blackwell approachability are zero-order methods and the target set T is often known beforehand (also see the discussion in (Busa-Fekete et al., 2017)), while in MO-OCO we intend to develop a first-order method to reach the unknown Pareto front.
A.2 MULTI-OBJECTIVE OPTIMIZATION
Multi-objective optimization aims to optimize multiple objectives concurrently. Most of the previous works on multi-objective optimization are conducted in the offline setting, including the batch optimization setting (Désidéri, 2012; Liu et al., 2021) and the stochastic optimization setting (Sener & Koltun, 2018; Lin et al., 2019; Yu et al., 2020; Chen et al., 2020; Javaloy & Valera, 2021). These methods are based on gradient composition, and have shown very promising results in multi-task learning applications.
Despite the existence of previous works on multi-objective optimization, as the first work of multiobjective optimization in the OCO setting, our work is largely different from them in three aspects. First, we contribute the first formal framework of multi-objective online convex optimization. In particular, our framework is based on a novel equivalent transformation of the PSG metric, which is intrinsically different from previous offline optimization frameworks. Second, we provide a showcase in which a commonly used method in the offline setting, namely min-norm (Désidéri, 2012; Sener & Koltun, 2018), fail to attain sublinear regret in online setting. Our proposed min-regularized-norm
is a novel design when tailoring offline methods to the online setting. Third, the regret analysis of multi-objective online learning is intrinsically different from the convergence analysis in the offline setting (Yu et al., 2020).
A.3 MULTI-OBJECTIVE MULTI-ARMED BANDITS
Another branch of related works study multi-objective optimization in the multi-armed bandits setting (Busa-Fekete et al., 2017; Tekin & Turğay, 2018; Turgay et al., 2018; Lu et al., 2019a; Degenne et al., 2019). Among these works, the most relevant one to ours is (Turgay et al., 2018), which introduces the Pareto suboptimality gap (PSG) metric to characterize the multi-objective regret in the bandits setting, and proposes a zero-order zooming algorithm to minimize the regret.
In this work, our regret definition also utilizes the PSG metric (Turgay et al., 2018). However, as the first study of multi-objective optimization in the OCO setting, our work is intrinsically different from these previous works in the following aspects. First, as PSG is a zero-order metric, we perform a novel equivalent transformation, making it amenable to the OCO setting. Second, our proposed algorithm is a first-order multiple gradient algorithm, whose design principles are completely distinct from zero-order algorithms. For example, the concept of the stability of composite weights does not even exist in the design of previous zero-order methods for multi-objective bandits (Turgay et al., 2018; Lu et al., 2019a). Third, the regret analysis of MO-OCO is intrinsically different from that in the bandits setting.
A.4 MULTI-OBJECTIVE BAYESIAN OPTIMIZATION
The final area related to our work is multi-objective Bayesian optimization (Zhang & Golovin, 2020; Konakovic Lukovic et al., 2020; Chowdhury & Gopalan, 2021; Maddox et al., 2021; Daulton et al., 2022), which studies Bayesian optimization with vector-valued feedback. There are two branches of works in this area, using different notions of regret. The first branch is based on scalarization, which adopts the expectation of the gap between scalarized losses over some given distribution (Chowdhury & Gopalan, 2021) as the regret. In this approach, the distribution of scalarization can be understood as a set of preference, which needs to be known beforehand. The second branch is based on Pareto optimality (Zhang & Golovin, 2020), which uses hypervolume as the discrepancy metric and adopt the gap between the true Pareto front and the estimated Pareto front as the regret.
As the first work on multi-objective optimization in the OCO setting, our work is largely different from these works in the following aspects. First, the regret definitions are different. Specifically, compared to the first branch based on scalarization, our regret definition is purely motivated by Pareto optimality, which does not need any preference in advance; compared to the second branch using hypervolume, we note that hypervolume is mainly used for Pareto front approximation, which is unsuitable to our adversarial setting where the goal is to impose the cumulative loss to reach the Pareto front. Second, multi-objective Bayesian optimization is conducted in a stochastic setting, which typically assumes that the losses follow some Gaussian distribution, whereas our work is conducted in the adversarial setting where the losses can be generated arbitrarily.
B AN EQUIVALENT DEFINITION OF PSG
Recall that in Definition 3, we formulate the PSG metric as a constrained optimization problem. We note that, since the PSG metric is based on the notion of “non-dominance” (Turgay et al., 2018), its most direct form is actually
Δ′(x; K*, F) = inf_{ε≥0} ε,  s.t.  ∀x″ ∈ K*, [∃ i ∈ {1, ..., m}, f^i(x) − ε < f^i(x″)] or [∀ i ∈ {1, ..., m}, f^i(x) − ε = f^i(x″)].
At first glance, the above definition seems quite different from Definition 3, since it contains the extra condition "∀i ∈ {1, ..., m}, f^i(x) − ε = f^i(x″)". In the following, we prove that both definitions actually yield the same value due to the infimum operation on ε.
Specifically, for any possible pair (x,K∗, F ), we denote ∆′(x;K∗, F ) = ϵ′0 and ∆(x;K∗, F ) = ϵ0. By comparing the constraints of both definitions, it is obvious that ϵ0 must satisfy the constraint
of ∆′(x;K∗, F ), hence the infimum operation guarantees that ϵ′0 ≤ ϵ0. It remains to prove that ϵ′0 ≥ ϵ0. To this end, we only need to show that ϵ′0 + ξ satisfies the constraint of ∆(x;K∗, F ) for any ξ > 0. Consider an arbitrary x′′ ∈ K∗. From the definition of ∆′(x;K∗, F ), we know that either ∃i ∈ {1, . . . ,m}, f i(x) − ϵ′0 < f i(x′′) or ∀i ∈ {1, . . . ,m}, f i(x) − ϵ′0 = f i(x′′). Whichever condition holds, we must have ∃i ∈ {1, . . . ,m}, f i(x)−ϵ′0−ξ < f i(x′′) for any ξ > 0. Since it holds for any x′′ ∈ K∗, ϵ′0 + ξ lies in the feasible region of ∆(x;K∗, F ), hence we have ϵ0 ≤ ϵ′0 + ξ,∀ξ > 0 and thus ϵ0 ≤ ϵ′0. In summary, we have ∆′(x;K∗, F ) = ∆(x;K∗, F ) for any pair (x,K∗, F ).
C DISCUSSION ON THE DOMAIN OF THE COMPARATOR IN S-PSG
Recall that in Definition 4, the comparator x′ in S-PSG is selected from the Pareto optimal set X ∗ of the cumulative loss ∑T t=1 Ft. This actually stems from the original definition of PSG (Turgay et al., 2018), which uses the Pareto optimal set as the comparator set. In fact, comparing with Pareto optimal decisions in X ∗ is already enough to measure the suboptimality of any decision sequence {xt}Tt=1. The reason is that, for any non-optimal decision x′ ∈ X − X ∗, there must exist some Pareto optimal decision x′′ ∈ X ∗ that dominates x′, hence the suboptimality metric does not need to compare with this non-optimal decision x′. In other words, even if we extend the comparator set in S-PSG to the whole domain X , the modified form will be equivalent to the original form based on the Pareto optimal set X ∗. In the following, we strictly prove this equivalence ∆({xt}Tt=1;X , {Ft}Tt=1) = ∆({xt}Tt=1;X ∗, {Ft}Tt=1). Specifically, we modify the definition of S-PSG and let the comparator domain X ′ be any subset of the decision domain X , i.e.,
Δ({x_t}^T_{t=1}; X′, {F_t}^T_{t=1}) = inf_{ε≥0} ε,  s.t.  ∀x″ ∈ X′, ∃ i ∈ {1, ..., m}, Σ^T_{t=1} f^i_t(x_t) − ε < Σ^T_{t=1} f^i_t(x″).
Then the modified regret based on the whole domain X takes R′II(T ) = ∆({xt}Tt=1;X , {Ft}Tt=1). Now we begin to prove the equivalence ∆({xt}Tt=1;X , {Ft}Tt=1) = ∆({xt}Tt=1;X ∗, {Ft}Tt=1). For any X ′ ⊂ X , let E(X ′) denote the constraint of ∆({xt}Tt=1;X ′, {Ft}Tt=1), i.e.,
E(X′) = {ε ≥ 0 | ∀x″ ∈ X′, ∃ i ∈ {1, ..., m}, Σ^T_{t=1} f^i_t(x_t) − ε < Σ^T_{t=1} f^i_t(x″)},
then Δ({x_t}^T_{t=1}; X′, {F_t}^T_{t=1}) = inf E(X′). Hence, we just need to prove inf E(X) = inf E(X*). On the one hand, since X* ⊂ X, from the above definition of S-PSG it is easy to check that any ε ∈ E(X) must also satisfy ε ∈ E(X*); hence E(X) ⊂ E(X*). On the other hand, given any ε ∈ E(X*), we now check that ε ∈ E(X). To this end, we consider an arbitrary point x″ ∈ X in two cases. (i) If x″ ∈ X*, since ε ∈ E(X*), we naturally have Σ^T_{t=1} f^{i₀}_t(x_t) − ε < Σ^T_{t=1} f^{i₀}_t(x″) for some i₀. (ii) If x″ ∉ X*, since X* is the Pareto optimal set of Σ^T_{t=1} F_t, there must exist some Pareto optimal decision x̂ ∈ X* that dominates x″ w.r.t. Σ^T_{t=1} F_t, which means that Σ^T_{t=1} f^i_t(x̂) ≤ Σ^T_{t=1} f^i_t(x″) for all i ∈ {1, ..., m}. Notice that ε ∈ E(X*) gives Σ^T_{t=1} f^{i₀}_t(x_t) − ε < Σ^T_{t=1} f^{i₀}_t(x̂) for some i₀, hence in this case we also have Σ^T_{t=1} f^{i₀}_t(x_t) − ε < Σ^T_{t=1} f^{i₀}_t(x″). Combining the above two cases, we prove that ε ∈ E(X), and consequently E(X*) ⊂ E(X). In summary, we have E(X) = E(X*), hence Δ({x_t}^T_{t=1}; X, {F_t}^T_{t=1}) = inf E(X) = inf E(X*) = Δ({x_t}^T_{t=1}; X*, {F_t}^T_{t=1}). Therefore, it makes no difference whether the comparator in R_II(T) is generated from the Pareto optimal set X* or from the whole domain X.
D DERIVATION OF THE EQUIVALENT MULTI-OBJECTIVE REGRET FORM
In this section, We strictly derive the equivalent form of RII(T ) in Proposition 1, which is highly non-trivial and forms the basis of the subsequent algorithm design and theoretical analysis.
Proof of Proposition 1. Recall that the PSG metric used in RII(T ) is an extension of vanilla PSG to leverage any decision sequence. To motivate the analysis, we first investigate vanilla PSG ∆(x;X ∗, F ) that deals with a single decision x, and derive a useful lemma as follows. Lemma 1. Vanilla PSG has an equivalent form, i.e.,
Δ(x; X*, F) = sup_{x*∈X*} inf_{λ∈S_m} λᵀ(F(x) − F(x*))₊,

where for any vector l = (l^1, ..., l^m) ∈ R^m, the truncation (l)₊ produces a vector whose i-th entry equals max{l^i, 0} for all i ∈ {1, ..., m}.
Proof. In the definition of PSG, the evaluated decision x is compared to all Pareto optimal points x′ ∈ X ∗. For any fixed comparator x′ ∈ X ∗, we define the pair-wise suboptimality gap w.r.t. F between decisions x and x′ as follows
δ(x; x′, F) = inf_{ε≥0} {ε | F(x) − ε1 ⊁ F(x′)}.

Hence, PSG can be expressed as

Δ(x; X*, F) = sup_{x′∈X*} δ(x; x′, F).
To proceed, we analyze the pair-wise gap δ(x;x′, F ). From its definition, we know that δ(x;x′, F ) measures the minimal non-negative value that needs to be subtracted from each entry of F (x) until it is not dominated by x′. Now we consider two cases.
(i) If F (x) ⊁ F (x′), i.e., fk0(x) ≤ fk0(x′) for some k0 ∈ {1, ...,m}, nothing needs to be subtracted from F (x) and we directly have δ(x;x′, F ) = 0.
(ii) If F (x) ≻ F (x′), we have fk(x) ≥ fk(x′) for all k ∈ {1, ...,m}, which obviously violates the condition F (x) − ϵ1 ⊁ F (x′) when ϵ = 0. Now let us gradually increase ϵ from zero. Notice that such a condition holds only when there there exists some k0 satisfying fk0(x) − ϵ ≤ fk0(x′), or equivalently ϵ ≥ fk0(x) − fk0(x′). Hence, in this case, we have δ(x;x′, F ) = mink∈{1,...,m}{fk(x)− fk(x′)}. Combining the above two cases, we derive an equivalent form of the pair-wise suboptimality gap. Specifically, we can easily check that the following form holds for both cases, i.e.,
δ(x; x′, F) = min_{k∈{1,...,m}} max{f^k(x) − f^k(x′), 0}.

To relate the above form with F, denote U_m = {e_k | 1 ≤ k ≤ m} as the set of all unit vectors in R^m; then we equivalently have

δ(x; x′, F) = min_{λ∈U_m} λᵀ(F(x) − F(x′))₊.

Now the calculation of δ(x; x′, F) is transformed into a minimization problem over λ ∈ U_m. Since U_m is a discrete set, we can apply a linear relaxation trick. Specifically, we now minimize the scalar p(λ) = λᵀ(F(x) − F(x′))₊ over the convex hull of U_m, which is exactly the probability simplex S_m = {λ ∈ R^m | λ ⪰ 0, ‖λ‖₁ = 1}. Note that U_m contains all the vertices of S_m. Since inf_{λ∈S_m} p(λ) is a linear optimization problem, the minimum must be attained at a vertex of the simplex, i.e., at some λ* ∈ U_m. Hence, the relaxed problem is equivalent to the original problem, namely,

δ(x; x′, F) = min_{λ∈U_m} λᵀ(F(x) − F(x′))₊ = inf_{λ∈S_m} λᵀ(F(x) − F(x′))₊.
Taking the supremum of both sides over x′ ∈ X ∗, we prove the lemma. ■
The above lemma can be naturally extended to the sequence-wise variant S-PSG. Specifically, we can extend the pair-wise suboptimality gap δ(x;x′, F ) to measure any decision sequence, which now becomes
δ({x_t}^T_{t=1}; x′, {F_t}^T_{t=1}) = inf_{ε≥0} {ε | Σ^T_{t=1} F_t(x_t) − ε1 ⊁ Σ^T_{t=1} F_t(x′)}.

Then S-PSG can be expressed as

Δ({x_t}^T_{t=1}; X*, {F_t}^T_{t=1}) = sup_{x*∈X*} δ({x_t}^T_{t=1}; x*, {F_t}^T_{t=1}).

Similar to the derivation of the above lemma, by investigating the relation between Σ^T_{t=1} F_t(x_t) and Σ^T_{t=1} F_t(x′), we can derive an equivalent form of δ({x_t}^T_{t=1}; x′, {F_t}^T_{t=1}) as

δ({x_t}^T_{t=1}; x′, {F_t}^T_{t=1}) = min_{k∈{1,...,m}} max{Σ^T_{t=1} f^k_t(x_t) − Σ^T_{t=1} f^k_t(x′), 0},

and further

δ({x_t}^T_{t=1}; x′, {F_t}^T_{t=1}) = inf_{λ∈S_m} λᵀ(Σ^T_{t=1} F_t(x_t) − Σ^T_{t=1} F_t(x′))₊.

Hence, the S-PSG-based regret can be expressed as

R_II(T) = sup_{x*∈X*} inf_{λ∈S_m} λᵀ(Σ^T_{t=1} F_t(x_t) − Σ^T_{t=1} F_t(x*))₊.
The max-min form of RII(T ) has a truncation operation (·)+, which brings irregularity to the regret form. To handle the truncation operation, we utilize the following lemma:
Lemma 2. (a) For any l ∈ Rm, we have infλ∈Sm λ⊤(l)+ = max{infλ∈Sm λ⊤l, 0}. (b) For any h : X → R, we have supx∈X max{h(x), 0} = max{supx∈X h(x), 0}.
Proof. To prove the first statement, we consider the following two cases. (i) If l ≻ 0, then (l)+ = l. For any λ ∈ Sm, we have λ⊤(l)+ = λ⊤l > 0. Taking the infimum over λ ∈ Sm on both sides, we have infλ⊤Sm λ⊤(l)+ = infλ∈Sm λ⊤l ≥ 0. Moreover, from the last equation we have max{infλ∈Sm λ⊤l, 0} = infλ∈Sm λ⊤l, which proves the statement in this case. (ii) If l ⊁ 0, then li ≤ 0 for some i ∈ {1, ...,m}. Set ei as the i-th unit vector in Rm, then we have e⊤i l ≤ 0. One the one hand, since ei ∈ Sm, we have infλ∈Sm λ⊤l ≤ e⊤i l ≤ 0, and further max{infλ∈Sm λ⊤l, 0} = 0. On the other hand, notice that e⊤i (l)+ = 0 and λ⊤(l)+ ≥ 0 for any λ ∈ Sm, then infλ∈Sm λ⊤(l)+ = e⊤i (l)+ = 0. Hence, the statement also holds in this case. To prove the second statement, we also consider two cases. (i) If h(x0) > 0 for some x0 ∈ X , then supx∈X h(x) ≥ h(x0) > 0, and max{supx∈X h(x), 0} = supx∈X h(x). Since we also have supx∈X max{h(x), 0} = supx∈X h(x), the statement holds in this case. (ii) If h(x) ≤ 0 for all x ∈ X , then supx∈X h(x) ≤ 0, and thus max{supx∈X h(x), 0} = 0. Meanwhile, for any x ∈ X , we have max{h(x)} = 0, which validates the statement in this case.
■
From the above lemma, we directly have
R_II(T) = sup_{x*∈X*} max{ inf_{λ∈S_m} λᵀ(Σ^T_{t=1} F_t(x_t) − Σ^T_{t=1} F_t(x*)), 0 }
        = max{ sup_{x*∈X*} inf_{λ∈S_m} λᵀ(Σ^T_{t=1} F_t(x_t) − Σ^T_{t=1} F_t(x*)), 0 },
which derives the desired equivalent form. ■
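The equivalence can also be sanity-checked numerically on a small finite instance (illustrative code, not from the paper): the inner infimum of a linear function over the simplex is attained at a vertex, so it reduces to a coordinate-wise minimum, while the original S-PSG can be evaluated by a direct search over ε.

```python
import numpy as np

rng = np.random.default_rng(0)
m, T = 3, 5
X = [rng.normal(size=2) for _ in range(20)]            # finite decision set
A = [rng.normal(size=(m, 2)) for _ in range(T)]        # F_t(x) = A_t @ x (linear losses)
xs = [X[rng.integers(len(X))] for _ in range(T)]       # an arbitrary decision sequence

cum = lambda x: sum(A[t] @ x for t in range(T))        # cumulative vector loss of a fixed x
L_seq = sum(A[t] @ xs[t] for t in range(T))            # cumulative vector loss of the sequence

# equivalent form: max(0, max_{x''} min_i (L_seq - cum(x''))_i); the comparator set is
# taken as the whole finite domain, which Appendix C shows gives the same value as X*
equiv = max(0.0, max(np.min(L_seq - cum(x)) for x in X))

# original S-PSG: the smallest eps >= 0 such that every comparator x'' has some
# coordinate i with L_seq_i - eps < cum(x'')_i, found by a fine grid search
grid = np.linspace(0.0, 50.0, 200001)
ok = lambda e: all(np.any(L_seq - e < cum(x) + 1e-12) for x in X)
direct = next(e for e in grid if ok(e))

print(equiv, direct)   # the two values agree up to the grid resolution
```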
E CALCULATION OF MIN-REGULARIZED-NORM
In this section, we discuss how to efficiently calculate the solutions to min-regularized-norm with L1-norm and L2-norm.
Algorithm 2 Frank-Wolfe Solver for Min-Regularized-Norm with L1-Norm
1: Initialize: λ_t = (γ^1_t, ..., γ^m_t) = (1/m, ..., 1/m).
2: Compute the matrix U = ∇F_t(x_t)ᵀ∇F_t(x_t), i.e., U^{ij} = ∇f^i_t(x_t)ᵀ∇f^j_t(x_t), ∀i, j ∈ {1, ..., m}.
3: repeat
4:   Select an index k ∈ argmax_{i∈{1,...,m}} {Σ^m_{j=1} γ^j_t U^{ij} + α sgn(γ^i_t − γ^i_0)}.
5:   Compute δ ∈ argmin_{0≤δ≤1} ‖δ∇f^k_t(x_t) + (1 − δ)∇F_t(x_t)λ_t‖₂² + α‖δ(e_k − λ_t) + λ_t − λ_0‖₁.
6:   Update λ_t = (1 − δ)λ_t + δ e_k.
7: until δ ∼ 0 or the iteration limit is reached
8: return λ_t.
E.1 L1-NORM
Similar to (Sener & Koltun, 2018), we first consider the setting of two objectives, namely m = 2. In this case, for any λ = (γ, 1 − γ), λ_0 = (γ_0, 1 − γ_0) ∈ S_2, the L1-regularization ‖λ − λ_0‖₁ equals 2|γ − γ_0|. Hence min-regularized-norm with L1-norm at round t reduces to λ_t = (γ_t, 1 − γ_t) where

γ_t ∈ argmin_{0≤γ≤1} ‖γ g_1 + (1 − γ) g_2‖₂² + 2α|γ − γ_0|.

Interestingly, the above problem has a closed-form solution.
Proposition 3. Set γ_L = (g_2ᵀ(g_2 − g_1) − α)/‖g_2 − g_1‖₂² and γ_R = (g_2ᵀ(g_2 − g_1) + α)/‖g_2 − g_1‖₂². Then min-regularized-norm with L1-norm produces weights λ_t = (γ_t, 1 − γ_t) where

γ_t = max{min{γ″_t, 1}, 0},  where  γ″_t = max{min{γ_0, γ_R}, γ_L}.
Proof. We solve the following two quadratic sub-problems, i.e.,

min_{0≤γ≤γ_0} h_1(γ) = ‖γ g_1 + (1 − γ) g_2‖₂² + 2α(γ_0 − γ),

as well as

min_{γ_0≤γ≤1} h_2(γ) = ‖γ g_1 + (1 − γ) g_2‖₂² + 2α(γ − γ_0).
It can be checked that in the former sub-problem, h1 monotonously decreases on (−∞, γR] and increases on [γR,+∞); in the latter sub-problem, h2 monotonously decreases on (−∞, γL] and increases on [γL,+∞). Since each sub-problem has its constraint ([0, γ0] or [γ0, 1]), the solution to the original optimization problem can then be derived by comparing the optimal values of the two sub-problems with their constraints. Specifically, notice that γL ≤ γR and 0 ≤ γ0 ≤ 1, and we can consider the following three cases.
(i) When 0 ≤ γ0 ≤ γL ≤ γR, then h1 monotonously decreases on [0, γ0] and its minimum on [0, γ0] is h1(γ0). Notice that h1(γ0) = h2(γ0). For the sub-problem of h2, we further consider two situations: (i-a) If γL ≤ 1, then γL ∈ [γ0, 1], hence the minimum of h2 on [γ0, 1] is h2(γL). Since h2(γL) ≤ h2(γ0) = h1(γ0), the minimal point of the original problem is γL, and hence γt = γL. (i-b) If γL > 1, then h2 monotonously decreases on [γ0, 1], and we surely have h2(1) ≤ h2(γ0) = h1(γ0). Hence γt = 1 in this situation. Combining the above two situations, we have γt = min{γL, 1} in this case. (ii) When γL ≤ γR ≤ γ0 ≤ 1, then h2 monotonously increases on [γ0, 1] and its minimum on [γ0, 1] is h2(γ0). Notice that h1(γ0) = h2(γ0). For the sub-problem of h1, similar to the first case, we also consider two situations: (ii-a) If γR ≥ 0, then γR ∈ [0, γ0], hence the minimum of h1 on [0, γ0] is h1(γR). Since h1(γR) ≤ h1(γ0) = h2(γ0), the minimal point of the original problem is γR, and hence γt = γR. (ii-b) If γR < 0, then h1 monotonously increases on [0, γ0]. Hence we have h1(0) ≤ h1(γ0) = h2(γ0). Hence the solution to the original problem γt = 0. Combining the above two situations, we have γt = max{γR, 0} in this case.
Algorithm 3 Frank-Wolfe Solver for Min-Regularized-Norm with L2-Norm
1: Initialize: λ_t = (γ^1_t, ..., γ^m_t) = (1/m, ..., 1/m).
2: Compute the matrix U = ∇F_t(x_t)ᵀ∇F_t(x_t), i.e., U^{ij} = ∇f^i_t(x_t)ᵀ∇f^j_t(x_t), ∀i, j ∈ {1, ..., m}.
3: repeat
4:   Select an index k ∈ argmax_{i∈{1,...,m}} {Σ^m_{j=1} γ^j_t U^{ij} + α(γ^i_t − γ^i_0)}.
5:   Compute δ ∈ argmin_{0≤δ≤1} ‖δ∇f^k_t(x_t) + (1 − δ)∇F_t(x_t)λ_t‖₂² + α‖δ(e_k − λ_t) + λ_t − λ_0‖₂², which has the analytical form
       δ = max{min{ [(∇F_t(x_t)λ_t − ∇f^k_t(x_t))ᵀ∇F_t(x_t)λ_t − α(e_k − λ_t)ᵀ(λ_t − λ_0)] / [‖∇F_t(x_t)λ_t − ∇f^k_t(x_t)‖₂² + α‖e_k − λ_t‖₂²], 1}, 0}.
6:   Update λ_t = (1 − δ)λ_t + δ e_k.
7: until δ ∼ 0 or the iteration limit is reached
8: return λ_t.
(iii) When γL < γ0 < γR, then h1 monotonously decreases on [0, γ0] and h2 monotonously increases on [γ0, 1]. Hence each sub-problem attains its minimum at γ0, and thus γt = γ0.
Summarizing the above three cases gives
γ_t = min{γ_L, 1} if γ_0 ≤ γ_L;  γ_t = max{γ_R, 0} if γ_0 ≥ γ_R;  γ_t = γ_0 otherwise.

We can further rewrite the above into the following compact form, which can be checked case by case:

γ_t = max{min{γ″_t, 1}, 0},  where  γ″_t = max{min{γ_0, γ_R}, γ_L}.

This gives the closed-form solution of min-regularized-norm when m = 2. ■
Now that we have derived the closed-form solution to min-regularized-norm with L1-norm for m = 2, the general case m > 2 can be handled by the Frank-Wolfe solver in Algorithm 2.

1. What is the focus of the paper regarding multi-objective online learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in defining regret?
3. Do you have any concerns or suggestions regarding the paper's contributions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Can the proposed approach be extended to settings with partial feedback or additional regularity assumptions?

Summary Of The Paper
This paper studies multi-objective online learning, a framework in which the learner has to optimize jointly several conflicting objectives. In particular the notion of optimality and discrepancy need to be redefined. The authors propose a definition of the regret based on the sequence-wise Pareto Suboptimality Gap. Then, they propose an algorithm based on the regularized min-norm and prove it achieves a √T regret. Experiments complement the contributions.
Strengths And Weaknesses
Strengths
the paper is globally clear and well written
the topic is of interest to the ICLR community
Weaknesses
I am not fully convinced by the definition of the regret (which is one important contribution of the paper). Typically, in Proposition 1, isn't λ* just selecting the objective with respect to which the regret is the smallest? If the previous regret proposed was indeed too hard, this one seems on the contrary a bit easy. I point out this paper [1], where the authors study multitask Bayesian optimization. In particular in Section 2 they define a notion of regret which is the expectation (over a distribution of directions) of the scalarized regret. This seems to me a sensible definition, as opposed to the best (or the worst) possible direction. Could the authors comment on this point?
Questions
one could imagine a setting where feedback (i.e., the gradient) is received only for a subset of the objectives. How would the regret bound look like in that case?
does the DR-OMMD achieve improved regret bounds if the objectives satisfy additional regularity assumption (e.g., strong convexity, exp-concavity)? If not, what is the reason?
it seems to me that the λ can be interpreted as learning rates. There are many ways to optimally tune the learning rate and adapt to the comparator, see e.g. [2, Chapter 9]. Could it make sense to use them rather than the regularized min-max norm to derive a sequence of optimal λ_t?
[1] No-regret Algorithms for Multi-task Bayesian Optimization, Chowdhury and Gopalan 2020 [2] A Modern Introduction to Online Learning, Orabona 2020
Clarity, Quality, Novelty And Reproducibility
Globally good |
ICLR | Title
Multi-Objective Online Learning
Abstract
This paper presents a systematic study of multi-objective online learning. We first formulate the framework of Multi-Objective Online Convex Optimization, which encompasses a novel multi-objective regret. This regret is built upon a sequencewise extension of the commonly used discrepancy metric Pareto suboptimality gap in zero-order multi-objective bandits. We then derive an equivalent form of the regret, making it amenable to be optimized via first-order iterative methods. To motivate the algorithm design, we give an explicit example in which equipping OMD with the vanilla min-norm solver for gradient composition will incur a linear regret, which shows that merely regularizing the iterates, as in single-objective online learning, is not enough to guarantee sublinear regrets in the multi-objective setting. To resolve this issue, we propose a novel min-regularized-norm solver that regularizes the composite weights. Combining min-regularized-norm with OMD results in the Doubly Regularized Online Mirror Multiple Descent algorithm. We further derive the multi-objective regret bound for the proposed algorithm, which matches the optimal bound in the single-objective setting. Extensive experiments on several real-world datasets verify the effectiveness of the proposed algorithm.
1 INTRODUCTION
Traditional optimization methods for machine learning are usually designed to optimize a single objective. However, in many real-world applications, we are often required to optimize multiple correlated objectives concurrently. For example, in autonomous driving (Huang et al., 2019; Lu et al., 2019b), self-driving vehicles need to solve multiple tasks such as self-localization and object identification at the same time. In online advertising (Ma et al., 2018a;b), advertising systems need to decide on the exposure of items to different users to maximize both the Click-Through Rate (CTR) and the Post-Click Conversion Rate (CVR). In most multi-objective scenarios, the objectives may conflict with each other (Kendall et al., 2018). Hence, there may not exist any single solution that can optimize all the objectives simultaneously. For example, merely optimizing CTR or CVR will degrade the performance of the other (Ma et al., 2018a;b).
Multi-objective optimization (MOO) (Marler & Arora, 2004; Deb, 2014) is concerned with optimizing multiple conflicting objectives simultaneously. It seeks Pareto optimality, where no single objective can be improved without hurting the performance of others. Many different methods for MOO have been proposed, including evolutionary methods (Murata et al., 1995; Zitzler & Thiele, 1999), scalarization methods (Fliege & Svaiter, 2000), and gradient-based iterative methods (Désidéri, 2012). Recently, the Multiple Gradient Descent Algorithm (MGDA) and its variants have been introduced to the training of multi-task deep neural networks and achieved great empirical success (Sener & Koltun, 2018), making them regain a significant amount of research interest (Lin et al., 2019; Yu et al., 2020; Liu et al., 2021). These methods compute a composite gradient based on
∗Equal contributions. †Corresponding author.
the gradient information of all the individual objectives and then apply the composite gradient to update the model parameters. The composite weights are determined by a min-norm solver (Désidéri, 2012) which yields a common descent direction of all the objectives.
However, compared to the increasingly wide application prospect, the gradient-based iterative algorithms are relatively understudied, especially in the online learning setting. Multi-objective online learning is of essential importance for reasons in two folds. First, due to the data explosion in many real-world scenarios such as web applications, making in-time predictions requires performing online learning. Second, the theoretical investigation of multi-objective online learning will lay a solid foundation for the design of new optimizers for multi-task deep learning. This is analogous to the single-objective setting, where nearly all the optimizers for training DNNs are initially analyzed in the online setting, such as AdaGrad (Duchi et al., 2011), Adam (Kingma & Ba, 2015), and AMSGrad (Reddi et al., 2018).
In this paper, we give a systematic study of multi-objective online learning. To begin with, we formulate the framework of Multi-Objective Online Convex Optimization (MO-OCO). One major challenge in deriving MO-OCO is the lack of a proper regret definition. In the multi-objective setting, in general, no single decision can optimize all the objectives simultaneously. Thus, to devise the multi-objective regret, we need to first extend the single fixed comparator used in the singleobjective regret, i.e., the fixed optimal decision, to the entire Pareto optimal set. Then we need an appropriate discrepancy metric to evaluate the gap between vector-valued losses. Intuitively, the Pareto suboptimality gap (PSG) metric, which is frequently used in zero-order multi-objective bandits (Turgay et al., 2018; Lu et al., 2019a), is a very promising candidate. PSG can yield scalarized measurements from any vector-valued loss to a given comparator set. However, we find that vanilla PSG is unsuitable for our setting since it always yields non-negative values and may be too loose. In a concrete example, we show that the naive PSG-based regret RI(T ) can even be linear w.r.t. T when the decisions are already optimal, which disqualifies it as a regret metric. To overcome the failure of vanilla PSG, we propose its sequence-wise variant termed S-PSG, which measures the suboptimality of the whole decision sequence to the Pareto optimal set of the cumulative loss function. Optimizing the resulting regret RII(T ) will drive the cumulative loss to approach the Pareto front. However, as a zero-order metric motivated geometrically, designing appropriate first-order algorithms to directly optimize it is too difficult. To resolve the issue, we derive a more intuitive equivalent form of RII(T ) via a highly non-trivial transformation.
Based on the MO-OCO framework, we develop a novel multi-objective online algorithm termed Doubly Regularized Online Mirror Multiple Descent. The key module of the algorithm is the gradient composition scheme, which calculates a composite gradient in the form of a convex combination of the gradients of all objectives. Intuitively, the most direct way to determine the composite weights is to apply the min-norm solver (Désidéri, 2012) commonly used in offline multi-objective optimization. However, directly applying min-norm is not workable in the online setting. Specifically, the composite weights in min-norm are merely determined by the gradients at the current round. In the online setting, since the gradients are adversarial, they may result in undesired composite weights, which further produce a composite gradient that reversely optimizes the loss. To rigorously verify this point, we give an example where equipping OMD with vanilla min-norm incurs a linear regret, showing that only regularizing the iterate, as in OMD, is not enough to guarantee sublinear regrets in our setting. To fix the issue, we devise a novel min-regularized-norm solver with an explicit regularization on composite weights. Equipping it with OMD results in our proposed algorithm. In theory, we derive a regret bound of O( √ T ) for DR-OMMD, which matches the optimal bound in the single-objective setting (Hazan et al., 2016) and is tight w.r.t. the number of objectives. Our analysis also shows that DR-OMMD attains a smaller regret bound than that of linearization with fixed composite weights. We show that, in the two-objective setting with linear losses, the margin between the regret bounds depends on the difference between the composite weights yielded by the two algorithms and the difference between the gradients of the two underlying objectives.
To evaluate the effectiveness of DR-OMMD, we conduct extensive experiments on several large-scale real-world datasets. We first realize adaptive regularization via multi-objective optimization, and find that adaptive regularization with DR-OMMD significantly outperforms fixed regularization with linearization, which verifies the effectiveness of DR-OMMD over linearization in the convex setting. Then we apply DR-OMMD to deep online multi-task learning. The results show that DR-OMMD is also effective in the non-convex setting.
2 PRELIMINARIES
In this section, we briefly review the necessary background knowledge of two related fields.
2.1 MULTI-OBJECTIVE OPTIMIZATION
Multi-objective optimization (MOO) is concerned with solving the problems of optimizing multiple objectives simultaneously (Fliege & Svaiter, 2000; Deb, 2014). In general, since different objectives may conflict with each other, there is no single solution that can optimize all the objectives at the same time, hence the conventional concept of optimality used in the single-objective setting is no longer suitable. Instead, MOO seeks to achieve Pareto optimality. In the following, we give the relevant definitions more formally. We use a vector-valued loss F = (f^1, . . . , f^m) to denote the objectives, where m ≥ 2 and f^i : X → R, i ∈ {1, . . . , m}, X ⊂ R^n, is the i-th loss function. Definition 1 (Pareto optimality). (a) For any two solutions x, x′ ∈ X, we say that x dominates x′, denoted as x ≺ x′ or x′ ≻ x, if f^i(x) ≤ f^i(x′) for all i, and there exists one i such that f^i(x) < f^i(x′); otherwise, we say that x does not dominate x′, denoted as x ⊀ x′ or x′ ⊁ x. (b) A solution x∗ ∈ X is called Pareto optimal if it is not dominated by any other solution in X.
Note that there may exist multiple Pareto optimal solutions. For example, it is easy to show that the optimizer of any single objective, i.e., x∗_i ∈ argmin_{x∈X} f^i(x), i ∈ {1, . . . , m}, is Pareto optimal. Different Pareto optimal solutions reflect different trade-offs among the objectives (Lin et al., 2019). Definition 2 (Pareto front). (a) All Pareto optimal solutions form the Pareto set PX(F). (b) The image of PX(F) constitutes the Pareto front, denoted as P(F) = {F(x) | x ∈ PX(F)}.
Now that we have established the notion of optimality in MOO, we proceed to introduce the metrics that measure the discrepancy of an arbitrary solution x ∈ X from being optimal. Recall that, in the single-objective setting with merely one loss function f : Z → R, for any z ∈ Z, the loss difference f(z) − min_{z′′∈Z} f(z′′) is directly qualified as the discrepancy measure. However, in MOO with more than one loss, for any x ∈ X, the loss difference F(x) − F(x′′), where x′′ ∈ PX(F), is a vector. Intuitively, the desired discrepancy metric shall scalarize the vector-valued loss difference and yield 0 for any Pareto optimal solution. In general, in MOO, there are two commonly used discrepancy metrics, i.e., the Pareto suboptimality gap (PSG) (Turgay et al., 2018) and the Hypervolume (HV) (Bradstreet, 2011). As HV is a complex volume-based metric, it is more difficult to optimize via gradient-based algorithms (Zhang & Golovin, 2020). Hence in this paper, we adopt PSG, which has already been extensively used in multi-objective bandits (Turgay et al., 2018; Lu et al., 2019a). Definition 3 (Pareto suboptimality gap1). For any x ∈ X, the Pareto suboptimality gap to a given comparator set Z ⊂ X, denoted as ∆(x; Z, F), is defined as the minimal scalar ϵ ≥ 0 that needs to be subtracted from all entries of F(x), such that F(x) − ϵ1 is not dominated by any point in Z, where 1 denotes the all-one vector in R^m, i.e.,
∆(x; Z, F) = inf_{ϵ≥0} ϵ, s.t. ∀x′′ ∈ Z, ∃ i ∈ {1, . . . , m}, f^i(x) − ϵ < f^i(x′′).
Clearly, PSG is a distance-based discrepancy metric motivated from a purely geometric viewpoint. In practice, the comparator set Z is often set to be the Pareto set X∗ = PX(F) (Turgay et al., 2018); therein, for any x ∈ X, its PSG is always non-negative and equals zero if and only if x ∈ PX(F).
Multiple Gradient Descent Algorithm (MGDA) is an offline first-order MOO algorithm (Fliege & Svaiter, 2000; Désidéri, 2012). At each iteration l ∈ {1, . . . , L} (L is the number of iterations), it first computes the gradient ∇f^i(x_l) of each objective, then derives the composite gradient g^comp_l = ∑_{i=1}^m λ^i_l ∇f^i(x_l) as a convex combination of these gradients, and finally applies g^comp_l to execute a gradient descent step to update the decision, i.e., x_{l+1} = x_l − η g^comp_l (η is the step size). The core part of MGDA is the module that determines the composite weights λ_l = (λ^1_l, . . . , λ^m_l), given by

λ_l = argmin_{λ_l ∈ S_m} ∥∑_{i=1}^m λ^i_l ∇f^i(x_l)∥_2^2,

where S_m = {λ ∈ R^m | ∑_{i=1}^m λ^i = 1, λ^i ≥ 0, i ∈ {1, . . . , m}} is the probability simplex in R^m. This is a min-norm solver, which finds the weights in the simplex that yield the minimum L2-norm of the composite gradient. Thus MGDA is also called the min-norm method. Previous works
1Our definition looks a bit different from (Turgay et al., 2018). In Appendix B, we show they are equivalent.
(Désidéri, 2012; Sener & Koltun, 2018) showed that when all f i are convex functions, MGDA is guaranteed to decrease all the objectives simultaneously until it reaches a Pareto optimal decision.
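To make the min-norm step concrete, the following NumPy sketch (ours, not from the cited papers) performs one MGDA iteration for two objectives using the closed-form solution of the two-gradient min-norm problem; the toy quadratic objectives and step size are illustrative assumptions.

```python
import numpy as np

def min_norm_weights_2obj(g1, g2):
    """Closed-form min-norm weights for two gradients:
    gamma = argmin_{0 <= gamma <= 1} ||gamma*g1 + (1-gamma)*g2||_2^2."""
    diff = g2 - g1
    denom = float(diff @ diff)
    if denom == 0.0:                      # identical gradients: any convex combination works
        return 0.5, 0.5
    gamma = float(diff @ g2) / denom      # unconstrained minimizer
    gamma = min(max(gamma, 0.0), 1.0)     # project onto [0, 1]
    return gamma, 1.0 - gamma

# One MGDA step on two toy quadratics f1(x) = ||x - a||^2 and f2(x) = ||x - b||^2.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, eta = np.array([2.0, 2.0]), 0.1
g1, g2 = 2 * (x - a), 2 * (x - b)         # gradients of f1, f2 at x
lam1, lam2 = min_norm_weights_2obj(g1, g2)
x = x - eta * (lam1 * g1 + lam2 * g2)     # common descent step
```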
2.2 ONLINE CONVEX OPTIMIZATION
Online Convex Optimization (OCO) (Zinkevich, 2003; Hazan et al., 2016) is the most commonly adopted framework for designing online learning algorithms. It can be viewed as a structured repeated game between a learner and an adversary. At each round t ∈ {1, . . . , T}, the learner is required to generate a decision x_t from a convex compact set X ⊂ R^n. Then the adversary responds with a convex function f_t : X → R and the learner suffers the loss f_t(x_t). The goal of the learner is to minimize the regret with respect to the best fixed decision in hindsight, i.e.,
R(T) = ∑_{t=1}^T f_t(x_t) − min_{x∗∈X} ∑_{t=1}^T f_t(x∗).
A meaningful regret is required to be sublinear in T , i.e., limT→∞ R(T )/T = 0, which implies that when T is large enough, the learner can perform as well as the best fixed decision in hindsight.
Online Mirror Descent (OMD) (Hazan et al., 2016) is a classic first-order online learning algorithm. At each round t ∈ {1, . . . , T}, OMD yields its decision via
x_{t+1} = argmin_{x∈X} η⟨∇f_t(x_t), x⟩ + B_R(x, x_t),
where η is the step size, R : X → R is the regularization function, and BR(x,x′) = R(x)−R(x′)− ⟨∇R(x′),x − x′⟩ is the Bregman divergence induced by R. As a meta-algorithm, by instantiating different regularization functions, OMD can induce two important algorithms, i.e., Online Gradient Descent (Zinkevich, 2003) and Online Exponentiated Gradient (Hazan et al., 2016).
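As a small illustration of how the choice of R instantiates OMD, the following sketch (ours, purely illustrative) shows the two resulting updates: the Euclidean regularizer gives a projected gradient step (here onto an L2 ball, an assumed decision set), while negative entropy over the simplex gives the exponentiated-gradient update.

```python
import numpy as np

def ogd_step(x, grad, eta, radius):
    """OMD with R(x) = 0.5*||x||_2^2: a gradient step followed by projection onto an L2 ball."""
    y = x - eta * grad
    norm = np.linalg.norm(y)
    return y if norm <= radius else y * (radius / norm)

def eg_step(p, grad, eta):
    """OMD with the negative-entropy regularizer over the simplex: exponentiated gradient."""
    q = p * np.exp(-eta * grad)
    return q / q.sum()
```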
3 MULTI-OBJECTIVE ONLINE CONVEX OPTIMIZATION
In this section, we formally formulate the MO-OCO framework.
Framework overview. Analogously to single-objective OCO, MO-OCO can be viewed as a repeated game between an online learner and the adversarial environment. The main difference is that in MO-OCO, the feedback is vector-valued. The general framework of MO-OCO is given as follows. At each round t ∈ {1, . . . , T}, the learner generates a decision x_t from a given convex compact decision set X ⊂ R^n. Then the adversary responds to the decision with a vector-valued loss function F_t : X → R^m, whose i-th component f^i_t : X → R is a convex function corresponding to the i-th objective, and the learner suffers the vector-valued loss F_t(x_t). The goal of the learner is to generate a sequence of decisions {x_t}_{t=1}^T to minimize a certain kind of multi-objective regret. The remaining work in framework formulation is to give an appropriate regret definition, which is the most challenging part. Recall that the single-objective regret R(T) = ∑_{t=1}^T f_t(x_t) − ∑_{t=1}^T f_t(x∗) is defined as the difference between the cumulative loss of the actual decisions {x_t}_{t=1}^T and that of the fixed optimal decision in hindsight x∗ ∈ argmin_{x∈X} ∑_{t=1}^T f_t(x). When defining the multi-objective analogue of R(T), we encounter two issues. First, in the multi-objective setting, no single decision can optimize all the objectives simultaneously in general, hence we cannot compare the cumulative loss with that of any single decision. Instead, we use the Pareto optimal set X∗ of the cumulative loss function ∑_{t=1}^T F_t, i.e., X∗ = PX(∑_{t=1}^T F_t), which naturally aligns with the optimality concept in MOO. Second, to compare {x_t}_{t=1}^T and X∗ in the loss space, we need a discrepancy metric to measure the gap between vector losses. Intuitively, we can adopt the commonly used PSG metric (Turgay et al., 2018). But we find that vanilla PSG is not appropriate for OCO, which is largely different from the bandits setting. We explicate the reason in the following.
3.1 THE NAIVE REGRET BASED ON VANILLA PSG FAILS IN MO-OCO
By definition, at each round t, the difference between the decision xt and the Pareto optimal set can be evaluated by PSG ∆(xt;X ∗, Ft). Naturally, we can formulate the multi-objective regret by accumulating ∆(xt;X ∗, Ft) over all rounds, i.e.,
R_I(T) := ∑_{t=1}^T ∆(x_t; X∗, F_t).
Recall that the single-objective regret can also be expressed as R(T) = ∑_{t=1}^T (f_t(x_t) − f_t(x∗)). Hence, R_I(T) essentially extends the scalar discrepancy f_t(x_t) − f_t(x∗) to the PSG metric ∆(x_t; X∗, F_t). However, these two discrepancy metrics have a major difference, i.e., f_t(x_t) − f_t(x∗) can be negative, whereas ∆(x_t; X∗, F_t) is always non-negative. In previous bandits settings (Turgay et al., 2018), the discrepancy is intrinsically non-negative, since the comparator set is exactly the Pareto optimal set of the evaluated loss function. However, the non-negative property of PSG can be problematic in our setting, where the comparator set X∗ is the Pareto set of the cumulative loss function, rather than the instantaneous loss F_t that is used for evaluation. Specifically, at some round t, the decision x_t may Pareto dominate all points in X∗ w.r.t. F_t, which corresponds to the single-objective setting where it is possible that f_t(x_t) < f_t(x∗) at some specific round. In this case, we would expect the discrepancy metric at this round to be negative. However, PSG can only yield 0 in this case, making the regret much looser than we expect. In the following, we provide an example in which the naive regret R_I(T) is linear w.r.t. T even when the decisions x_t are already optimal.
Problem instance. Set X = [−2, 2]. Let the loss function be identical among all objectives, i.e., f^1_t(x) = ... = f^m_t(x), and alternate between x and −x. Suppose the time horizon T is an even number; then the Pareto optimal set X∗ = X. Now consider the decisions x_t = 1, t ∈ {1, ..., T}. In this case, it can easily be checked that the single-objective regret of each objective is zero, indicating that these decisions are optimal for each objective. To calculate R_I(T), notice that when all the objectives are identical, PSG reduces to ∆(x_t; X∗, f^1_t) = sup_{x∗∈X} max{f^1_t(x_t) − f^1_t(x∗), 0} at each round t. Hence, in this case we have R_I(T) = ∑_{1≤k≤T/2} (sup_{x∗∈[−2,2]} max{1 − x∗, 0} + sup_{x∗∈[−2,2]} max{x∗ − 1, 0}) = 2T, which is linear w.r.t. T. Therefore, R_I(T) is too loose to measure the suboptimality of decisions, which disqualifies it as a regret metric.
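As a quick numerical sanity check of this instance (our own script, not part of the paper), one can evaluate the per-round PSG of the fixed decision x_t = 1 on a grid of comparators in X∗ = [−2, 2] for the two alternating losses and confirm the per-pair-of-rounds value of 4, hence linear growth of R_I(T).

```python
import numpy as np

xs_star = np.linspace(-2.0, 2.0, 4001)              # a dense grid over X* = X = [-2, 2]
x_t = 1.0                                           # the fixed decision x_t = 1

psg_odd = np.max(np.maximum(x_t - xs_star, 0.0))    # rounds with f_t(x) = x   -> 3.0
psg_even = np.max(np.maximum(xs_star - x_t, 0.0))   # rounds with f_t(x) = -x  -> 1.0
print(psg_odd, psg_even, psg_odd + psg_even)        # 3.0 1.0 4.0 per pair of rounds, i.e., 2T overall
```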
3.2 THE ALTERNATIVE REGRET BASED ON SEQUENCE-WISE PSG
In light of the failure of the naive regret, we need to modify the discrepancy metric in our setting. Recall that the single-objective regret can be interpreted as the gap between the actual cumulative loss ∑_{t=1}^T f_t(x_t) and its optimal value min_{x∈X} ∑_{t=1}^T f_t(x). In analogy, we can measure the gap between ∑_{t=1}^T F_t(x_t) and the Pareto front P∗ = PX(∑_{t=1}^T F_t). However, vanilla PSG is a pointwise metric, i.e., it can only measure the suboptimality of a decision point. To evaluate the decision sequence {x_t}_{t=1}^T, we modify its definition and propose a sequence-wise variant of PSG. Definition 4 (Sequence-wise PSG). For any decision sequence {x_t}_{t=1}^T, the sequence-wise PSG (S-PSG) to a given comparator set2 X∗ w.r.t. the loss sequence {F_t}_{t=1}^T is defined as
∆({x_t}_{t=1}^T; X∗, {F_t}_{t=1}^T) = inf_{ϵ≥0} ϵ, s.t. ∀x′′ ∈ X∗, ∃ i ∈ {1, . . . , m}, ∑_{t=1}^T f^i_t(x_t) − ϵ < ∑_{t=1}^T f^i_t(x′′).
Since X∗ is the Pareto set of ∑_{t=1}^T F_t, S-PSG measures the discrepancy from the cumulative loss of the decision sequence to the Pareto front P∗. Now the regret can be directly given as

R_II(T) := ∆({x_t}_{t=1}^T; X∗, {F_t}_{t=1}^T).
RII(T ) has a clear physical meaning that optimizing it will impose the cumulative loss to be close to the Pareto front P∗. However, since PSG (or S-PSG) is a zero-order metric motivated in a purely geometric sense, i.e., its calculation needs to solve a constrained optimization problem with an unknown boundary {Ft(x′′) | x′′ ∈ X ∗}, it is difficult to design a first-order algorithm to optimize PSG-based regrets, not to mention the analysis. To resolve this issue, we derive an equivalent form via highly non-trivial transformations, which is more intuitive than its original form. Proposition 1. The multi-objective regret RII(T ) based on S-PSG has an equivalent form, i.e.,
R_II(T) = max{ sup_{x∗∈X∗} inf_{λ∗∈S_m} ∑_{t=1}^T λ∗⊤(F_t(x_t) − F_t(x∗)), 0 }.
Remark. (i) The above form is closely related to the single-objective regret R(T). Specifically, when m = 1, we can prove that R_II(T) = max{∑_{t=1}^T F_t(x_t) − min_{x∗∈X∗} ∑_{t=1}^T F_t(x∗), 0} = max{R(T), 0}.
2It is equivalent to use either X∗ or X as the comparator set. See Appendix C for the detailed proof.
Algorithm 1 Doubly Regularized Online Mirror Multiple Descent (DR-OMMD)
1: Input: Convex set X, time horizon T, regularization parameter α_t, learning rate η_t, regularization function R, user preference λ_0.
2: Initialize: x_1 ∈ X.
3: for t = 1, . . . , T do
4:    Predict x_t and receive a loss function F_t : X → R^m.
5:    Compute the multiple gradients ∇F_t(x_t) = [∇f^1_t(x_t), . . . , ∇f^m_t(x_t)] ∈ R^{n×m}.
6:    Determine the weights for the gradient composition via min-regularized-norm:
         λ_t = argmin_{λ∈S_m} ∥∇F_t(x_t)λ∥_2^2 + α_t∥λ − λ_0∥_1.
7:    Compute the composite gradient g_t = ∇F_t(x_t)λ_t.
8:    Perform online mirror descent using g_t:
         x_{t+1} = argmin_{x∈X} η_t⟨g_t, x⟩ + B_R(x, x_t).
9: end for
Note that in the regret analysis, we are more interested in the case of R(T) ≥ 0 (where R_II(T) = R(T)), since when R(T) < 0, it is naturally bounded by any sublinear regret bound. Hence, R_II(T) is essentially aligned with R(T) in the single-objective setting. (ii) At first glance, R_II(T) can be optimized via linearization with fixed weights λ_0 ∈ S_m, or alternatively, by optimizing a single objective i ∈ {1, ..., m}. We remark that this is not a problem of our regret definition, but an intrinsic requirement of Pareto optimality. Specifically, Pareto optimality characterizes the status where no objective can be improved without hurting others. Hence merely optimizing a single objective naturally achieves Pareto optimality. Please refer to Proposition 8 in (Emmerich & Deutz, 2018) for the rigorous proof. As a general performance metric, our regret should incorporate this special case. Later, we will design a novel algorithm based on the concept of common descent, which outperforms linearization in both theory and experiment.
4 DOUBLY REGULARIZED ONLINE MIRROR MULTIPLE DESCENT
In this section, we present the Doubly Regularized Online Mirror Multiple Descent (DR-OMMD) algorithm, the protocol of which is given in Algorithm 1. At each round t, the learner first computes the gradient of the loss regarding each objective, then determines the composite weights of all these gradients, and finally applies the composite gradient to the online mirror descent step.
4.1 VANILLA MIN-NORM MAY INCUR LINEAR REGRETS
The core module of DR-OMMD is the composition of gradients. For simplicity, denote the gradients at round t in a matrix form ∇Ft(xt) = [∇f1t (xt), . . . ,∇fmt (xt)] ∈ Rn×m. Then the composite gradient is gt = ∇Ft(xt)λt, where λt is the composite weights. As illustrated in the preliminary, in the offline setting, the min-norm method (Désidéri, 2012; Sener & Koltun, 2018) is a classic method to determine the composite weights, which produces a common descent direction that can descend all the losses simultaneously. Thus, it is tempting to consider applying it to the online setting.
However, directly applying min-norm to the online setting is not workable, which may even incur linear regrets. In vanilla min-norm, the composite weights λt are determined solely by the gradients ∇Ft(xt) at the current round t, which are very sensitive to the instantaneous loss Ft. In the online setting, the losses at each round can be adversarially chosen, and thus the corresponding gradients can be adversarial. These adversarial gradients may result in undesired composite weights, which may further produce a composite gradient that even deteriorates the next prediction. In the following, we provide an example in which min-norm incurs a linear regret. We extend OMD (Hazan et al., 2016) to the multi-objective setting, where the composite weights are directly yielded by min-norm.
Problem instance. We consider a two-objective problem. The decision domain is X = {(u, v) | u + v ≤ 1/2, v − u ≤ 1/2, v ≥ 0} and the loss function at each round is
Ft(x) = { (∥x− a∥2, ∥x− b∥2), t = 2k − 1, k = 1, 2, ...; (∥x− b∥2, ∥x− c∥2), t = 2k, k = 1, 2, ...,
where a = (−2, −1), b = (0, 1), c = (2, −1). For simplicity, we first analyze the case where the total time horizon T is an even number. Then we can compute the Pareto set of the cumulative loss ∑_{t=1}^T F_t, i.e., X∗ = {(u, 0) | −1/2 ≤ u ≤ 1/2}, which lies on the x-axis. For conciseness of analysis, we instantiate OMD with L2-regularization, which results in the simple OGD algorithm (McMahan, 2011). We start at an arbitrary point x_1 = (u_1, v_1) ∈ X satisfying v_1 > 0. At each round t, suppose the decision is x_t = (u_t, v_t); then the gradient of each objective w.r.t. x_t takes
g1t = { (2ut + 4, 2vt + 2), t = 2k − 1; (2ut, 2vt − 2), t = 2k.
g2t = { (2ut, 2vt − 2), t = 2k − 1; (2ut − 4, 2vt + 2), t = 2k.
Since 0 ≤ v_t ≤ 1/2, we observe that the second entry of either gradient alternates between positive and negative. By using min-norm, the composite weights λ_t can be computed as
λt = { ((1− ut − vt)/4, (3 + ut + vt)/4), t = 2k − 1; ((3− ut + vt)/4, (1 + ut − vt)/4), t = 2k.
We observe that both entries of the composite weights alternate between above 1/2 and below 1/2, and ∥λ_{t+1} − λ_t∥_1 ≥ 1. Recall that ∥λ_t∥_1 = 1, hence the composite weights at two consecutive rounds change radically. The resulting composite gradient takes
gcompt = { (ut − vt + 1, −ut + vt − 1), t = 2k − 1; (−ut − vt − 1, −ut − vt − 1), t = 2k.
The fluctuating composite weights mix with the positive and negative second entries of gradients, making the second entry of gcompt always negative, i.e., −ut + vt − 1 < 0 and −ut − vt − 1 < 0. Hence gcompt always drives xt away from the Pareto set X ∗ that coincides with the x-axis. This essentially reversely optimizes the loss, hence increasing the regret. In fact, we can prove that it even incurs a linear regret. Due to the lack of space, we leave the proof of linear regret when T is an odd number in Appendix H. The above results of the problem instance are summarized as follows.
Proposition 2. For OMD equipped with vanilla min-norm, there exists a multi-objective online convex optimization problem, in which the resulting algorithm incurs a linear regret.
Remark. Stability is a basic requirement to ensure meaningful regrets in online learning (McMahan, 2017). In the single-objective setting, directly regularizing the iterate xt (e.g., OMD) is enough. However, as shown in the above analysis, merely regularizing xt is not enough to attain sublinear regrets in the multi-objective setting, since there is another source of instability, i.e., the composite weights, that affects the direction of composite gradients. Therefore, in multi-objective online learning, besides regularizing the iterates, we also need to explicitly regularize the composite weights.
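The geometry behind this failure can be checked numerically. The short script below (ours, purely illustrative) sweeps a grid of feasible decisions and verifies that, for both the odd-round and the even-round losses of the above instance, the min-norm composite gradient always has a negative second component, so every OGD update pushes the iterate away from the Pareto set on the x-axis.

```python
import numpy as np

def min_norm_gamma(g1, g2):
    """Vanilla min-norm weight of g1 for the two-gradient case."""
    diff = g2 - g1
    d = float(diff @ diff)
    return 0.5 if d == 0.0 else min(max(float(g2 @ diff) / d, 0.0), 1.0)

a, b, c = np.array([-2.0, -1.0]), np.array([0.0, 1.0]), np.array([2.0, -1.0])
round_pairs = [(a, b), (b, c)]                        # odd-round and even-round objective centers
for u in np.linspace(-0.5, 0.5, 21):
    for v in np.linspace(0.0, 0.5 - abs(u), 11):
        x = np.array([u, v])
        for p, q in round_pairs:
            g1, g2 = 2 * (x - p), 2 * (x - q)         # gradients of ||x-p||^2 and ||x-q||^2
            gamma = min_norm_gamma(g1, g2)
            g_comp = gamma * g1 + (1.0 - gamma) * g2
            assert g_comp[1] < 0.0                    # the OGD step increases v_t, away from v = 0
print("min-norm composite gradient points away from the Pareto set at every grid point")
```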
4.2 THE ALGORITHM
Enlightened by the design of regularization in FTRL (McMahan, 2017), we consider the regularizer r(λ,λ0), where λ0 is the pre-defined composite weights that may reflect the user preference. This results in a new solver called min-regularized-norm, i.e.,
λ_t = argmin_{λ∈S_m} ∥∇F_t(x_t)λ∥_2^2 + α_t r(λ, λ_0),
where αt is the regularization strength. Equipping OMD with the new solver, we derive the proposed algorithm. Note that beyond the regularization on the iterate xt that is intrinsic in online learning, there is another regularization on the composite weights λt in min-regularized-norm. Both regularizations are fundamental, and they together ensure stability in the multi-objective online setting. Hence we call the algorithm Doubly Regularized Online Mirror Multiple Descent (DR-OMMD).
In principle, r can take various forms such as the L1-norm, the L2-norm, etc. Here we adopt the L1-norm since it aligns well with the simplex constraint on λ. Min-regularized-norm can be computed very efficiently. When m = 2, it has a closed-form solution. Specifically, suppose the gradients at round t are g^1_t and g^2_t. Set γ_L = ((g^2_t)^⊤(g^2_t − g^1_t) − α_t)/∥g^2_t − g^1_t∥_2^2 and γ_R = ((g^2_t)^⊤(g^2_t − g^1_t) + α_t)/∥g^2_t − g^1_t∥_2^2. Given any λ_0 = (γ_0, 1 − γ_0) ∈ S_2, we can compute the composite weights λ_t as (γ_t, 1 − γ_t) where

γ_t = max{min{γ′′_t, 1}, 0}, where γ′′_t = max{min{γ_0, γ_R}, γ_L}.
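The following tiny sketch (ours) evaluates this closed form and illustrates the role of α_t: as α_t → 0 the weight approaches the vanilla min-norm solution, while a large α_t pins it to the preference γ_0; the example gradients are arbitrary assumptions.

```python
import numpy as np

def gamma_min_reg_norm(g1, g2, gamma0, alpha):
    diff = g2 - g1
    d = float(diff @ diff)
    gamma_L = (float(g2 @ diff) - alpha) / d
    gamma_R = (float(g2 @ diff) + alpha) / d
    return min(max(max(min(gamma0, gamma_R), gamma_L), 0.0), 1.0)

g1, g2, gamma0 = np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.8   # arbitrary example inputs
for alpha in [0.0, 0.1, 0.5, 2.0, 10.0]:
    print(alpha, gamma_min_reg_norm(g1, g2, gamma0, alpha))
# alpha -> 0 recovers the min-norm weight 0.5; a large alpha pins gamma_t to gamma_0 = 0.8.
```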
When m > 2, since the constraint Sm is a simplex, we can introduce a Frank-Wolfe solver (Jaggi, 2013) (see detailed protocol in Appendix E.1). We also discuss the L2-norm case in Appendix E.2.
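Putting the pieces together, here is a minimal NumPy sketch (ours, not the authors' implementation) of the full DR-OMMD loop of Algorithm 1 for m = 2, instantiating the mirror step with the Euclidean regularizer (so line 8 becomes projected gradient descent onto an L2 ball) and the closed-form weight rule above; the toy quadratic losses, decision set, and step-size/regularization schedules are illustrative assumptions.

```python
import numpy as np

def min_reg_norm_2obj(g1, g2, gamma0, alpha):
    """Closed-form min-regularized-norm weights for m = 2 (Section 4.2)."""
    diff = g2 - g1
    denom = float(diff @ diff)
    if denom == 0.0:
        return gamma0, 1.0 - gamma0                   # gradients agree: keep the preference
    gamma_L = (float(g2 @ diff) - alpha) / denom
    gamma_R = (float(g2 @ diff) + alpha) / denom
    gamma = min(max(max(min(gamma0, gamma_R), gamma_L), 0.0), 1.0)
    return gamma, 1.0 - gamma

def project_l2_ball(x, radius):
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

rng = np.random.default_rng(0)
T, radius, gamma0 = 200, 5.0, 0.5                     # preference lambda_0 = (0.5, 0.5)
x = np.zeros(2)
for t in range(1, T + 1):
    a, b = rng.normal(size=2), rng.normal(size=2)     # targets defining the two toy losses
    g1, g2 = 2 * (x - a), 2 * (x - b)                 # gradients of ||x - a||^2 and ||x - b||^2
    eta, alpha = 0.5 / np.sqrt(t), 4.0 / np.sqrt(t)   # illustrative schedules
    lam1, lam2 = min_reg_norm_2obj(g1, g2, gamma0, alpha)
    g = lam1 * g1 + lam2 * g2                         # composite gradient (line 7)
    x = project_l2_ball(x - eta * g, radius)          # Euclidean mirror step (line 8)
```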
Compared to vanilla min-norm, the composite weights in min-regularized-norm are not fully determined by the adversarial gradients. The resulting relative stability of composite weights makes the composite gradients more robust to the adversarial environment. In the following, we give a general analysis and prove that DR-OMMD indeed guarantees sublinear regrets.
4.3 THEORETICAL ANALYSIS
Our analysis is based on two conventional assumptions (Jadbabaie et al., 2015; Hazan et al., 2016). Assumption 1. The regularization function R is 1-strongly convex. In addition, the Bregman divergence is γ-Lipschitz continuous, i.e., BR(x, z)−BR(y, z) ≤ γ∥x−y∥,∀x,y, z ∈ domR, where domR is the domain of R and satisfies X ⊂ domR ⊂ Rn. Assumption 2. There exists some finite G > 0 such that for each i ∈ {1, . . . ,m}, the i-th loss f it at each round t ∈ {1, . . . , T} is differentiable and G-Lipschitz continuous w.r.t. ∥ · ∥2, i.e., |f it (x)− f it (x′)| ≤ G∥x− x′∥2. Note that in the convex setting, this assumption leads to bounded gradients, i.e., ∥∇f it (x)∥2 ≤ G for any t ∈ {1, . . . , T}, i ∈ {1, . . . ,m},x ∈ X . Theorem 1. Suppose the diameter of X is D. Assume Ft is bounded, i.e., |f it (x)| ≤ F,∀x ∈ X , t ∈ {1, . . . , T}, i ∈ {1, . . . ,m}. For any λ0 ∈ Sm, DR-OMMD attains
R_II(T) ≤ γD/η_T + ∑_{t=1}^T (η_t/2)(∥∇F_t(x_t)λ_t∥_2^2 + (4F/η_t)∥λ_t − λ_0∥_1).
Remark. When η_t = √(2γD)/(G√T) or η_t = √(2γD)/(G√t), and α_t = 4F/η_t, the bound attains O(√T). It matches the optimal single-objective bound w.r.t. T (Hazan et al., 2016) and is tight w.r.t. m (justified in Appendix F.2).
Comparison with linearization. Linearization with fixed weights λ_0 ∈ S_m essentially optimizes the scalar loss λ_0^⊤F_t with gradient g_t = ∇F_t(x_t)λ_0. From OMD's tight bound (Theorem 6.8 in (Orabona, 2019)), we can derive a bound γD/η_T + ∑_{t=1}^T (η_t/2)∥∇F_t(x_t)λ_0∥_2^2 for linearization. In comparison, when α_t = 4F/η_t, DR-OMMD attains a regret bound γD/η_T + ∑_{t=1}^T (η_t/2) min_{λ∈S_m}{∥∇F_t(x_t)λ∥_2^2 + α_t∥λ − λ_0∥_1}, which is smaller than that of linearization. Note that although the bound of linearization refers to the single-objective regret R(T), the comparison is reasonable due to the consistency of the two regret metrics, i.e., R_II(T) = max{R(T), 0} when m = 1, as proved in Proposition 1. In the following, we further investigate the margin in the two-objective setting with linear losses. Suppose the loss functions are f^1_t(x) = x^⊤g^1_t and f^2_t(x) = x^⊤g^2_t for some vectors g^1_t, g^2_t ∈ R^n at each round. Then we can show that the margin is at least (see Appendix F.3 for the detailed proof)
M ≥ ∑_{t=1}^T (η_t/4) ∥λ_t − λ_0∥_2^2 · ∥g^1_t − g^2_t∥_2^2,
which indicates the benefit of DR-OMMD. Specifically, while linearization requires an adequate λ_0, DR-OMMD selects a more proper λ_t adaptively; the advantage is more obvious as the gradients of different objectives vary wildly. This matches our intuition that linearization suffers from conflicting gradients (Yu et al., 2020), while DR-OMMD can alleviate the conflict by pursuing common descent.
5 EXPERIMENTS
In this section, we conduct experiments to compare DR-OMMD with two baselines: (i) linearization performs single-objective online learning on scalar losses λ⊤0 Ft with pre-defined fixed λ0 ∈ Sm; (ii) min-norm equips OMD with vanilla min-norm (Désidéri, 2012) for gradient composition.
5.1 CONVEX EXPERIMENTS: ADAPTIVE REGULARIZATION
Many real-world online scenarios adopt regularization to avoid overfitting. A standard scheme is to add a term r(x) to the loss ft(x) at each round and optimize the regularized loss ft(x) + σr(x) (McMahan, 2011), where σ is a pre-defined fixed hyperparameter. The formalism of multi-objective online learning provides a novel way of regularization. As r(x) measures model complexity, it can
[Figure 1 plots: (a) Effect of Preference — average loss vs. the value of λ_0^1, comparing linearization and DR-OMMD; (b) Learning Curve — average loss vs. the number of rounds, comparing lin-opt and DR-OMMD.]
Figure 1: Results to verify the effectiveness of adaptive regularization on protein. (a) Performance of DR-OMMD and linearization under varying λ_0 = (λ_0^1, 1 − λ_0^1). (b) Performance using the optimal weights λ_0 = (0.1, 0.9).
[Figure 2 plots: (a) Task L and (b) Task R — average loss vs. the number of rounds, comparing DR-OMMD, min-norm, and linearization with weights (.25, .75), (.5, .5), and (.75, .25).]
Figure 2: Results to verify the effectiveness of DR-OMMD in the non-convex setting. The two plots show the performance of DR-OMMD and various baselines on both tasks (Task L and Task R) of MultiMNIST.
be regarded as the second objective alongside the primary goal f_t(x). We can augment the loss to F_t(x) = (f_t(x), r(x)) and thereby cast regularized online learning into a two-objective problem. Compared to the standard scheme, our approach chooses σ_t = λ_t^2/λ_t^1 in an adaptive way.
We use two large-scale online benchmark datasets. (i) protein is a bioinformatics dataset for protein type classification (Wang, 2002), which has 17 thousand instances with 357 features. (ii) covtype is a biological dataset collected from a non-stationary environment for forest cover type prediction (Blackard & Dean, 1999), which has 50 thousand instances with 54 features. We set the logistic classification loss as the first objective, and the squared L2-norm of model parameters as the second objective. Since the ultimate goal of regularization is to lift predictive performance, we measure the average loss, i.e., ∑ t≤T lt(xt)/T , where lt(xt) is the classification loss at round t.
We adopt an L2-norm ball centered at the origin with diameter K = 100 as the decision set. The learning rates are decided by a grid search over {0.1, 0.2, . . . , 3.0}. For DR-OMMD, the parameter α_t is simply set as 0.1. For fixed regularization, the strength σ = (1 − λ_0^1)/λ_0^1 is determined by some λ_0^1 ∈ [0, 1], which is exactly linearization with weights λ_0 = (λ_0^1, 1 − λ_0^1). We run both algorithms with varying λ_0^1 ∈ {0, 0.1, ..., 1}. In Figure 1, we plot (a) their final performance w.r.t. the choice of λ_0 and (b) their learning curves with desirable λ_0 (e.g., (0.1, 0.9) on protein). Other results are deferred to the appendix due to the lack of space. The results show that DR-OMMD consistently outperforms fixed regularization; the gap becomes more significant when λ_0 is not properly set.
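To illustrate how the adaptive-regularization experiment can be set up, here is a schematic NumPy sketch (ours, not the code used for the reported results): at each round the logistic loss and the squared L2-norm act as the two objectives, the closed-form two-objective min-regularized-norm yields the weights, and the implied adaptive strength σ_t = λ_t^2/λ_t^1 can be read off; the synthetic data stream, dimensions, and step sizes are placeholders.

```python
import numpy as np

def logistic_loss_grad(w, x, y):                       # y in {-1, +1}
    z = y * float(w @ x)
    loss = np.logaddexp(0.0, -z)                       # log(1 + exp(-z)), numerically stable
    s = np.exp(-np.logaddexp(0.0, z))                  # sigmoid(-z)
    return loss, -s * y * x

def two_obj_weights(g1, g2, gamma0, alpha):            # closed-form min-regularized-norm, m = 2
    diff = g2 - g1
    d = float(diff @ diff)
    if d == 0.0:
        return gamma0, 1.0 - gamma0
    gL, gR = (float(g2 @ diff) - alpha) / d, (float(g2 @ diff) + alpha) / d
    g = min(max(max(min(gamma0, gR), gL), 0.0), 1.0)
    return g, 1.0 - g

rng = np.random.default_rng(0)
dim, T, radius = 20, 1000, 50.0                        # ball of diameter K = 100
w, gamma0 = np.zeros(dim), 0.1                         # preference lambda_0 = (0.1, 0.9)
for t in range(1, T + 1):
    x = rng.normal(size=dim)                           # placeholder for a streaming feature vector
    y = 1.0 if rng.random() < 0.5 else -1.0            # placeholder label
    loss1, g1 = logistic_loss_grad(w, x, y)            # objective 1: classification loss
    g2 = 2.0 * w                                       # objective 2: gradient of ||w||_2^2
    lam1, lam2 = two_obj_weights(g1, g2, gamma0, alpha=0.1)
    sigma_t = lam2 / max(lam1, 1e-12)                  # implied adaptive regularization strength
    w = w - (1.0 / np.sqrt(t)) * (lam1 * g1 + lam2 * g2)
    norm = np.linalg.norm(w)
    if norm > radius:                                  # project back onto the decision set
        w *= radius / norm
```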
5.2 NON-CONVEX EXPERIMENTS: DEEP MULTI-TASK LEARNING
We use MultiMNIST (Sabour et al., 2017), which is a multi-task version of the MNIST dataset for image classification and commonly used in deep multi-task learning (Sener & Koltun, 2018; Lin et al., 2019). In MultiMNIST, each sample is composed of a random digit image from MNIST at the top-left and another image at the bottom-right. The goal is to classify the digit at the top-left (task L) and that at the bottom-right (task R) at the same time.
We follow (Sener & Koltun, 2018)’s setup with LeNet. Learning rates in all methods are selected via grid search over {0.0001, 0.001, 0.01, 0.1}. For linearization, we examine different weights (0.25, 0.75), (0.5, 0.5), and (0.75, 0.25). For DR-OMMD, αt is set according to Theorem 1, and the initial weights are simply set as λ0 = (0.5, 0.5). Note that in the online setting, samples arrive in a sequential manner, which is different from offline experiments where sample batches are randomly sampled from the training set. Figure 2 compares the average cumulative loss of all the examined methods. We also measure two conventional metrics in offline experiments, i.e., the training loss and test loss (Reddi et al., 2018); the results are similar and deferred to the appendix due to the lack of space. The results show that DR-OMMD outperforms counterpart algorithms using min-norm or linearization in all metrics on both tasks, validating its effectiveness in the non-convex setting.
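For the non-convex setting, the sketch below (ours, in PyTorch) shows one way to apply a DR-OMMD-style composite step to a two-head network: per-task gradients of the shared trunk determine the composite weight, and the weighted loss is then back-propagated; the tiny trunk, hyperparameters, and helper names are illustrative stand-ins for the LeNet setup used in the paper.

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):                           # stand-in for the LeNet trunk + two heads
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
        self.head_l = nn.Linear(64, 10)                # task L: top-left digit
        self.head_r = nn.Linear(64, 10)                # task R: bottom-right digit
    def forward(self, x):
        h = self.trunk(x)
        return self.head_l(h), self.head_r(h)

def flat_shared_grad(loss, shared_params):
    gs = torch.autograd.grad(loss, shared_params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in gs])

def min_reg_norm_gamma(g1, g2, gamma0, alpha):
    diff = g2 - g1
    d = float(torch.dot(diff, diff))
    if d == 0.0:
        return gamma0
    gL = (float(torch.dot(g2, diff)) - alpha) / d
    gR = (float(torch.dot(g2, diff)) + alpha) / d
    return min(max(max(min(gamma0, gR), gL), 0.0), 1.0)

model, criterion = TwoHeadNet(), nn.CrossEntropyLoss()
shared = list(model.trunk.parameters())
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def dr_ommd_step(images, labels_l, labels_r, gamma0=0.5, alpha=0.1):
    out_l, out_r = model(images)
    loss_l, loss_r = criterion(out_l, labels_l), criterion(out_r, labels_r)
    g1 = flat_shared_grad(loss_l, shared)              # per-task gradients of the shared trunk
    g2 = flat_shared_grad(loss_r, shared)
    gamma = min_reg_norm_gamma(g1, g2, gamma0, alpha)
    optimizer.zero_grad()
    (gamma * loss_l + (1.0 - gamma) * loss_r).backward()   # composite update
    optimizer.step()

# Example call on a dummy batch (the images would come from the MultiMNIST stream).
dr_ommd_step(torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)), torch.randint(0, 10, (8,)))
```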
6 CONCLUSIONS
In this paper, we give a systematic study of multi-objective online learning, encompassing a novel framework, a new algorithm, and corresponding non-trivial theoretical analysis. We believe that this work paves the way for future research on more advanced multi-objective optimization algorithms, which may inspire the design of new optimizers for multi-task deep learning.
ACKNOWLEDGMENTS
This work was supported in part by the National Key Research and Development Program of China No. 2020AAA0106300 and National Natural Science Foundation of China No. 62250008. This work was also supported by Ant Group through Ant Research Intern Program. We would like to thank Wenliang Zhong, Jinjie Gu, Guannan Zhang and Jiaxin Liu for generous support on this project.
APPENDIX
The appendix is organized as follows. Appendix A reviews related work. Appendix B validates the correctness of our definition of PSG. Appendix C discusses the domain of the comparator in S-PSG, indicating that it makes no difference whether the comparator is selected from the Pareto optimal set or from the whole domain. Appendix D provides the detailed derivation of the equivalent form of RII(T ). Appendix E discusses how to efficiently compute the composition weights for the minregularized-norm solver. Appendix F discusses the order of DR-OMMD’s regret bound with fixed or adaptive learning rate, shows the tightness of the derived bound, and provides more details on the regret comparison between DR-OMMD and linearization. Appendix G supplements more details in the experimental setup and empirical results. Appendix H and I provide detailed proofs of the remaining theoretical claims in the main paper. Finally, Appendix J supplements regret analysis of DR-OMMD in the strongly convex setting.
A RELATED WORK
In this section, we review previous work in some related fields, i.e., online learning, multi-objective optimization, multi-objective multi-armed bandits, and multi-objective Bayesian optimization.
A.1 ONLINE LEARNING
Online learning aims to make sequential predictions for streaming data. Please refer to the introductory books (Hazan et al., 2016; Orabona, 2019) for more background knowledge.
Most of the previous works on online learning are conducted in the single-objective setting. To the best of our knowledge, there are only two lines of work concerning multi-objective online learning. The first line of works provides a multi-objective perspective on the prediction-with-expert-advice (PEA) problem (Koolen, 2013; Koolen & Van Erven, 2015). Specifically, they view each individual expert as a multi-objective criterion, and characterize the Pareto optimal trade-offs among different experts. These works have two main distinctions from our proposed MO-OCO. First, they are still built upon the original PEA problem where the payoff of each expert (or decision) is a scalar, while we focus on vector-valued payoffs. Second, their framework is restricted to an absolute loss game, whereas our framework is general and can be applied to any coordinate-wise convex loss functions.
The second line of work studies online learning with vectoral payoffs via Blackwell approachability (Blackwell, 1956; Mannor et al., 2014; Abernethy et al., 2011). In their framework, the learner is given a target set T ⊂ Rm and its goal is to generate decisions {xt}Tt=1 to minimize the distance between the average loss ∑T t=1 lt(xt)/T and the target set T . There are two major differences between Blackwell approachability and our proposed MO-OCO: previous works on Blackwell approachability are zero-order methods and the target set T is often known beforehand (also see the discussion in (Busa-Fekete et al., 2017)), while in MO-OCO we intend to develop a first-order method to reach the unknown Pareto front.
A.2 MULTI-OBJECTIVE OPTIMIZATION
Multi-objective optimization aims to optimize multiple objectives concurrently. Most of the previous works on multi-objective optimization are conducted in the offline setting, including the batch optimization setting (Désidéri, 2012; Liu et al., 2021) and the stochastic optimization setting (Sener & Koltun, 2018; Lin et al., 2019; Yu et al., 2020; Chen et al., 2020; Javaloy & Valera, 2021). These methods are based on gradient composition, and have shown very promising results in multi-task learning applications.
Despite the existence of previous works on multi-objective optimization, as the first work on multi-objective optimization in the OCO setting, our work is largely different from them in three aspects. First, we contribute the first formal framework of multi-objective online convex optimization. In particular, our framework is based on a novel equivalent transformation of the PSG metric, which is intrinsically different from previous offline optimization frameworks. Second, we provide a showcase in which a commonly used method in the offline setting, namely min-norm (Désidéri, 2012; Sener & Koltun, 2018), fails to attain sublinear regret in the online setting. Our proposed min-regularized-norm
is a novel design when tailoring offline methods to the online setting. Third, the regret analysis of multi-objective online learning is intrinsically different from the convergence analysis in the offline setting (Yu et al., 2020).
A.3 MULTI-OBJECTIVE MULTI-ARMED BANDITS
Another branch of related works study multi-objective optimization in the multi-armed bandits setting (Busa-Fekete et al., 2017; Tekin & Turğay, 2018; Turgay et al., 2018; Lu et al., 2019a; Degenne et al., 2019). Among these works, the most relevant one to ours is (Turgay et al., 2018), which introduces the Pareto suboptimality gap (PSG) metric to characterize the multi-objective regret in the bandits setting, and proposes a zero-order zooming algorithm to minimize the regret.
In this work, our regret definition also utilizes the PSG metric (Turgay et al., 2018). However, as the first study of multi-objective optimization in the OCO setting, our work is intrinsically different from these previous works in the following aspects. First, as PSG is a zero-order metric, we perform a novel equivalent transformation, making it amenable to the OCO setting. Second, our proposed algorithm is a first-order multiple gradient algorithm, whose design principles are completely distinct from zero-order algorithms. For example, the concept of the stability of composite weights does not even exist in the design of previous zero-order methods for multi-objective bandits (Turgay et al., 2018; Lu et al., 2019a). Third, the regret analysis of MO-OCO is intrinsically different from that in the bandits setting.
A.4 MULTI-OBJECTIVE BAYESIAN OPTIMIZATION
The final area related to our work is multi-objective Bayesian optimization (Zhang & Golovin, 2020; Konakovic Lukovic et al., 2020; Chowdhury & Gopalan, 2021; Maddox et al., 2021; Daulton et al., 2022), which studies Bayesian optimization with vector-valued feedback. There are two branches of works in this area, using different notions of regret. The first branch is based on scalarization, which adopts the expectation of the gap between scalarized losses over some given distribution (Chowdhury & Gopalan, 2021) as the regret. In this approach, the distribution of scalarization can be understood as a set of preference, which needs to be known beforehand. The second branch is based on Pareto optimality (Zhang & Golovin, 2020), which uses hypervolume as the discrepancy metric and adopt the gap between the true Pareto front and the estimated Pareto front as the regret.
As the first work on multi-objective optimization in the OCO setting, our work is largely different from these works in the following aspects. First, the regret definitions are different. Specifically, compared to the first branch based on scalarization, our regret definition is purely motivated by Pareto optimality, which does not need any preference in advance; compared to the second branch using hypervolume, we note that hypervolume is mainly used for Pareto front approximation, which is unsuitable to our adversarial setting where the goal is to impose the cumulative loss to reach the Pareto front. Second, multi-objective Bayesian optimization is conducted in a stochastic setting, which typically assumes that the losses follow some Gaussian distribution, whereas our work is conducted in the adversarial setting where the losses can be generated arbitrarily.
B AN EQUIVALENT DEFINITION OF PSG
Recall that in Definition 3, we formulate the PSG metric as a constrained optimization problem. We note that, since the PSG metric is based on the notion of “non-dominance” (Turgay et al., 2018), its most direct form is actually
∆′(x;K∗, F ) = inf ϵ≥0 ϵ,
s.t. ∀x′′ ∈ K∗,∃i ∈ {1, . . . ,m}, f i(x)− ϵ < f i(x′′) or ∀i ∈ {1, . . . ,m}, f i(x)− ϵ = f i(x′′).
At the first glance, the above definition seems to be quite different from Definition 3, since it has an extra condition “∀i ∈ {1, . . . ,m}, f i(x) − ϵ = f i(x′′)”. In the following, we prove that both definitions actually yield the same value due to the infimum operation on ϵ.
Specifically, for any possible pair (x,K∗, F ), we denote ∆′(x;K∗, F ) = ϵ′0 and ∆(x;K∗, F ) = ϵ0. By comparing the constraints of both definitions, it is obvious that ϵ0 must satisfy the constraint
of ∆′(x;K∗, F ), hence the infimum operation guarantees that ϵ′0 ≤ ϵ0. It remains to prove that ϵ′0 ≥ ϵ0. To this end, we only need to show that ϵ′0 + ξ satisfies the constraint of ∆(x;K∗, F ) for any ξ > 0. Consider an arbitrary x′′ ∈ K∗. From the definition of ∆′(x;K∗, F ), we know that either ∃i ∈ {1, . . . ,m}, f i(x) − ϵ′0 < f i(x′′) or ∀i ∈ {1, . . . ,m}, f i(x) − ϵ′0 = f i(x′′). Whichever condition holds, we must have ∃i ∈ {1, . . . ,m}, f i(x)−ϵ′0−ξ < f i(x′′) for any ξ > 0. Since it holds for any x′′ ∈ K∗, ϵ′0 + ξ lies in the feasible region of ∆(x;K∗, F ), hence we have ϵ0 ≤ ϵ′0 + ξ,∀ξ > 0 and thus ϵ0 ≤ ϵ′0. In summary, we have ∆′(x;K∗, F ) = ∆(x;K∗, F ) for any pair (x,K∗, F ).
C DISCUSSION ON THE DOMAIN OF THE COMPARATOR IN S-PSG
Recall that in Definition 4, the comparator x′ in S-PSG is selected from the Pareto optimal set X ∗ of the cumulative loss ∑T t=1 Ft. This actually stems from the original definition of PSG (Turgay et al., 2018), which uses the Pareto optimal set as the comparator set. In fact, comparing with Pareto optimal decisions in X ∗ is already enough to measure the suboptimality of any decision sequence {xt}Tt=1. The reason is that, for any non-optimal decision x′ ∈ X − X ∗, there must exist some Pareto optimal decision x′′ ∈ X ∗ that dominates x′, hence the suboptimality metric does not need to compare with this non-optimal decision x′. In other words, even if we extend the comparator set in S-PSG to the whole domain X , the modified form will be equivalent to the original form based on the Pareto optimal set X ∗. In the following, we strictly prove this equivalence ∆({xt}Tt=1;X , {Ft}Tt=1) = ∆({xt}Tt=1;X ∗, {Ft}Tt=1). Specifically, we modify the definition of S-PSG and let the comparator domain X ′ be any subset of the decision domain X , i.e.,
∆({x_t}_{t=1}^T; X′, {F_t}_{t=1}^T) = inf_{ϵ≥0} ϵ, s.t. ∀x′′ ∈ X′, ∃ i ∈ {1, . . . , m}, ∑_{t=1}^T f^i_t(x_t) − ϵ < ∑_{t=1}^T f^i_t(x′′).
Then the modified regret based on the whole domain X takes R′II(T ) = ∆({xt}Tt=1;X , {Ft}Tt=1). Now we begin to prove the equivalence ∆({xt}Tt=1;X , {Ft}Tt=1) = ∆({xt}Tt=1;X ∗, {Ft}Tt=1). For any X ′ ⊂ X , let E(X ′) denote the constraint of ∆({xt}Tt=1;X ′, {Ft}Tt=1), i.e.,
E(X′) = {ϵ ≥ 0 | ∀x′′ ∈ X′, ∃ i ∈ {1, . . . , m}, ∑_{t=1}^T f^i_t(x_t) − ϵ < ∑_{t=1}^T f^i_t(x′′)},
then ∆({x_t}_{t=1}^T; X′, {F_t}_{t=1}^T) = inf E(X′). Hence, we just need to prove inf E(X) = inf E(X∗). On the one hand, since X∗ ⊂ X, from the above definition of S-PSG, it is easy to check that for any ϵ ∈ E(X), it must satisfy ϵ ∈ E(X∗). Hence, we have E(X) ⊂ E(X∗). On the other hand, given any ϵ ∈ E(X∗), we now check that ϵ ∈ E(X). To this end, we consider an arbitrary point x′′ ∈ X in two cases. (i) If x′′ ∈ X∗, since ϵ ∈ E(X∗), we naturally have ∑_{t=1}^T f^{i_0}_t(x_t) − ϵ < ∑_{t=1}^T f^{i_0}_t(x′′) for some i_0. (ii) If x′′ ∉ X∗, since X∗ is the Pareto optimal set of ∑_{t=1}^T F_t, there must exist some Pareto optimal decision x̂ ∈ X∗ that dominates x′′ w.r.t. ∑_{t=1}^T F_t, which means that ∑_{t=1}^T f^i_t(x̂) ≤ ∑_{t=1}^T f^i_t(x′′) for all i ∈ {1, ..., m}. Notice that ϵ ∈ E(X∗) gives ∑_{t=1}^T f^{i_0}_t(x_t) − ϵ < ∑_{t=1}^T f^{i_0}_t(x̂) for some i_0, hence in this case we also have ∑_{t=1}^T f^{i_0}_t(x_t) − ϵ < ∑_{t=1}^T f^{i_0}_t(x′′). Combining the above two cases, we prove that ϵ ∈ E(X), and consequently E(X∗) ⊂ E(X). In summary, we have E(X) = E(X∗), hence ∆({x_t}_{t=1}^T; X, {F_t}_{t=1}^T) = inf E(X) = inf E(X∗) = ∆({x_t}_{t=1}^T; X∗, {F_t}_{t=1}^T). Therefore, it makes no difference whether the comparator in R_II(T) is generated from the Pareto optimal set X∗ or from the whole domain X.
D DERIVATION OF THE EQUIVALENT MULTI-OBJECTIVE REGRET FORM
In this section, We strictly derive the equivalent form of RII(T ) in Proposition 1, which is highly non-trivial and forms the basis of the subsequent algorithm design and theoretical analysis.
Proof of Proposition 1. Recall that the PSG metric used in RII(T ) is an extension of vanilla PSG to leverage any decision sequence. To motivate the analysis, we first investigate vanilla PSG ∆(x;X ∗, F ) that deals with a single decision x, and derive a useful lemma as follows. Lemma 1. Vanilla PSG has an equivalent form, i.e.,
∆(x; X∗, F) = sup_{x∗∈X∗} inf_{λ∈S_m} λ^⊤(F(x) − F(x∗))_+,
where for any vector l = (l_1, ..., l_m) ∈ R^m, the truncation (l)_+ produces a vector whose i-th entry equals max{l_i, 0} for all i ∈ {1, ..., m}.
Proof. In the definition of PSG, the evaluated decision x is compared to all Pareto optimal points x′ ∈ X ∗. For any fixed comparator x′ ∈ X ∗, we define the pair-wise suboptimality gap w.r.t. F between decisions x and x′ as follows
δ(x;x′, F ) = inf ϵ≥0 {ϵ | F (x)− ϵ1 ⊁ F (x′)}.
Hence, PSG can be expressed as
∆(x;X ∗, F ) = sup x′∈X∗ δ(x;x′, F ).
To proceed, we analyze the pair-wise gap δ(x;x′, F ). From its definition, we know that δ(x;x′, F ) measures the minimal non-negative value that needs to be subtracted from each entry of F (x) until it is not dominated by x′. Now we consider two cases.
(i) If F (x) ⊁ F (x′), i.e., fk0(x) ≤ fk0(x′) for some k0 ∈ {1, ...,m}, nothing needs to be subtracted from F (x) and we directly have δ(x;x′, F ) = 0.
(ii) If F(x) ≻ F(x′), we have f^k(x) ≥ f^k(x′) for all k ∈ {1, ..., m}, which obviously violates the condition F(x) − ϵ1 ⊁ F(x′) when ϵ = 0. Now let us gradually increase ϵ from zero. Notice that such a condition holds only when there exists some k_0 satisfying f^{k_0}(x) − ϵ ≤ f^{k_0}(x′), or equivalently ϵ ≥ f^{k_0}(x) − f^{k_0}(x′). Hence, in this case, we have δ(x; x′, F) = min_{k∈{1,...,m}}{f^k(x) − f^k(x′)}. Combining the above two cases, we derive an equivalent form of the pair-wise suboptimality gap. Specifically, we can easily check that the following form holds for both cases, i.e.,
δ(x;x′, F ) = min k∈{1,...,m} max{fk(x)− fk(x′), 0}.
To relate the above form with F, denote U_m = {e_k | 1 ≤ k ≤ m} as the set of all unit vectors in R^m; then we equivalently have
δ(x;x′, F ) = min λ∈Um λ⊤(F (x)− F (x′))+.
Now the calculation of δ(x; x′, F) is transformed into a minimization problem over λ ∈ U_m. Since U_m is a discrete set, we can apply a linear relaxation trick. Specifically, we now turn to minimize the scalar p(λ) = λ^⊤ max{F(x) − F(x′), 0} over the convex hull of U_m, which is exactly the probability simplex S_m = {λ ∈ R^m | λ ⪰ 0, ∥λ∥_1 = 1}. Note that U_m contains all the vertices of S_m. Since inf_{λ∈S_m} p(λ) is a linear optimization problem, the minimal point λ∗ must be a vertex of the simplex, i.e., λ∗ ∈ U_m. Hence, the relaxed problem is equivalent to the original problem, namely,
δ(x;x′, F ) = min λ∈Um λ⊤(F (x)− F (x′))+ = inf λ∈Sm λ⊤(F (x)− F (x′))+.
Taking the supremum of both sides over x′ ∈ X ∗, we prove the lemma. ■
The above lemma can be naturally extended to the sequence-wise variant S-PSG. Specifically, we can extend the pair-wise suboptimality gap δ(x;x′, F ) to measure any decision sequence, which now becomes
δ({x_t}_{t=1}^T; x′, {F_t}_{t=1}^T) = inf_{ϵ≥0} {ϵ | ∑_{t=1}^T F_t(x_t) − ϵ1 ⊁ ∑_{t=1}^T F_t(x′)}.
Then S-PSG can be expressed as
∆({xt}Tt=1;X ∗, {Ft}Tt=1) = sup x∗∈X∗ δ({xt}Tt=1;x∗, {Ft}Tt=1).
Similar to the derivation of the above lemma, by investigating the relation between ∑_{t=1}^T F_t(x_t) and ∑_{t=1}^T F_t(x′), we can derive an equivalent form of δ({x_t}_{t=1}^T; x′, {F_t}_{t=1}^T) as
δ({x_t}_{t=1}^T; x′, {F_t}_{t=1}^T) = min_{k∈{1,...,m}} max{∑_{t=1}^T f^k_t(x_t) − ∑_{t=1}^T f^k_t(x′), 0},

and further

δ({x_t}_{t=1}^T; x′, {F_t}_{t=1}^T) = inf_{λ∈S_m} λ^⊤(∑_{t=1}^T F_t(x_t) − ∑_{t=1}^T F_t(x′))_+.
Hence, the S-PSG-based regret form can be expressed as
R_II(T) = sup_{x∗∈X∗} inf_{λ∈S_m} λ^⊤(∑_{t=1}^T F_t(x_t) − ∑_{t=1}^T F_t(x∗))_+.
The max-min form of RII(T ) has a truncation operation (·)+, which brings irregularity to the regret form. To handle the truncation operation, we utilize the following lemma:
Lemma 2. (a) For any l ∈ Rm, we have infλ∈Sm λ⊤(l)+ = max{infλ∈Sm λ⊤l, 0}. (b) For any h : X → R, we have supx∈X max{h(x), 0} = max{supx∈X h(x), 0}.
Proof. To prove the first statement, we consider the following two cases. (i) If l ≻ 0, then (l)_+ = l. For any λ ∈ S_m, we have λ^⊤(l)_+ = λ^⊤l > 0. Taking the infimum over λ ∈ S_m on both sides, we have inf_{λ∈S_m} λ^⊤(l)_+ = inf_{λ∈S_m} λ^⊤l ≥ 0. Moreover, from the last equation we have max{inf_{λ∈S_m} λ^⊤l, 0} = inf_{λ∈S_m} λ^⊤l, which proves the statement in this case. (ii) If l ⊁ 0, then l_i ≤ 0 for some i ∈ {1, ..., m}. Set e_i as the i-th unit vector in R^m; then we have e_i^⊤l ≤ 0. On the one hand, since e_i ∈ S_m, we have inf_{λ∈S_m} λ^⊤l ≤ e_i^⊤l ≤ 0, and further max{inf_{λ∈S_m} λ^⊤l, 0} = 0. On the other hand, notice that e_i^⊤(l)_+ = 0 and λ^⊤(l)_+ ≥ 0 for any λ ∈ S_m; then inf_{λ∈S_m} λ^⊤(l)_+ = e_i^⊤(l)_+ = 0. Hence, the statement also holds in this case. To prove the second statement, we also consider two cases. (i) If h(x_0) > 0 for some x_0 ∈ X, then sup_{x∈X} h(x) ≥ h(x_0) > 0, and max{sup_{x∈X} h(x), 0} = sup_{x∈X} h(x). Since we also have sup_{x∈X} max{h(x), 0} = sup_{x∈X} h(x), the statement holds in this case. (ii) If h(x) ≤ 0 for all x ∈ X, then sup_{x∈X} h(x) ≤ 0, and thus max{sup_{x∈X} h(x), 0} = 0. Meanwhile, for any x ∈ X, we have max{h(x), 0} = 0, so sup_{x∈X} max{h(x), 0} = 0, which validates the statement in this case.
■
From the above lemma, we directly have
R_II(T) = sup_{x∗∈X∗} max{ inf_{λ∈S_m} λ^⊤(∑_{t=1}^T F_t(x_t) − ∑_{t=1}^T F_t(x∗)), 0}
= max{ sup_{x∗∈X∗} inf_{λ∈S_m} λ^⊤(∑_{t=1}^T F_t(x_t) − ∑_{t=1}^T F_t(x∗)), 0},
which derives the desired equivalent form. ■
E CALCULATION OF MIN-REGULARIZED-NORM
In this section, we discuss how to efficiently calculate the solutions to min-regularized-norm with L1-norm and L2-norm.
Algorithm 2 Frank-Wolfe Solver for Min-Regularized-Norm with L1-Norm
1: Initialize: λ_t = (γ^1_t, . . . , γ^m_t) = (1/m, . . . , 1/m).
2: Compute the matrix U = ∇F_t(x_t)^⊤∇F_t(x_t), i.e., U_{ij} = ∇f^i_t(x_t)^⊤∇f^j_t(x_t), ∀i, j ∈ {1, . . . , m}.
3: repeat
4:    Select an index k ∈ argmax_{i∈{1,...,m}} {∑_{j=1}^m γ^j_t U_{ij} + α sgn(γ^i_t − γ^i_0)}.
5:    Compute δ ∈ argmin_{0≤δ≤1} ∥δ∇f^k_t(x_t) + (1 − δ)∇F_t(x_t)λ_t∥_2^2 + α∥δ(e_k − λ_t) + λ_t − λ_0∥_1.
6:    Update λ_t = (1 − δ)λ_t + δe_k.
7: until δ ∼ 0 or Number of Iterations Limit
8: return λ_t.
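A compact NumPy sketch of such a solver is given below (ours, approximate): it follows the Frank-Wolfe template by picking the vertex that minimizes the linearization of the regularized-norm objective and replaces the exact line search of step 5 with a coarse grid search, which is a simplification of Algorithm 2 rather than a faithful transcription.

```python
import numpy as np

def min_reg_norm_fw(grads, lam0, alpha, iters=50):
    """Approximate solver for argmin_{lam in simplex} ||G @ lam||_2^2 + alpha * ||lam - lam0||_1,
    where grads is the n x m matrix G whose columns are the per-objective gradients."""
    n, m = grads.shape
    U = grads.T @ grads                                    # m x m Gram matrix
    lam = np.full(m, 1.0 / m)
    objective = lambda l: float(l @ U @ l) + alpha * float(np.abs(l - lam0).sum())
    for _ in range(iters):
        lin = 2.0 * U @ lam + alpha * np.sign(lam - lam0)  # (sub)gradient of the objective at lam
        k = int(np.argmin(lin))                            # Frank-Wolfe vertex of the simplex
        e_k = np.zeros(m)
        e_k[k] = 1.0
        deltas = np.linspace(0.0, 1.0, 101)                # coarse substitute for the exact line search
        vals = [objective(lam + d * (e_k - lam)) for d in deltas]
        d_best = float(deltas[int(np.argmin(vals))])
        if d_best == 0.0:
            break
        lam = lam + d_best * (e_k - lam)
    return lam

# Example: three objectives in five dimensions with a uniform preference lam0.
rng = np.random.default_rng(0)
G = rng.normal(size=(5, 3))
print(min_reg_norm_fw(G, np.full(3, 1.0 / 3.0), alpha=0.1))
```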
E.1 L1-NORM
Similar to (Sener & Koltun, 2018), we first consider the setting of two objectives, namely m = 2. In this case, for any λ = (γ, 1 − γ), λ_0 = (γ_0, 1 − γ_0) ∈ S_2, the L1-regularization ∥λ − λ_0∥_1 equals 2|γ − γ_0|. Hence min-regularized-norm with the L1-norm at round t reduces to λ_t = (γ_t, 1 − γ_t) where
γt ∈ argmin 0≤γ≤1 ∥γg1 + (1− γ)g2∥22 + 2α|γ − γ0|.
Interestingly, the above problem has a closed-form solution.
Proposition 3. Set γL = (g⊤2 (g2−g1)−α)/∥g2−g1∥22, and γR = (g⊤2 (g2−g1)+α)/∥g2−g1∥22. Then min-regularized-norm with L1-norm produces weights λt = (γt, 1− γt) where
γt = max{min{γ′′t , 1}, 0}, where γ′′t = max{min{γ0, γR}, γL}.
Proof. We solve the following two quadratic sub-problems, i.e.,
min_{0≤γ≤γ_0} h_1(γ) = ∥γg_1 + (1 − γ)g_2∥_2^2 + 2α(γ_0 − γ),

as well as

min_{γ_0≤γ≤1} h_2(γ) = ∥γg_1 + (1 − γ)g_2∥_2^2 + 2α(γ − γ_0).
It can be checked that in the former sub-problem, h1 monotonously decreases on (−∞, γR] and increases on [γR,+∞); in the latter sub-problem, h2 monotonously decreases on (−∞, γL] and increases on [γL,+∞). Since each sub-problem has its constraint ([0, γ0] or [γ0, 1]), the solution to the original optimization problem can then be derived by comparing the optimal values of the two sub-problems with their constraints. Specifically, notice that γL ≤ γR and 0 ≤ γ0 ≤ 1, and we can consider the following three cases.
(i) When 0 ≤ γ0 ≤ γL ≤ γR, then h1 monotonously decreases on [0, γ0] and its minimum on [0, γ0] is h1(γ0). Notice that h1(γ0) = h2(γ0). For the sub-problem of h2, we further consider two situations: (i-a) If γL ≤ 1, then γL ∈ [γ0, 1], hence the minimum of h2 on [γ0, 1] is h2(γL). Since h2(γL) ≤ h2(γ0) = h1(γ0), the minimal point of the original problem is γL, and hence γt = γL. (i-b) If γL > 1, then h2 monotonously decreases on [γ0, 1], and we surely have h2(1) ≤ h2(γ0) = h1(γ0). Hence γt = 1 in this situation. Combining the above two situations, we have γt = min{γL, 1} in this case. (ii) When γL ≤ γR ≤ γ0 ≤ 1, then h2 monotonously increases on [γ0, 1] and its minimum on [γ0, 1] is h2(γ0). Notice that h1(γ0) = h2(γ0). For the sub-problem of h1, similar to the first case, we also consider two situations: (ii-a) If γR ≥ 0, then γR ∈ [0, γ0], hence the minimum of h1 on [0, γ0] is h1(γR). Since h1(γR) ≤ h1(γ0) = h2(γ0), the minimal point of the original problem is γR, and hence γt = γR. (ii-b) If γR < 0, then h1 monotonously increases on [0, γ0]. Hence we have h1(0) ≤ h1(γ0) = h2(γ0). Hence the solution to the original problem γt = 0. Combining the above two situations, we have γt = max{γR, 0} in this case.
Algorithm 3 Frank-Wolfe Solver for Min-Regularized-Norm with L2-Norm
1: Initialize: λ_t = (γ^1_t, . . . , γ^m_t) = (1/m, . . . , 1/m).
2: Compute the matrix U = ∇F_t(x_t)^⊤∇F_t(x_t), i.e., U_{ij} = ∇f^i_t(x_t)^⊤∇f^j_t(x_t), ∀i, j ∈ {1, . . . , m}.
3: repeat
4:    Select an index k ∈ argmax_{i∈{1,...,m}} {∑_{j=1}^m γ^j_t U_{ij} + α(γ^i_t − γ^i_0)}.
5:    Compute δ ∈ argmin_{0≤δ≤1} ∥δ∇f^k_t(x_t) + (1 − δ)∇F_t(x_t)λ_t∥_2^2 + α∥δ(e_k − λ_t) + λ_t − λ_0∥_2^2, which has an analytical form
      δ = max{min{ [(∇F_t(x_t)λ_t − ∇f^k_t(x_t))^⊤∇F_t(x_t)λ_t − α(e_k − λ_t)^⊤(λ_t − λ_0)] / [∥∇F_t(x_t)λ_t − ∇f^k_t(x_t)∥_2^2 + α∥e_k − λ_t∥_2^2], 1}, 0}.
6:    Update λ_t = (1 − δ)λ_t + δe_k.
7: until δ ∼ 0 or Number of Iterations Limit
8: return λ_t.
(iii) When γL < γ0 < γR, then h1 monotonously decreases on [0, γ0] and h2 monotonously increases on [γ0, 1]. Hence each sub-problem attains its minimum at γ0, and thus γt = γ0.
Summarizing the above three cases gives
γ_t =
  min{γ_L, 1},  if γ_0 ≤ γ_L;
  max{γ_R, 0},  if γ_0 ≥ γ_R;
  γ_0,          otherwise.
We can further rewrite the above formula into a compact form as follows, which can be checked case-by-case.
γ_t = max{min{γ′′_t, 1}, 0}, where γ′′_t = max{min{γ_0, γ_R}, γ_L}. This gives the closed-form solution of min-regularized-norm when m = 2. ■
Now that we have derived the closed-form solution to the min-regularized-norm | 1. What is the focus of the paper regarding multi-objective online learning?
2. What are the strengths of the proposed approach, particularly in tackling the challenges of multi-objective optimization?
3. Do you have any concerns or questions about the method's effectiveness in addressing the issue?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies the problem of multi-objective online learning. In the classic online convex optimization problem, there is a single objective function, and the goal is to find the best action that leads to the best value of the objective function in a sequential decision-making setting. This paper extends the problem to a case where there are multiple objective functions. In this setting, then, the notion of best action needs further elaboration to define formally. The reason is that it is possible to have different best actions for different terms of the objective function. In this paper, the notion of Pareto optimality is used to define the Pareto optimal action. In addition, regret in multi-objective settings needs to be carefully redefined.
Then, after problem formulation, the paper develops an algorithm based on Doubly Regularized Online Mirror Multiple Descent, and the regret of the algorithm is analyzed, and it is shown that it matches the regret in a single objective case.
Strengths And Weaknesses
Strengths
The paper is very well written and well organized.
The results are solid, and the authors provide sufficient insights on the issues with alternative solutions.
There are numerical results that support theoretical results.
Weaknesses
The paper sounds like a nice re-execution of the multi-objective bandits in the full information feedback and online convex optimization. That said, I believe the authors did a great job clarifying the differences and unique challenges in this paper. Hence, I still believe that the contribution of this paper is valuable.
Is it possible to derive a problem-specific regret lower bound that captures the unique structure of the multi-objective property of the problem? It seems that in the paper, the only remark on the optimality of regret shows that with a proper parameter setting, the result matches the single-objective case. However, it is unclear what is the definition of optimality of regret in the multi-objective setting.
Clarity, Quality, Novelty And Reproducibility
The paper is very clear, and the quality of the results and presentation is great. The novelty might be somehow limited since it mainly borrows ideas from similar problems in prior literature. |
ICLR | Title
Multi-Objective Online Learning
Abstract
This paper presents a systematic study of multi-objective online learning. We first formulate the framework of Multi-Objective Online Convex Optimization, which encompasses a novel multi-objective regret. This regret is built upon a sequencewise extension of the commonly used discrepancy metric Pareto suboptimality gap in zero-order multi-objective bandits. We then derive an equivalent form of the regret, making it amenable to be optimized via first-order iterative methods. To motivate the algorithm design, we give an explicit example in which equipping OMD with the vanilla min-norm solver for gradient composition will incur a linear regret, which shows that merely regularizing the iterates, as in single-objective online learning, is not enough to guarantee sublinear regrets in the multi-objective setting. To resolve this issue, we propose a novel min-regularized-norm solver that regularizes the composite weights. Combining min-regularized-norm with OMD results in the Doubly Regularized Online Mirror Multiple Descent algorithm. We further derive the multi-objective regret bound for the proposed algorithm, which matches the optimal bound in the single-objective setting. Extensive experiments on several real-world datasets verify the effectiveness of the proposed algorithm.
1 INTRODUCTION
Traditional optimization methods for machine learning are usually designed to optimize a single objective. However, in many real-world applications, we are often required to optimize multiple correlated objectives concurrently. For example, in autonomous driving (Huang et al., 2019; Lu et al., 2019b), self-driving vehicles need to solve multiple tasks such as self-localization and object identification at the same time. In online advertising (Ma et al., 2018a;b), advertising systems need to decide on the exposure of items to different users to maximize both the Click-Through Rate (CTR) and the Post-Click Conversion Rate (CVR). In most multi-objective scenarios, the objectives may conflict with each other (Kendall et al., 2018). Hence, there may not exist any single solution that can optimize all the objectives simultaneously. For example, merely optimizing CTR or CVR will degrade the performance of the other (Ma et al., 2018a;b).
Multi-objective optimization (MOO) (Marler & Arora, 2004; Deb, 2014) is concerned with optimizing multiple conflicting objectives simultaneously. It seeks Pareto optimality, where no single objective can be improved without hurting the performance of others. Many different methods for MOO have been proposed, including evolutionary methods (Murata et al., 1995; Zitzler & Thiele, 1999), scalarization methods (Fliege & Svaiter, 2000), and gradient-based iterative methods (Désidéri, 2012). Recently, the Multiple Gradient Descent Algorithm (MGDA) and its variants have been introduced to the training of multi-task deep neural networks and achieved great empirical success (Sener & Koltun, 2018), making them regain a significant amount of research interest (Lin et al., 2019; Yu et al., 2020; Liu et al., 2021). These methods compute a composite gradient based on
∗Equal contributions. †Corresponding author.
the gradient information of all the individual objectives and then apply the composite gradient to update the model parameters. The composite weights are determined by a min-norm solver (Désidéri, 2012) which yields a common descent direction of all the objectives.
However, compared to their increasingly wide application prospects, the gradient-based iterative algorithms are relatively understudied, especially in the online learning setting. Multi-objective online learning is of essential importance for two reasons. First, due to the data explosion in many real-world scenarios such as web applications, making in-time predictions requires performing online learning. Second, the theoretical investigation of multi-objective online learning will lay a solid foundation for the design of new optimizers for multi-task deep learning. This is analogous to the single-objective setting, where nearly all the optimizers for training DNNs are initially analyzed in the online setting, such as AdaGrad (Duchi et al., 2011), Adam (Kingma & Ba, 2015), and AMSGrad (Reddi et al., 2018).
In this paper, we give a systematic study of multi-objective online learning. To begin with, we formulate the framework of Multi-Objective Online Convex Optimization (MO-OCO). One major challenge in deriving MO-OCO is the lack of a proper regret definition. In the multi-objective setting, in general, no single decision can optimize all the objectives simultaneously. Thus, to devise the multi-objective regret, we need to first extend the single fixed comparator used in the singleobjective regret, i.e., the fixed optimal decision, to the entire Pareto optimal set. Then we need an appropriate discrepancy metric to evaluate the gap between vector-valued losses. Intuitively, the Pareto suboptimality gap (PSG) metric, which is frequently used in zero-order multi-objective bandits (Turgay et al., 2018; Lu et al., 2019a), is a very promising candidate. PSG can yield scalarized measurements from any vector-valued loss to a given comparator set. However, we find that vanilla PSG is unsuitable for our setting since it always yields non-negative values and may be too loose. In a concrete example, we show that the naive PSG-based regret RI(T ) can even be linear w.r.t. T when the decisions are already optimal, which disqualifies it as a regret metric. To overcome the failure of vanilla PSG, we propose its sequence-wise variant termed S-PSG, which measures the suboptimality of the whole decision sequence to the Pareto optimal set of the cumulative loss function. Optimizing the resulting regret RII(T ) will drive the cumulative loss to approach the Pareto front. However, as a zero-order metric motivated geometrically, designing appropriate first-order algorithms to directly optimize it is too difficult. To resolve the issue, we derive a more intuitive equivalent form of RII(T ) via a highly non-trivial transformation.
Based on the MO-OCO framework, we develop a novel multi-objective online algorithm termed Doubly Regularized Online Mirror Multiple Descent. The key module of the algorithm is the gradient composition scheme, which calculates a composite gradient in the form of a convex combination of the gradients of all objectives. Intuitively, the most direct way to determine the composite weights is to apply the min-norm solver (Désidéri, 2012) commonly used in offline multi-objective optimization. However, directly applying min-norm is not workable in the online setting. Specifically, the composite weights in min-norm are merely determined by the gradients at the current round. In the online setting, since the gradients are adversarial, they may result in undesired composite weights, which further produce a composite gradient that reversely optimizes the loss. To rigorously verify this point, we give an example where equipping OMD with vanilla min-norm incurs a linear regret, showing that only regularizing the iterate, as in OMD, is not enough to guarantee sublinear regrets in our setting. To fix the issue, we devise a novel min-regularized-norm solver with an explicit regularization on composite weights. Equipping it with OMD results in our proposed algorithm. In theory, we derive a regret bound of O( √ T ) for DR-OMMD, which matches the optimal bound in the single-objective setting (Hazan et al., 2016) and is tight w.r.t. the number of objectives. Our analysis also shows that DR-OMMD attains a smaller regret bound than that of linearization with fixed composite weights. We show that, in the two-objective setting with linear losses, the margin between the regret bounds depends on the difference between the composite weights yielded by the two algorithms and the difference between the gradients of the two underlying objectives.
To evaluate the effectiveness of DR-OMMD, we conduct extensive experiments on several largescale real-world datasets. We first realize adaptive regularization via multi-objective optimization, and find that adaptive regularization with DR-OMMD significantly outperforms fixed regularization with linearization, which verifies the effectiveness of DR-OMMD over linearization in the convex setting. Then we apply DR-OMMD to deep online multi-task learning. The results show that DROMMD is also effective in the non-convex setting.
2 PRELIMINARIES
In this section, we briefly review the necessary background knowledge of two related fields.
2.1 MULTI-OBJECTIVE OPTIMIZATION
Multi-objective optimization (MOO) is concerned with solving the problems of optimizing multiple objectives simultaneously (Fliege & Svaiter, 2000; Deb, 2014). In general, since different objectives may conflict with each other, there is no single solution that can optimize all the objectives at the same time, hence the conventional concept of optimality used in the single-objective setting is no longer suitable. Instead, MOO seeks to achieve Pareto optimality. In the following, we give the relevant definitions more formally. We use a vector-valued loss F = (f^1, . . . , f^m) to denote the objectives, where m ≥ 2 and f^i : X → R, i ∈ {1, . . . , m}, X ⊂ R^n, is the i-th loss function. Definition 1 (Pareto optimality). (a) For any two solutions x, x′ ∈ X, we say that x dominates x′, denoted as x ≺ x′ or x′ ≻ x, if f^i(x) ≤ f^i(x′) for all i, and there exists one i such that f^i(x) < f^i(x′); otherwise, we say that x does not dominate x′, denoted as x ⊀ x′ or x′ ⊁ x. (b) A solution x∗ ∈ X is called Pareto optimal if it is not dominated by any other solution in X.
Note that there may exist multiple Pareto optimal solutions. For example, it is easy to show that the optimizer of any single objective, i.e., x∗i ∈ argminx∈X f i(x), i ∈ {1, . . . ,m}, is Pareto optimal. Different Pareto optimal solutions reflect different trade-offs among the objectives (Lin et al., 2019). Definition 2 (Pareto front). (a) All Pareto optimal solutions form the Pareto set PX (F ). (b) The image of PX (F ) constitutes the Pareto front, denoted as P(H) = {F (x) | x ∈ PX (F )}.
Now that we have established the notion of optimality in MOO, we proceed to introduce the metrics that measure the discrepancy of an arbitrary solution x ∈ X from being optimal. Recall that, in the single-objective setting with merely one loss function f : Z → R, for any z ∈ Z, the loss difference f(z) − min_{z′′∈Z} f(z′′) is directly qualified for the discrepancy measure. However, in MOO with more than one loss, for any x ∈ X, the loss difference F(x) − F(x′′), where x′′ ∈ P_X(F), is a vector. Intuitionally, the desired discrepancy metric shall scalarize the vector-valued loss difference and yield 0 for any Pareto optimal solution. In general, in MOO, there are two commonly used discrepancy metrics, i.e., Pareto suboptimality gap (PSG) (Turgay et al., 2018) and Hypervolume (HV) (Bradstreet, 2011). As HV is a complex volume-based metric, it is more difficult to optimize via gradient-based algorithms (Zhang & Golovin, 2020). Hence in this paper, we adopt PSG, which has already been extensively used in multi-objective bandits (Turgay et al., 2018; Lu et al., 2019a).
Definition 3 (Pareto suboptimality gap¹). For any x ∈ X, the Pareto suboptimality gap to a given comparator set Z ⊂ X, denoted as ∆(x; Z, F), is defined as the minimal scalar ϵ ≥ 0 that needs to be subtracted from all entries of F(x), such that F(x) − ϵ1 is not dominated by any point in Z, where 1 denotes the all-one vector in R^m, i.e.,
∆(x; Z, F) = inf_{ϵ≥0} ϵ, s.t. ∀x′′ ∈ Z, ∃ i ∈ {1, . . . , m}, f^i(x) − ϵ < f^i(x′′).
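As a concrete illustration (our own sketch, not part of the original paper), the following snippet evaluates the PSG of a point against a finite comparator set using the equivalent pairwise form ∆(x; Z, F) = sup_{x′′∈Z} min_i max{f^i(x) − f^i(x′′), 0} derived later in Appendix D; the loss vectors below are hypothetical.

```python
# A minimal PSG evaluation sketch over a finite comparator set (our illustration).
import numpy as np

def pareto_suboptimality_gap(loss_x, comparator_losses):
    """loss_x: (m,) vector F(x); comparator_losses: (k, m) rows F(x'') for x'' in Z."""
    diffs = loss_x[None, :] - comparator_losses          # (k, m) entries f^i(x) - f^i(x'')
    pairwise_gaps = np.maximum(diffs, 0.0).min(axis=1)   # min over objectives per comparator
    return pairwise_gaps.max()                           # sup over the comparator set

# Example with two objectives: a Pareto optimal point has zero gap.
Z = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])       # losses of comparator points
print(pareto_suboptimality_gap(np.array([0.5, 0.5]), Z))  # 0.0
print(pareto_suboptimality_gap(np.array([1.0, 1.0]), Z))  # 0.5
```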
Clearly, PSG is a distance-based discrepancy metric motivated from a purely geometric viewpoint. In practice, the comparator set Z is often set to be the Pareto set X∗ = P_X(F) (Turgay et al., 2018); therein for any x ∈ X, its PSG is always non-negative and equals zero if and only if x ∈ P_X(F).
Multiple Gradient Descent Algorithm (MGDA) is an offline first-order MOO algorithm (Fliege & Svaiter, 2000; Désidéri, 2012). At each iteration l ∈ {1, . . . , L} (L is the number of iterations), it first computes the gradient ∇f^i(x_l) of each objective, then derives the composite gradient g_l^comp = ∑_{i=1}^m λ_l^i ∇f^i(x_l) as a convex combination of these gradients, and finally applies g_l^comp to execute a gradient descent step to update the decision, i.e., x_{l+1} = x_l − η g_l^comp (η is the step size). The core part of MGDA is the module that determines the composite weights λ_l = (λ_l^1, . . . , λ_l^m), given by
λ_l = argmin_{λ_l ∈ S_m} ∥∑_{i=1}^m λ_l^i ∇f^i(x_l)∥_2^2,
where S_m = {λ ∈ R^m | ∑_{i=1}^m λ^i = 1, λ^i ≥ 0, i ∈ {1, . . . , m}} is the probabilistic simplex in R^m. This is a min-norm solver, which finds the weights in the simplex that yield the minimum L2-norm of the composite gradient. Thus MGDA is also called the min-norm method. Previous works
1Our definition looks a bit different from (Turgay et al., 2018). In Appendix B, we show they are equivalent.
(Désidéri, 2012; Sener & Koltun, 2018) showed that when all f i are convex functions, MGDA is guaranteed to decrease all the objectives simultaneously until it reaches a Pareto optimal decision.
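For intuition, here is a minimal sketch (our own illustration, not the paper's implementation) of an MGDA step for two objectives, using the well-known closed-form two-gradient min-norm weight; the quadratic toy objectives are hypothetical.

```python
# A minimal two-objective MGDA sketch (our illustration).
import numpy as np

def min_norm_weights_2d(g1, g2):
    """Closed-form min-norm weight on g1: argmin_l ||l*g1 + (1-l)*g2||^2 over l in [0, 1]."""
    denom = float(np.dot(g1 - g2, g1 - g2))
    if denom == 0.0:
        return 0.5
    lam = float(np.dot(g2 - g1, g2)) / denom    # unconstrained minimizer
    return float(np.clip(lam, 0.0, 1.0))

def mgda_step(x, grad_f1, grad_f2, eta=0.1):
    g1, g2 = grad_f1(x), grad_f2(x)
    lam = min_norm_weights_2d(g1, g2)
    g_comp = lam * g1 + (1.0 - lam) * g2        # common descent direction
    return x - eta * g_comp

# Toy objectives f1(x) = ||x - a||^2, f2(x) = ||x - b||^2.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x = np.array([2.0, 2.0])
for _ in range(50):
    x = mgda_step(x, lambda x: 2 * (x - a), lambda x: 2 * (x - b))
print(x)  # approaches the segment between a and b (the Pareto set)
```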
2.2 ONLINE CONVEX OPTIMIZATION
Online Convex Optimization (OCO) (Zinkevich, 2003; Hazan et al., 2016) is the most commonly adopted framework for designing online learning algorithms. It can be viewed as a structured repeated game between a learner and an adversary. At each round t ∈ {1, . . . , T}, the learner is required to generate a decision xt from a convex compact set X ⊂ Rn. Then the adversary replies the learner with a convex function ft : X → R and the learner suffers the loss ft(xt). The goal of the learner is to minimize the regret with respect to the best fixed decision in hindsight, i.e.,
R(T) = ∑_{t=1}^T f_t(x_t) − min_{x∗∈X} ∑_{t=1}^T f_t(x∗).
A meaningful regret is required to be sublinear in T , i.e., limT→∞ R(T )/T = 0, which implies that when T is large enough, the learner can perform as well as the best fixed decision in hindsight.
Online Mirror Descent (OMD) (Hazan et al., 2016) is a classic first-order online learning algorithm. At each round t ∈ {1, . . . , T}, OMD yields its decision via
x_{t+1} = argmin_{x∈X} η⟨∇f_t(x_t), x⟩ + B_R(x, x_t),
where η is the step size, R : X → R is the regularization function, and BR(x,x′) = R(x)−R(x′)− ⟨∇R(x′),x − x′⟩ is the Bregman divergence induced by R. As a meta-algorithm, by instantiating different regularization functions, OMD can induce two important algorithms, i.e., Online Gradient Descent (Zinkevich, 2003) and Online Exponentiated Gradient (Hazan et al., 2016).
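As a minimal sketch (our illustration), instantiating OMD with the squared-L2 regularizer R(x) = ½∥x∥_2^2 on an L2 ball recovers projected online gradient descent; the ball radius and toy linear loss below are assumptions.

```python
# A minimal OMD sketch with R(x) = 1/2 ||x||_2^2, i.e., projected OGD (our illustration).
import numpy as np

def project_l2_ball(x, radius):
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def ogd_step(x, grad, eta, radius):
    # argmin_x  eta*<grad, x> + 1/2 ||x - x_t||_2^2  restricted to the ball
    return project_l2_ball(x - eta * grad, radius)

# One round on a toy linear loss f_t(x) = <g_t, x>.
x, g_t = np.zeros(3), np.array([1.0, -2.0, 0.5])
x = ogd_step(x, g_t, eta=0.1, radius=1.0)
print(x)
```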
3 MULTI-OBJECTIVE ONLINE CONVEX OPTIMIZATION
In this section, we formally formulate the MO-OCO framework.
Framework overview. Analogously to single-objective OCO, MO-OCO can be viewed as a repeated game between an online learner and the adversarial environment. The main difference is that in MO-OCO, the feedback is vector-valued. The general framework of MO-OCO is given as follows. At each round t ∈ {1, . . . , T}, the learner generates a decision x_t from a given convex compact decision set X ⊂ R^n. Then the adversary replies the decision with a vector-valued loss function F_t : X → R^m, whose i-th component f_t^i : X → R is a convex function corresponding to the i-th objective, and the learner suffers the vector-valued loss F_t(x_t). The goal of the learner is to generate a sequence of decisions {x_t}_{t=1}^T to minimize a certain kind of multi-objective regret.
The remaining work in framework formulation is to give an appropriate regret definition, which is the most challenging part. Recall that the single-objective regret R(T) = ∑_{t=1}^T f_t(x_t) − ∑_{t=1}^T f_t(x∗) is defined as the difference between the cumulative loss of the actual decisions {x_t}_{t=1}^T and that of the fixed optimal decision in hindsight x∗ ∈ argmin_{x∈X} ∑_{t=1}^T f_t(x). When defining the multi-objective analogue of R(T), we encounter two issues. First, in the multi-objective setting, no single decision can optimize all the objectives simultaneously in general, hence we cannot compare the cumulative loss with that of any single decision. Instead, we use the Pareto optimal set X∗ of the cumulative loss function ∑_{t=1}^T F_t, i.e., X∗ = P_X(∑_{t=1}^T F_t), which naturally aligns with the optimality concept in MOO. Second, to compare {x_t}_{t=1}^T and X∗ in the loss space, we need a discrepancy metric to measure the gap between vector losses. Intuitively, we can adopt the commonly used PSG metric (Turgay et al., 2018). But we find that vanilla PSG is not appropriate for OCO, which is largely different from the bandits setting. We explicate the reason in the following.
3.1 THE NAIVE REGRET BASED ON VANILLA PSG FAILS IN MO-OCO
By definition, at each round t, the difference between the decision xt and the Pareto optimal set can be evaluated by PSG ∆(xt;X ∗, Ft). Naturally, we can formulate the multi-objective regret by accumulating ∆(xt;X ∗, Ft) over all rounds, i.e.,
R_I(T) := ∑_{t=1}^T ∆(x_t; X∗, F_t).
Recall that the single-objective regret can also be expressed as R(T) = ∑_{t=1}^T (f_t(x_t) − f_t(x∗)). Hence, R_I(T) essentially extends the scalar discrepancy f_t(x_t) − f_t(x∗) to the PSG metric ∆(x_t; X∗, F_t). However, these two discrepancy metrics have a major difference, i.e., f_t(x_t) − f_t(x∗) can be negative, whereas ∆(x_t; X∗, F_t) is always non-negative. In previous bandits settings (Turgay et al., 2018), the discrepancy is intrinsically non-negative, since the comparator set is exactly the Pareto optimal set of the evaluated loss function. However, the non-negative property of PSG can be problematic in our setting, where the comparator set X∗ is the Pareto set of the cumulative loss function, rather than the instantaneous loss F_t that is used for evaluation. Specifically, at some round t, the decision x_t may Pareto dominate all points in X∗ w.r.t. F_t, which corresponds to the single-objective setting where it is possible that f_t(x_t) < f_t(x∗) at some specific round. In this case, we would expect the discrepancy metric at this round to be negative. However, PSG can only yield 0 in this case, making the regret much looser than we expect. In the following, we provide an example in which the naive regret R_I(T) is linear w.r.t. T even when the decisions x_t are already optimal.
Problem instance. Set X = [−2, 2]. Let the loss function be identical among all objectives, i.e., f_t^1(x) = ... = f_t^m(x), and alternate between x and −x. Suppose the time horizon T is an even number, then the Pareto optimal set X∗ = X. Now consider the decisions x_t = 1, t ∈ {1, ..., T}. In this case, it can easily be checked that the single-objective regret of each objective is zero, indicating that these decisions are optimal for each objective. To calculate R_I(T), notice that when all the objectives are identical, PSG reduces to ∆(x_t; X∗, f_t^1) = sup_{x∗∈X∗} max{f_t^1(x_t) − f_t^1(x∗), 0} at each round t. Hence, in this case we have R_I(T) = ∑_{1≤k≤T/2} (sup_{x∗∈[−2,2]} max{1 − x∗, 0} + sup_{x∗∈[−2,2]} max{x∗ − 1, 0}) = 3T, which is linear w.r.t. T. Therefore, R_I(T) is too loose to measure the suboptimality of decisions, which is unqualified as a regret metric.
3.2 THE ALTERNATIVE REGRET BASED ON SEQUENCE-WISE PSG
In light of the failure of the naive regret, we need to modify the discrepancy metric in our setting. Recall that the single-objective regret can be interpreted as the gap between the actual cumulative loss ∑_{t=1}^T f_t(x_t) and its optimal value min_{x∈X} ∑_{t=1}^T f_t(x). In analogy, we can measure the gap between ∑_{t=1}^T F_t(x_t) and the Pareto front P∗ = P_X(∑_{t=1}^T F_t). However, vanilla PSG is a pointwise metric, i.e., it can only measure the suboptimality of a decision point. To evaluate the decision sequence {x_t}_{t=1}^T, we modify its definition and propose a sequence-wise variant of PSG.
Definition 4 (Sequence-wise PSG). For any decision sequence {x_t}_{t=1}^T, the sequence-wise PSG (S-PSG) to a given comparator set² X∗ w.r.t. the loss sequence {F_t}_{t=1}^T is defined as
∆({x_t}_{t=1}^T; X∗, {F_t}_{t=1}^T) = inf_{ϵ≥0} ϵ, s.t. ∀x′′ ∈ X∗, ∃ i ∈ {1, . . . , m}, ∑_{t=1}^T f_t^i(x_t) − ϵ < ∑_{t=1}^T f_t^i(x′′).
Since X∗ is the Pareto set of ∑_{t=1}^T F_t, S-PSG measures the discrepancy from the cumulative loss of the decision sequence to the Pareto front P∗. Now the regret can be directly given as
RII(T ) := ∆({xt}Tt=1;X ∗, {Ft}Tt=1).
RII(T ) has a clear physical meaning that optimizing it will impose the cumulative loss to be close to the Pareto front P∗. However, since PSG (or S-PSG) is a zero-order metric motivated in a purely geometric sense, i.e., its calculation needs to solve a constrained optimization problem with an unknown boundary {Ft(x′′) | x′′ ∈ X ∗}, it is difficult to design a first-order algorithm to optimize PSG-based regrets, not to mention the analysis. To resolve this issue, we derive an equivalent form via highly non-trivial transformations, which is more intuitive than its original form. Proposition 1. The multi-objective regret RII(T ) based on S-PSG has an equivalent form, i.e.,
R_II(T) = max{ sup_{x∗∈X∗} inf_{λ∗∈S_m} ∑_{t=1}^T λ∗⊤(F_t(x_t) − F_t(x∗)), 0 }.
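Numerically, the equivalent form is easy to evaluate once the comparator set is approximated by a finite set of candidates (recall from Appendix C that using X or X∗ makes no difference): the inner infimum of a linear function over the simplex is attained at a vertex, i.e., it equals the minimum coordinate. The sketch below (our illustration; the loss arrays are hypothetical placeholders) computes R_II(T) in this way.

```python
# A minimal numerical sketch of the equivalent regret form in Proposition 1 (our illustration).
import numpy as np

def multi_objective_regret(decision_losses, comparator_losses):
    """decision_losses: (T, m) with F_t(x_t); comparator_losses: (k, T, m) with F_t per candidate x*."""
    cum_actual = decision_losses.sum(axis=0)                     # (m,) cumulative loss of the decisions
    cum_comparators = comparator_losses.sum(axis=1)              # (k, m) cumulative loss per candidate
    gaps = (cum_actual[None, :] - cum_comparators).min(axis=1)   # inner inf over the simplex = min coordinate
    return max(float(gaps.max()), 0.0)                           # outer sup over candidates, then truncation

T, m, k = 100, 2, 50
rng = np.random.default_rng(0)
print(multi_objective_regret(rng.random((T, m)), rng.random((k, T, m))))
```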
Remark. (i) The above form is closely related to the single-objective regret R(T). Specifically, when m = 1, we can prove that R_II(T) = max{∑_{t=1}^T F_t(x_t) − min_{x∗∈X∗} ∑_{t=1}^T F_t(x∗), 0} = max{R(T), 0}.
²It is equivalent to use either X∗ or X as the comparator set. See Appendix C for the detailed proof.
Algorithm 1 Doubly Regularized Online Mirror Multiple Descent (DR-OMMD)
1: Input: Convex set X, time horizon T, regularization parameter α_t, learning rate η_t, regularization function R, user preference λ_0.
2: Initialize: x_1 ∈ X.
3: for t = 1, . . . , T do
4:   Predict x_t and receive a loss function F_t : X → R^m.
5:   Compute the multiple gradients ∇F_t(x_t) = [∇f_t^1(x_t), . . . , ∇f_t^m(x_t)] ∈ R^{n×m}.
6:   Determine the weights for the gradient composition via min-regularized-norm
       λ_t = argmin_{λ∈S_m} ∥∇F_t(x_t)λ∥_2^2 + α_t∥λ − λ_0∥_1.
7:   Compute the composite gradient g_t = ∇F_t(x_t)λ_t.
8:   Perform online mirror descent using g_t
       x_{t+1} = argmin_{x∈X} η_t⟨g_t, x⟩ + B_R(x, x_t).
9: end for
Note that in the regret analysis, we are more interested in the case of R(T) ≥ 0 (where R_II(T) = R(T)), since when R(T) < 0, it is naturally bounded by any sublinear regret bound. Hence, R_II(T) is essentially aligned with R(T) in the single-objective setting. (ii) At first glance, R_II(T) can be optimized via linearization with fixed weights λ_0 ∈ S_m, or alternatively, by optimizing a single objective i ∈ {1, ..., m}. We remark that this is not a problem of our regret definition, but an intrinsic requirement of Pareto optimality. Specifically, Pareto optimality characterizes the status where no objective can be improved without hurting others. Hence merely optimizing a single objective naturally achieves Pareto optimality. Please refer to Proposition 8 in (Emmerich & Deutz, 2018) for the rigorous proof. As a general performance metric, our regret should incorporate this special case. Later, we will design a novel algorithm based on the concept of common descent, which outperforms linearization in both theory and experiment.
4 DOUBLY REGULARIZED ONLINE MIRROR MULTIPLE DESCENT
In this section, we present the Doubly Regularized Online Mirror Multiple Descent (DR-OMMD) algorithm, the protocol of which is given in Algorithm 1. At each round t, the learner first computes the gradient of the loss regarding each objective, then determines the composite weights of all these gradients, and finally applies the composite gradient to the online mirror descent step.
4.1 VANILLA MIN-NORM MAY INCUR LINEAR REGRETS
The core module of DR-OMMD is the composition of gradients. For simplicity, denote the gradients at round t in a matrix form ∇Ft(xt) = [∇f1t (xt), . . . ,∇fmt (xt)] ∈ Rn×m. Then the composite gradient is gt = ∇Ft(xt)λt, where λt is the composite weights. As illustrated in the preliminary, in the offline setting, the min-norm method (Désidéri, 2012; Sener & Koltun, 2018) is a classic method to determine the composite weights, which produces a common descent direction that can descend all the losses simultaneously. Thus, it is tempting to consider applying it to the online setting.
However, directly applying min-norm to the online setting is not workable, which may even incur linear regrets. In vanilla min-norm, the composite weights λt are determined solely by the gradients ∇Ft(xt) at the current round t, which are very sensitive to the instantaneous loss Ft. In the online setting, the losses at each round can be adversarially chosen, and thus the corresponding gradients can be adversarial. These adversarial gradients may result in undesired composite weights, which may further produce a composite gradient that even deteriorates the next prediction. In the following, we provide an example in which min-norm incurs a linear regret. We extend OMD (Hazan et al., 2016) to the multi-objective setting, where the composite weights are directly yielded by min-norm.
Problem instance. We consider a two-objective problem. The decision domain is X = {(u, v) | u + v ≤ 1/2, v − u ≤ 1/2, v ≥ 0} and the loss function at each round is
F_t(x) = (∥x − a∥^2, ∥x − b∥^2) for t = 2k − 1, and F_t(x) = (∥x − b∥^2, ∥x − c∥^2) for t = 2k, k = 1, 2, ...,
where a = (−2, −1), b = (0, 1), c = (2, −1). For simplicity, we first analyze the case where the total time horizon T is an even number. Then we can compute the Pareto set of the cumulative loss ∑_{t=1}^T F_t, i.e., X∗ = {(u, 0) | −1/2 ≤ u ≤ 1/2}, which locates at the x-axis. For conciseness of analysis, we instantiate OMD with L2-regularization, which results in the simple OGD algorithm (McMahan, 2011). We start at an arbitrary point x_1 = (u_1, v_1) ∈ X satisfying v_1 > 0. At each round t, suppose the decision x_t = (u_t, v_t), then the gradient of each objective w.r.t. x_t takes
g_t^1 = (2u_t + 4, 2v_t + 2) for t = 2k − 1, and g_t^1 = (2u_t, 2v_t − 2) for t = 2k;
g_t^2 = (2u_t, 2v_t − 2) for t = 2k − 1, and g_t^2 = (2u_t − 4, 2v_t + 2) for t = 2k.
Since 0 ≤ v_t ≤ 1/2, we observe that the second entry of either gradient alternates between positive and negative. By using min-norm, the composite weights λ_t can be computed as
λ_t = ((1 − u_t − v_t)/4, (3 + u_t + v_t)/4) for t = 2k − 1, and λ_t = ((3 − u_t + v_t)/4, (1 + u_t − v_t)/4) for t = 2k.
We observe that both entries of the composite weights alternate between above 1/2 and below 1/2, and ∥λ_{t+1} − λ_t∥_1 ≥ 1. Recall that ∥λ_t∥_1 = 1, hence the composite weights at two consecutive rounds change radically. The resulting composite gradient takes
g_t^comp = (u_t − v_t + 1, −u_t + v_t − 1) for t = 2k − 1, and g_t^comp = (−u_t − v_t − 1, −u_t − v_t − 1) for t = 2k.
The fluctuating composite weights mix with the positive and negative second entries of the gradients, making the second entry of g_t^comp always negative, i.e., −u_t + v_t − 1 < 0 and −u_t − v_t − 1 < 0. Hence g_t^comp always drives x_t away from the Pareto set X∗ that coincides with the x-axis. This essentially reversely optimizes the loss, hence increasing the regret. In fact, we can prove that it even incurs a linear regret. Due to the lack of space, we leave the proof of linear regret when T is an odd number in Appendix H. The above results of the problem instance are summarized as follows.
Proposition 2. For OMD equipped with vanilla min-norm, there exists a multi-objective online convex optimization problem, in which the resulting algorithm incurs a linear regret.
Remark. Stability is a basic requirement to ensure meaningful regrets in online learning (McMahan, 2017). In the single-objective setting, directly regularizing the iterate xt (e.g., OMD) is enough. However, as shown in the above analysis, merely regularizing xt is not enough to attain sublinear regrets in the multi-objective setting, since there is another source of instability, i.e., the composite weights, that affects the direction of composite gradients. Therefore, in multi-objective online learning, besides regularizing the iterates, we also need to explicitly regularize the composite weights.
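The instance above is also easy to reproduce numerically. The sketch below (our own illustration, not the paper's code) runs OGD with vanilla min-norm weights on the alternating losses and checks that the composite gradient's second entry stays negative, so the iterate's v-coordinate is pushed away from the Pareto set on the x-axis; the feasibility clip is a crude stand-in for an exact Euclidean projection onto X.

```python
# A small numerical check of the linear-regret instance with vanilla min-norm (our illustration).
import numpy as np

a, b, c = np.array([-2.0, -1.0]), np.array([0.0, 1.0]), np.array([2.0, -1.0])

def clip_to_domain(x):
    # keep (u, v) inside {v >= 0, u + v <= 1/2, v - u <= 1/2}; a crude feasibility clip,
    # not an exact Euclidean projection (illustration only)
    u, v = x
    v = float(np.clip(v, 0.0, 0.5))
    u = float(np.clip(u, v - 0.5, 0.5 - v))
    return np.array([u, v])

def min_norm_weight(g1, g2):
    # closed-form min-norm weight on g1 for two gradients
    denom = float(np.dot(g1 - g2, g1 - g2))
    lam = 0.5 if denom == 0.0 else float(np.dot(g2 - g1, g2)) / denom
    return min(max(lam, 0.0), 1.0)

x, eta = np.array([0.0, 0.25]), 0.01
always_pushed_up = True
for t in range(1, 401):
    p, q = (a, b) if t % 2 == 1 else (b, c)    # centers of the two squared-distance losses
    g1, g2 = 2 * (x - p), 2 * (x - q)
    lam = min_norm_weight(g1, g2)
    g_comp = lam * g1 + (1.0 - lam) * g2
    always_pushed_up &= bool(g_comp[1] < 0)    # a negative second entry pushes v_t up
    x = clip_to_domain(x - eta * g_comp)
print(always_pushed_up, x)   # True, and v stays bounded away from the Pareto set v = 0
```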
4.2 THE ALGORITHM
Enlightened by the design of regularization in FTRL (McMahan, 2017), we consider the regularizer r(λ,λ0), where λ0 is the pre-defined composite weights that may reflect the user preference. This results in a new solver called min-regularized-norm, i.e.,
λ_t = argmin_{λ∈S_m} ∥∇F_t(x_t)λ∥_2^2 + α_t r(λ, λ_0),
where αt is the regularization strength. Equipping OMD with the new solver, we derive the proposed algorithm. Note that beyond the regularization on the iterate xt that is intrinsic in online learning, there is another regularization on the composite weights λt in min-regularized-norm. Both regularizations are fundamental, and they together ensure stability in the multi-objective online setting. Hence we call the algorithm Doubly Regularized Online Mirror Multiple Descent (DR-OMMD).
In principle, r can take various forms such as the L1-norm, L2-norm, etc. Here we adopt the L1-norm since it aligns well with the simplex constraint on λ. Min-regularized-norm can be computed very efficiently. When m = 2, it has a closed-form solution. Specifically, suppose the gradients at round t are g_1 and g_2. Set γ_L = (g_2^⊤(g_2 − g_1) − α_t)/∥g_2 − g_1∥_2^2 and γ_R = (g_2^⊤(g_2 − g_1) + α_t)/∥g_2 − g_1∥_2^2. Given any λ_0 = (γ_0, 1 − γ_0) ∈ S_2, we can compute the composite weights λ_t as (γ_t, 1 − γ_t), where
γ_t = max{min{γ_t′′, 1}, 0}, where γ_t′′ = max{min{γ_0, γ_R}, γ_L}.
When m > 2, since the constraint Sm is a simplex, we can introduce a Frank-Wolfe solver (Jaggi, 2013) (see detailed protocol in Appendix E.1). We also discuss the L2-norm case in Appendix E.2.
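The m = 2 closed form amounts to a few lines of code. The sketch below (our illustration, following Proposition 3 in Appendix E.1) returns the weight pair; the example gradients are hypothetical.

```python
# A minimal closed-form min-regularized-norm solver for m = 2 with L1 regularization (our illustration).
import numpy as np

def min_regularized_norm_2d(g1, g2, gamma0, alpha):
    """Returns (gamma_t, 1 - gamma_t), the weights put on (g1, g2)."""
    denom = float(np.dot(g2 - g1, g2 - g1))
    if denom == 0.0:
        return gamma0, 1.0 - gamma0              # identical gradients: keep the preference
    base = float(np.dot(g2, g2 - g1))
    gamma_l, gamma_r = (base - alpha) / denom, (base + alpha) / denom
    gamma = max(min(gamma0, gamma_r), gamma_l)   # pull gamma0 into [gamma_l, gamma_r]
    gamma = max(min(gamma, 1.0), 0.0)            # then clip to [0, 1]
    return gamma, 1.0 - gamma

g1, g2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(min_regularized_norm_2d(g1, g2, gamma0=0.9, alpha=0.0))  # pure min-norm: (0.5, 0.5)
print(min_regularized_norm_2d(g1, g2, gamma0=0.9, alpha=1.0))  # stays at the preference (0.9, 0.1)
```

With α = 0 the solver reduces to vanilla min-norm, and with large α it reduces to linearization with weights λ_0, which makes the interpolation between the two extremes explicit.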
Compared to vanilla min-norm, the composite weights in min-regularized-norm are not fully determined by the adversarial gradients. The resulting relative stability of composite weights makes the composite gradients more robust to the adversarial environment. In the following, we give a general analysis and prove that DR-OMMD indeed guarantees sublinear regrets.
4.3 THEORETICAL ANALYSIS
Our analysis is based on two conventional assumptions (Jadbabaie et al., 2015; Hazan et al., 2016). Assumption 1. The regularization function R is 1-strongly convex. In addition, the Bregman divergence is γ-Lipschitz continuous, i.e., B_R(x, z) − B_R(y, z) ≤ γ∥x − y∥, ∀x, y, z ∈ dom R, where dom R is the domain of R and satisfies X ⊂ dom R ⊂ R^n. Assumption 2. There exists some finite G > 0 such that for each i ∈ {1, . . . , m}, the i-th loss f_t^i at each round t ∈ {1, . . . , T} is differentiable and G-Lipschitz continuous w.r.t. ∥ · ∥_2, i.e., |f_t^i(x) − f_t^i(x′)| ≤ G∥x − x′∥_2. Note that in the convex setting, this assumption leads to bounded gradients, i.e., ∥∇f_t^i(x)∥_2 ≤ G for any t ∈ {1, . . . , T}, i ∈ {1, . . . , m}, x ∈ X. Theorem 1. Suppose the diameter of X is D. Assume F_t is bounded, i.e., |f_t^i(x)| ≤ F, ∀x ∈ X, t ∈ {1, . . . , T}, i ∈ {1, . . . , m}. For any λ_0 ∈ S_m, DR-OMMD attains
R_II(T) ≤ γD/η_T + ∑_{t=1}^T (η_t/2)(∥∇F_t(x_t)λ_t∥_2^2 + (4F/η_t)∥λ_t − λ_0∥_1).
Remark. When η_t = √(2γD)/(G√T) or √(2γD)/(G√t), and α_t = 4F/η_t, the bound attains O(√T). It matches the optimal single-objective bound w.r.t. T (Hazan et al., 2016) and is tight w.r.t. m (justified in Appendix F.2).
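For concreteness, a tiny sketch (our illustration) of the parameter schedule suggested by the remark; the constants γ, D, G, F below are placeholders for the problem-dependent quantities in Theorem 1.

```python
# A minimal sketch of the adaptive schedule eta_t = sqrt(2*gamma*D)/(G*sqrt(t)), alpha_t = 4F/eta_t.
import math

gamma, D, G, F = 1.0, 2.0, 5.0, 1.0   # placeholder problem constants (assumptions)

def schedule(t):
    eta_t = math.sqrt(2.0 * gamma * D) / (G * math.sqrt(t))
    alpha_t = 4.0 * F / eta_t
    return eta_t, alpha_t

print(schedule(1), schedule(100))
```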
Comparison with linearization. Linearization with fixed weights λ_0 ∈ S_m essentially optimizes the scalar loss λ_0^⊤F_t with gradient g_t = ∇F_t(x_t)λ_0. From OMD's tight bound (Theorem 6.8 in (Orabona, 2019)), we can derive a bound γD/η_T + ∑_{t=1}^T (η_t/2)∥∇F_t(x_t)λ_0∥_2^2 for linearization. In comparison, when α_t = 4F/η_t, DR-OMMD attains a regret bound γD/η_T + ∑_{t=1}^T (η_t/2) min_{λ∈S_m}{∥∇F_t(x_t)λ∥_2^2 + α_t∥λ − λ_0∥_1}, which is smaller than that of linearization. Note that although the bound of linearization refers to the single-objective regret R(T), the comparison is reasonable due to the consistency of the two regret metrics, i.e., R_II(T) = max{R(T), 0} when m = 1, as proved in Proposition 1. In the following, we further investigate the margin in the two-objective setting with linear losses. Suppose the loss functions are f_t^1(x) = x^⊤g_t^1 and f_t^2(x) = x^⊤g_t^2 for some vectors g_t^1, g_t^2 ∈ R^n at each round. Then we can show that the margin is at least (see Appendix F.3 for the detailed proof)
M ≥ ∑_{t=1}^T (η_t/4) ∥λ_t − λ_0∥_2^2 · ∥g_t^1 − g_t^2∥_2^2,
which indicates the benefit of DR-OMMD. Specifically, while linearization requires an adequate λ_0, DR-OMMD selects a more proper λ_t adaptively; the advantage is more obvious as the gradients of the different objectives vary wildly. This matches our intuition that linearization suffers from conflicting gradients (Yu et al., 2020), while DR-OMMD can alleviate the conflict by pursuing common descent.
5 EXPERIMENTS
In this section, we conduct experiments to compare DR-OMMD with two baselines: (i) linearization performs single-objective online learning on scalar losses λ⊤0 Ft with pre-defined fixed λ0 ∈ Sm; (ii) min-norm equips OMD with vanilla min-norm (Désidéri, 2012) for gradient composition.
5.1 CONVEX EXPERIMENTS: ADAPTIVE REGULARIZATION
Many real-world online scenarios adopt regularization to avoid overfitting. A standard scheme is to add a term r(x) to the loss ft(x) at each round and optimize the regularized loss ft(x) + σr(x) (McMahan, 2011), where σ is a pre-defined fixed hyperparameter. The formalism of multi-objective online learning provides a novel way of regularization. As r(x) measures model complexity, it can
[Figure 1 shows two panels: (a) Effect of Preference — average loss of linearization and DR-OMMD as a function of the preference value λ_0^1; (b) Learning Curve — average loss of the optimally tuned linearization (lin-opt) and DR-OMMD over # rounds.]
Figure 1: Results to verify the effectiveness of adaptive regularization on protein. (a) Performance of DR-OMMD and linearization under varying λ_0 = (λ_0^1, 1 − λ_0^1). (b) Performance using the optimal weights λ_0 = (0.1, 0.9).
[Figure 2 shows two panels, (a) Task L and (b) Task R: average loss over # rounds for DR-OMMD, min-norm, and linearization with weights (0.25, 0.75), (0.5, 0.5), and (0.75, 0.25).]
Figure 2: Results to verify the effectiveness of DR-OMMD in the non-convex setting. The two plots show the performance of DR-OMMD and various baselines on both tasks (Task L and Task R) of MultiMNIST.
be regarded as the second objective alongside the primary goal f_t(x). We can augment the loss to F_t(x) = (f_t(x), r(x)) and thereby cast regularized online learning into a two-objective problem. Compared to the standard scheme, our approach chooses σ_t = λ_t^2/λ_t^1 in an adaptive way.
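A minimal end-to-end sketch of this casting is given below (our own illustration, not the paper's experiment code): each round builds the two-objective gradient pair, obtains composite weights from the closed-form min-regularized-norm solver of Section 4.2 (repeated here so the snippet is self-contained), reads off σ_t = λ_t^2/λ_t^1, and takes a projected OGD step. The synthetic data stream and all hyperparameters are assumptions.

```python
# A minimal adaptive-regularization sketch via two-objective online learning (our illustration).
import numpy as np

def weight_on_first(g1, g2, gamma0, alpha):
    # closed-form min-regularized-norm weight on g1 (repeats Proposition 3 for self-containment)
    denom = float(np.dot(g2 - g1, g2 - g1))
    if denom == 0.0:
        return gamma0
    base = float(np.dot(g2, g2 - g1))
    gamma_l, gamma_r = (base - alpha) / denom, (base + alpha) / denom
    gamma = max(min(gamma0, gamma_r), gamma_l)
    return max(min(gamma, 1.0), 0.0)

def project_l2_ball(x, radius=50.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

rng = np.random.default_rng(0)
x, eta, alpha = np.zeros(10), 0.5, 0.1
for t in range(2000):
    feat = rng.normal(size=10)
    label = 1.0 if feat[0] > 0 else -1.0                     # synthetic stream (assumption)
    margin = label * float(feat @ x)
    g1 = -label * feat / (1.0 + np.exp(margin))              # gradient of the logistic loss
    g2 = 2.0 * x                                             # gradient of ||x||_2^2
    lam1 = weight_on_first(g1, g2, gamma0=0.5, alpha=alpha)
    sigma_t = (1.0 - lam1) / max(lam1, 1e-12)                # adaptive regularization strength
    x = project_l2_ball(x - eta * (lam1 * g1 + (1.0 - lam1) * g2))
print(round(sigma_t, 3), np.round(x[:3], 3))
```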
We use two large-scale online benchmark datasets. (i) protein is a bioinformatics dataset for protein type classification (Wang, 2002), which has 17 thousand instances with 357 features. (ii) covtype is a biological dataset collected from a non-stationary environment for forest cover type prediction (Blackard & Dean, 1999), which has 50 thousand instances with 54 features. We set the logistic classification loss as the first objective, and the squared L2-norm of model parameters as the second objective. Since the ultimate goal of regularization is to lift predictive performance, we measure the average loss, i.e., ∑ t≤T lt(xt)/T , where lt(xt) is the classification loss at round t.
We adopt an L2-norm ball centered at the origin with diameter K = 100 as the decision set. The learning rates are decided by a grid search over {0.1, 0.2, . . . , 3.0}. For DR-OMMD, the parameter α_t is simply set as 0.1. For fixed regularization, the strength σ = (1 − λ_0^1)/λ_0^1 is determined by some λ_0^1 ∈ [0, 1], which is exactly linearization with weights λ_0 = (λ_0^1, 1 − λ_0^1). We run both algorithms with varying λ_0^1 ∈ {0, 0.1, ..., 1}. In Figure 1, we plot (a) their final performance w.r.t. the choice of λ_0 and (b) their learning curves with a desirable λ_0 (e.g., (0.1, 0.9) on protein). Other results are deferred to the appendix due to the lack of space. The results show that DR-OMMD consistently outperforms fixed regularization; the gap becomes more significant when λ_0 is not properly set.
5.2 NON-CONVEX EXPERIMENTS: DEEP MULTI-TASK LEARNING
We use MultiMNIST (Sabour et al., 2017), which is a multi-task version of the MNIST dataset for image classification and commonly used in deep multi-task learning (Sener & Koltun, 2018; Lin et al., 2019). In MultiMNIST, each sample is composed of a random digit image from MNIST at the top-left and another image at the bottom-right. The goal is to classify the digit at the top-left (task L) and that at the bottom-right (task R) at the same time.
We follow (Sener & Koltun, 2018)’s setup with LeNet. Learning rates in all methods are selected via grid search over {0.0001, 0.001, 0.01, 0.1}. For linearization, we examine different weights (0.25, 0.75), (0.5, 0.5), and (0.75, 0.25). For DR-OMMD, αt is set according to Theorem 1, and the initial weights are simply set as λ0 = (0.5, 0.5). Note that in the online setting, samples arrive in a sequential manner, which is different from offline experiments where sample batches are randomly sampled from the training set. Figure 2 compares the average cumulative loss of all the examined methods. We also measure two conventional metrics in offline experiments, i.e., the training loss and test loss (Reddi et al., 2018); the results are similar and deferred to the appendix due to the lack of space. The results show that DR-OMMD outperforms counterpart algorithms using min-norm or linearization in all metrics on both tasks, validating its effectiveness in the non-convex setting.
6 CONCLUSIONS
In this paper, we give a systematic study of multi-objective online learning, encompassing a novel framework, a new algorithm, and corresponding non-trivial theoretical analysis. We believe that this work paves the way for future research on more advanced multi-objective optimization algorithms, which may inspire the design of new optimizers for multi-task deep learning.
ACKNOWLEDGMENTS
This work was supported in part by the National Key Research and Development Program of China No. 2020AAA0106300 and National Natural Science Foundation of China No. 62250008. This work was also supported by Ant Group through Ant Research Intern Program. We would like to thank Wenliang Zhong, Jinjie Gu, Guannan Zhang and Jiaxin Liu for generous support on this project.
APPENDIX
The appendix is organized as follows. Appendix A reviews related work. Appendix B validates the correctness of our definition of PSG. Appendix C discusses the domain of the comparator in S-PSG, indicating that it makes no difference whether the comparator is selected from the Pareto optimal set or from the whole domain. Appendix D provides the detailed derivation of the equivalent form of RII(T ). Appendix E discusses how to efficiently compute the composition weights for the minregularized-norm solver. Appendix F discusses the order of DR-OMMD’s regret bound with fixed or adaptive learning rate, shows the tightness of the derived bound, and provides more details on the regret comparison between DR-OMMD and linearization. Appendix G supplements more details in the experimental setup and empirical results. Appendix H and I provide detailed proofs of the remaining theoretical claims in the main paper. Finally, Appendix J supplements regret analysis of DR-OMMD in the strongly convex setting.
A RELATED WORK
In this section, we review previous work in some related fields, i.e., online learning, multi-objective optimization, multi-objective multi-armed bandits, and multi-objective Bayesian optimization.
A.1 ONLINE LEARNING
Online learning aims to make sequential predictions for streaming data. Please refer to the introductory books (Hazan et al., 2016; Orabona, 2019) for more background knowledge.
Most of the previous works on online learning are conducted in the single-objective setting. As far as we are concerned, there are only two lines of work concerning multi-objective learning. The first line of works provides a multi-objective perspective of the prediction-with-expert-advice (PEA) problem (Koolen, 2013; Koolen & Van Erven, 2015). Specifically, they view each individual expert as a multi-objective criterion, and characterize the Pareto optimal trade-offs among different experts. These works have two main distinctions from our proposed MO-OCO. First, they are still built upon the original PEA problem where the payoff of each expert (or decision) is a scalar, while we focus on vectoral payoffs. Second, their framework is restricted to an absolute loss game, whereas our framework is general and can be applied to any coordinate-wise convex loss functions.
The second line of work studies online learning with vectoral payoffs via Blackwell approachability (Blackwell, 1956; Mannor et al., 2014; Abernethy et al., 2011). In their framework, the learner is given a target set T ⊂ Rm and its goal is to generate decisions {xt}Tt=1 to minimize the distance between the average loss ∑T t=1 lt(xt)/T and the target set T . There are two major differences between Blackwell approachability and our proposed MO-OCO: previous works on Blackwell approachability are zero-order methods and the target set T is often known beforehand (also see the discussion in (Busa-Fekete et al., 2017)), while in MO-OCO we intend to develop a first-order method to reach the unknown Pareto front.
A.2 MULTI-OBJECTIVE OPTIMIZATION
Multi-objective optimization aims to optimize multiple objectives concurrently. Most of the previous works on multi-objective optimization are conducted in the offline setting, including the batch optimization setting (Désidéri, 2012; Liu et al., 2021) and the stochastic optimization setting (Sener & Koltun, 2018; Lin et al., 2019; Yu et al., 2020; Chen et al., 2020; Javaloy & Valera, 2021). These methods are based on gradient composition, and have shown very promising results in multi-task learning applications.
Despite the existence of previous works on multi-objective optimization, as the first work of multi-objective optimization in the OCO setting, our work is largely different from them in three aspects. First, we contribute the first formal framework of multi-objective online convex optimization. In particular, our framework is based on a novel equivalent transformation of the PSG metric, which is intrinsically different from previous offline optimization frameworks. Second, we provide a showcase in which a commonly used method in the offline setting, namely min-norm (Désidéri, 2012; Sener & Koltun, 2018), fails to attain sublinear regret in the online setting. Our proposed min-regularized-norm
is a novel design when tailoring offline methods to the online setting. Third, the regret analysis of multi-objective online learning is intrinsically different from the convergence analysis in the offline setting (Yu et al., 2020).
A.3 MULTI-OBJECTIVE MULTI-ARMED BANDITS
Another branch of related works study multi-objective optimization in the multi-armed bandits setting (Busa-Fekete et al., 2017; Tekin & Turğay, 2018; Turgay et al., 2018; Lu et al., 2019a; Degenne et al., 2019). Among these works, the most relevant one to ours is (Turgay et al., 2018), which introduces the Pareto suboptimality gap (PSG) metric to characterize the multi-objective regret in the bandits setting, and proposes a zero-order zooming algorithm to minimize the regret.
In this work, our regret definition also utilizes the PSG metric (Turgay et al., 2018). However, as the first study of multi-objective optimization in the OCO setting, our work is intrinsically different from these previous works in the following aspects. First, as PSG is a zero-order metric, we perform a novel equivalent transformation, making it amenable to the OCO setting. Second, our proposed algorithm is a first-order multiple gradient algorithm, whose design principles are completely distinct from zero-order algorithms. For example, the concept of the stability of composite weights does not even exist in the design of previous zero-order methods for multi-objective bandits (Turgay et al., 2018; Lu et al., 2019a). Third, the regret analysis of MO-OCO is intrinsically different from that in the bandits setting.
A.4 MULTI-OBJECTIVE BAYESIAN OPTIMIZATION
The final area related to our work is multi-objective Bayesian optimization (Zhang & Golovin, 2020; Konakovic Lukovic et al., 2020; Chowdhury & Gopalan, 2021; Maddox et al., 2021; Daulton et al., 2022), which studies Bayesian optimization with vector-valued feedback. There are two branches of works in this area, using different notions of regret. The first branch is based on scalarization, which adopts the expectation of the gap between scalarized losses over some given distribution (Chowdhury & Gopalan, 2021) as the regret. In this approach, the distribution of scalarization can be understood as a set of preference, which needs to be known beforehand. The second branch is based on Pareto optimality (Zhang & Golovin, 2020), which uses hypervolume as the discrepancy metric and adopt the gap between the true Pareto front and the estimated Pareto front as the regret.
As the first work on multi-objective optimization in the OCO setting, our work is largely different from these works in the following aspects. First, the regret definitions are different. Specifically, compared to the first branch based on scalarization, our regret definition is purely motivated by Pareto optimality, which does not need any preference in advance; compared to the second branch using hypervolume, we note that hypervolume is mainly used for Pareto front approximation, which is unsuitable to our adversarial setting where the goal is to impose the cumulative loss to reach the Pareto front. Second, multi-objective Bayesian optimization is conducted in a stochastic setting, which typically assumes that the losses follow some Gaussian distribution, whereas our work is conducted in the adversarial setting where the losses can be generated arbitrarily.
B AN EQUIVALENT DEFINITION OF PSG
Recall that in Definition 3, we formulate the PSG metric as a constrained optimization problem. We note that, since the PSG metric is based on the notion of “non-dominance” (Turgay et al., 2018), its most direct form is actually
∆′(x; K∗, F) = inf_{ϵ≥0} ϵ, s.t. ∀x′′ ∈ K∗, ∃i ∈ {1, . . . , m}, f^i(x) − ϵ < f^i(x′′), or ∀i ∈ {1, . . . , m}, f^i(x) − ϵ = f^i(x′′).
At the first glance, the above definition seems to be quite different from Definition 3, since it has an extra condition “∀i ∈ {1, . . . ,m}, f i(x) − ϵ = f i(x′′)”. In the following, we prove that both definitions actually yield the same value due to the infimum operation on ϵ.
Specifically, for any possible pair (x,K∗, F ), we denote ∆′(x;K∗, F ) = ϵ′0 and ∆(x;K∗, F ) = ϵ0. By comparing the constraints of both definitions, it is obvious that ϵ0 must satisfy the constraint
of ∆′(x;K∗, F ), hence the infimum operation guarantees that ϵ′0 ≤ ϵ0. It remains to prove that ϵ′0 ≥ ϵ0. To this end, we only need to show that ϵ′0 + ξ satisfies the constraint of ∆(x;K∗, F ) for any ξ > 0. Consider an arbitrary x′′ ∈ K∗. From the definition of ∆′(x;K∗, F ), we know that either ∃i ∈ {1, . . . ,m}, f i(x) − ϵ′0 < f i(x′′) or ∀i ∈ {1, . . . ,m}, f i(x) − ϵ′0 = f i(x′′). Whichever condition holds, we must have ∃i ∈ {1, . . . ,m}, f i(x)−ϵ′0−ξ < f i(x′′) for any ξ > 0. Since it holds for any x′′ ∈ K∗, ϵ′0 + ξ lies in the feasible region of ∆(x;K∗, F ), hence we have ϵ0 ≤ ϵ′0 + ξ,∀ξ > 0 and thus ϵ0 ≤ ϵ′0. In summary, we have ∆′(x;K∗, F ) = ∆(x;K∗, F ) for any pair (x,K∗, F ).
C DISCUSSION ON THE DOMAIN OF THE COMPARATOR IN S-PSG
Recall that in Definition 4, the comparator x′ in S-PSG is selected from the Pareto optimal set X ∗ of the cumulative loss ∑T t=1 Ft. This actually stems from the original definition of PSG (Turgay et al., 2018), which uses the Pareto optimal set as the comparator set. In fact, comparing with Pareto optimal decisions in X ∗ is already enough to measure the suboptimality of any decision sequence {xt}Tt=1. The reason is that, for any non-optimal decision x′ ∈ X − X ∗, there must exist some Pareto optimal decision x′′ ∈ X ∗ that dominates x′, hence the suboptimality metric does not need to compare with this non-optimal decision x′. In other words, even if we extend the comparator set in S-PSG to the whole domain X , the modified form will be equivalent to the original form based on the Pareto optimal set X ∗. In the following, we strictly prove this equivalence ∆({xt}Tt=1;X , {Ft}Tt=1) = ∆({xt}Tt=1;X ∗, {Ft}Tt=1). Specifically, we modify the definition of S-PSG and let the comparator domain X ′ be any subset of the decision domain X , i.e.,
∆({x_t}_{t=1}^T; X′, {F_t}_{t=1}^T) = inf_{ϵ≥0} ϵ, s.t. ∀x′′ ∈ X′, ∃i ∈ {1, . . . , m}, ∑_{t=1}^T f_t^i(x_t) − ϵ < ∑_{t=1}^T f_t^i(x′′).
Then the modified regret based on the whole domain X takes R′II(T ) = ∆({xt}Tt=1;X , {Ft}Tt=1). Now we begin to prove the equivalence ∆({xt}Tt=1;X , {Ft}Tt=1) = ∆({xt}Tt=1;X ∗, {Ft}Tt=1). For any X ′ ⊂ X , let E(X ′) denote the constraint of ∆({xt}Tt=1;X ′, {Ft}Tt=1), i.e.,
E(X′) = {ϵ ≥ 0 | ∀x′′ ∈ X′, ∃i ∈ {1, . . . , m}, ∑_{t=1}^T f_t^i(x_t) − ϵ < ∑_{t=1}^T f_t^i(x′′)},
then ∆({x_t}_{t=1}^T; X′, {F_t}_{t=1}^T) = inf E(X′). Hence, we just need to prove inf E(X) = inf E(X∗). On the one hand, since X∗ ⊂ X, from the above definition of S-PSG, it is easy to check that any ϵ ∈ E(X) must satisfy ϵ ∈ E(X∗). Hence, we have E(X) ⊂ E(X∗). On the other hand, given any ϵ ∈ E(X∗), we now check that ϵ ∈ E(X). To this end, we consider an arbitrary point x′′ ∈ X in two cases. (i) If x′′ ∈ X∗, since ϵ ∈ E(X∗), we naturally have ∑_{t=1}^T f_t^{i_0}(x_t) − ϵ < ∑_{t=1}^T f_t^{i_0}(x′′) for some i_0. (ii) If x′′ ∉ X∗, since X∗ is the Pareto optimal set of ∑_{t=1}^T F_t, there must exist some Pareto optimal decision x̂ ∈ X∗ that dominates x′′ w.r.t. ∑_{t=1}^T F_t, which means that ∑_{t=1}^T f_t^i(x̂) ≤ ∑_{t=1}^T f_t^i(x′′) for all i ∈ {1, ..., m}. Notice that ϵ ∈ E(X∗) gives ∑_{t=1}^T f_t^{i_0}(x_t) − ϵ < ∑_{t=1}^T f_t^{i_0}(x̂) for some i_0, hence in this case we also have ∑_{t=1}^T f_t^{i_0}(x_t) − ϵ < ∑_{t=1}^T f_t^{i_0}(x′′). Combining the above two cases, we prove that ϵ ∈ E(X), and consequently E(X∗) ⊂ E(X). In summary, we have E(X) = E(X∗), hence ∆({x_t}_{t=1}^T; X, {F_t}_{t=1}^T) = inf E(X) = inf E(X∗) = ∆({x_t}_{t=1}^T; X∗, {F_t}_{t=1}^T). Therefore, it makes no difference whether the comparator in R_II(T) is generated from the Pareto optimal set X∗ or from the whole domain X.
D DERIVATION OF THE EQUIVALENT MULTI-OBJECTIVE REGRET FORM
In this section, We strictly derive the equivalent form of RII(T ) in Proposition 1, which is highly non-trivial and forms the basis of the subsequent algorithm design and theoretical analysis.
Proof of Proposition 1. Recall that the PSG metric used in RII(T ) is an extension of vanilla PSG to leverage any decision sequence. To motivate the analysis, we first investigate vanilla PSG ∆(x;X ∗, F ) that deals with a single decision x, and derive a useful lemma as follows. Lemma 1. Vanilla PSG has an equivalent form, i.e.,
∆(x; X∗, F) = sup_{x∗∈X∗} inf_{λ∈S_m} λ^⊤(F(x) − F(x∗))_+,
where for any vector l = (l1, ..., lm) ∈ Rm, the truncation (l)+ produces a vector whose i-th entry equals to max{li, 0} for all i ∈ {1, ...,m}.
Proof. In the definition of PSG, the evaluated decision x is compared to all Pareto optimal points x′ ∈ X ∗. For any fixed comparator x′ ∈ X ∗, we define the pair-wise suboptimality gap w.r.t. F between decisions x and x′ as follows
δ(x; x′, F) = inf_{ϵ≥0} {ϵ | F(x) − ϵ1 ⊁ F(x′)}.
Hence, PSG can be expressed as
∆(x; X∗, F) = sup_{x′∈X∗} δ(x; x′, F).
To proceed, we analyze the pair-wise gap δ(x;x′, F ). From its definition, we know that δ(x;x′, F ) measures the minimal non-negative value that needs to be subtracted from each entry of F (x) until it is not dominated by x′. Now we consider two cases.
(i) If F(x) ⊁ F(x′), i.e., f^{k_0}(x) ≤ f^{k_0}(x′) for some k_0 ∈ {1, ..., m}, nothing needs to be subtracted from F(x) and we directly have δ(x; x′, F) = 0.
(ii) If F(x) ≻ F(x′), we have f^k(x) ≥ f^k(x′) for all k ∈ {1, ..., m}, which obviously violates the condition F(x) − ϵ1 ⊁ F(x′) when ϵ = 0. Now let us gradually increase ϵ from zero. Notice that such a condition holds only when there exists some k_0 satisfying f^{k_0}(x) − ϵ ≤ f^{k_0}(x′), or equivalently ϵ ≥ f^{k_0}(x) − f^{k_0}(x′). Hence, in this case, we have δ(x; x′, F) = min_{k∈{1,...,m}} {f^k(x) − f^k(x′)}. Combining the above two cases, we derive an equivalent form of the pair-wise suboptimality gap. Specifically, we can easily check that the following form holds for both cases, i.e.,
δ(x; x′, F) = min_{k∈{1,...,m}} max{f^k(x) − f^k(x′), 0}.
To relate the above form with F , denote Um = {ek | 1 ≤ k ≤ m} as the set of all unit vector in Rm, then we equivalently have
δ(x;x′, F ) = min λ∈Um λ⊤(F (x)− F (x′))+.
Now the calculation of δ(x;x′, F ) is transformed into a minimization problem over λ ∈ Um. Since Um is a discrete set, we can apply a linear relaxation trick. Specifically, we now turn to minimize the scalar p(λ) = λ⊤ max{F (x)−F (x′), 0} over the convex curvature of Um, which is exactly the probability simplex Sm = {λ ∈ Rm | λ ⪰ 0, ∥λ∥1 = 1}. Note that Um contains all the vertexes of Sm. Since infλ∈Sm p(λ) is a linear optimization problem, the minimal point λ∗ must be a vertex of the simplex, i.e., λ∗ ∈ Um. Hence, the relaxed problem is equivalent to the original problem, namely,
δ(x;x′, F ) = min λ∈Um λ⊤(F (x)− F (x′))+ = inf λ∈Sm λ⊤(F (x)− F (x′))+.
Taking the supremum of both sides over x′ ∈ X ∗, we prove the lemma. ■
The above lemma can be naturally extended to the sequence-wise variant S-PSG. Specifically, we can extend the pair-wise suboptimality gap δ(x;x′, F ) to measure any decision sequence, which now becomes
δ({x_t}_{t=1}^T; x′, {F_t}_{t=1}^T) = inf_{ϵ≥0} {ϵ | ∑_{t=1}^T F_t(x_t) − ϵ1 ⊁ ∑_{t=1}^T F_t(x′)}.
Then S-PSG can be expressed as
∆({xt}Tt=1;X ∗, {Ft}Tt=1) = sup x∗∈X∗ δ({xt}Tt=1;x∗, {Ft}Tt=1).
Similar to the derivation of the above lemma, by investigating the relation between ∑_{t=1}^T F_t(x_t) and ∑_{t=1}^T F_t(x′), we can derive an equivalent form of δ({x_t}_{t=1}^T; x′, {F_t}_{t=1}^T) as
δ({x_t}_{t=1}^T; x′, {F_t}_{t=1}^T) = min_{k∈{1,...,m}} max{∑_{t=1}^T f_t^k(x_t) − ∑_{t=1}^T f_t^k(x′), 0},
and further
δ({x_t}_{t=1}^T; x′, {F_t}_{t=1}^T) = inf_{λ∈S_m} λ^⊤(∑_{t=1}^T F_t(x_t) − ∑_{t=1}^T F_t(x′))_+.
Hence, the S-PSG-based regret form can be expressed as
R_II(T) = sup_{x∗∈X∗} inf_{λ∈S_m} λ^⊤(∑_{t=1}^T F_t(x_t) − ∑_{t=1}^T F_t(x∗))_+.
The max-min form of RII(T ) has a truncation operation (·)+, which brings irregularity to the regret form. To handle the truncation operation, we utilize the following lemma:
Lemma 2. (a) For any l ∈ Rm, we have infλ∈Sm λ⊤(l)+ = max{infλ∈Sm λ⊤l, 0}. (b) For any h : X → R, we have supx∈X max{h(x), 0} = max{supx∈X h(x), 0}.
Proof. To prove the first statement, we consider the following two cases. (i) If l ≻ 0, then (l)_+ = l. For any λ ∈ S_m, we have λ^⊤(l)_+ = λ^⊤l > 0. Taking the infimum over λ ∈ S_m on both sides, we have inf_{λ∈S_m} λ^⊤(l)_+ = inf_{λ∈S_m} λ^⊤l ≥ 0. Moreover, from the last equation we have max{inf_{λ∈S_m} λ^⊤l, 0} = inf_{λ∈S_m} λ^⊤l, which proves the statement in this case. (ii) If l ⊁ 0, then l^i ≤ 0 for some i ∈ {1, ..., m}. Set e_i as the i-th unit vector in R^m, then we have e_i^⊤l ≤ 0. On the one hand, since e_i ∈ S_m, we have inf_{λ∈S_m} λ^⊤l ≤ e_i^⊤l ≤ 0, and further max{inf_{λ∈S_m} λ^⊤l, 0} = 0. On the other hand, notice that e_i^⊤(l)_+ = 0 and λ^⊤(l)_+ ≥ 0 for any λ ∈ S_m, then inf_{λ∈S_m} λ^⊤(l)_+ = e_i^⊤(l)_+ = 0. Hence, the statement also holds in this case. To prove the second statement, we also consider two cases. (i) If h(x_0) > 0 for some x_0 ∈ X, then sup_{x∈X} h(x) ≥ h(x_0) > 0, and max{sup_{x∈X} h(x), 0} = sup_{x∈X} h(x). Since we also have sup_{x∈X} max{h(x), 0} = sup_{x∈X} h(x), the statement holds in this case. (ii) If h(x) ≤ 0 for all x ∈ X, then sup_{x∈X} h(x) ≤ 0, and thus max{sup_{x∈X} h(x), 0} = 0. Meanwhile, for any x ∈ X, we have max{h(x), 0} = 0, which validates the statement in this case.
■
From the above lemma, we directly have
R_II(T) = sup_{x∗∈X∗} max{inf_{λ∈S_m} λ^⊤(∑_{t=1}^T F_t(x_t) − ∑_{t=1}^T F_t(x∗)), 0}
= max{sup_{x∗∈X∗} inf_{λ∈S_m} λ^⊤(∑_{t=1}^T F_t(x_t) − ∑_{t=1}^T F_t(x∗)), 0},
which derives the desired equivalent form. ■
E CALCULATION OF MIN-REGULARIZED-NORM
In this section, we discuss how to efficiently calculate the solutions to min-regularized-norm with L1-norm and L2-norm.
Algorithm 2 Frank-Wolfe Solver for Min-Regularized-Norm with L1-Norm
1: Initialize: λ_t = (γ_t^1, . . . , γ_t^m) = (1/m, . . . , 1/m).
2: Compute the matrix U = ∇F_t(x_t)^⊤∇F_t(x_t), i.e., U^{ij} = ∇f_t^i(x_t)^⊤∇f_t^j(x_t), ∀i, j ∈ {1, . . . , m}.
3: repeat
4:   Select an index k ∈ argmax_{i∈{1,...,m}} {∑_{j=1}^m γ_t^j U^{ij} + α sgn(γ_t^i − γ_0^i)}.
5:   Compute δ ∈ argmin_{0≤δ≤1} ∥δ∇f_t^k(x_t) + (1 − δ)∇F_t(x_t)λ_t∥_2^2 + α∥δ(e_k − λ_t) + λ_t − λ_0∥_1.
6:   Update λ_t = (1 − δ)λ_t + δe_k.
7: until δ ∼ 0 or Number of Iteration Limits
8: return λ_t.
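As a simpler (and admittedly slower) alternative to the Frank-Wolfe listing above, the min-regularized-norm problem can also be solved approximately by projected subgradient descent on the simplex. The sketch below is our own illustration, not the paper's solver; it uses the standard sort-based Euclidean projection onto the simplex and hypothetical example gradients.

```python
# A projected-subgradient sketch for min_{lam in simplex} ||G lam||^2 + alpha*||lam - lam0||_1 (our illustration).
import numpy as np

def project_simplex(v):
    # Euclidean projection onto {w >= 0, sum(w) = 1} (sort-based algorithm)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def min_regularized_norm(G, lam0, alpha, steps=500, lr=0.05):
    """G: (n, m) matrix whose columns are the per-objective gradients."""
    U = G.T @ G
    lam = np.full(G.shape[1], 1.0 / G.shape[1])
    for _ in range(steps):
        subgrad = 2.0 * U @ lam + alpha * np.sign(lam - lam0)   # a subgradient of the objective
        lam = project_simplex(lam - lr * subgrad)
    return lam

grads = np.array([[1.0, 0.0, 0.5], [0.0, 1.0, 0.5], [0.0, 0.0, 0.1]])   # three objectives in R^3
lam0 = np.full(3, 1.0 / 3.0)
print(np.round(min_regularized_norm(grads, lam0, alpha=0.05), 3))
```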
E.1 L1-NORM
Similar to (Sener & Koltun, 2018), we first consider the setting of two objectives, namely m = 2. In this case, for any λ = (γ, 1− γ),λ0 = (γ0, 1− γ0) ∈ S2, the L1-regularization ∥λ− λ0∥1 equals to 2|γ − γ0|. Hence min-regularized-norm with L1-norm at round t reduces to λt = (γt, 1 − γt) where
γ_t ∈ argmin_{0≤γ≤1} ∥γg_1 + (1 − γ)g_2∥_2^2 + 2α|γ − γ_0|.
Interestingly, the above problem has a closed-form solution.
Proposition 3. Set γ_L = (g_2^⊤(g_2 − g_1) − α)/∥g_2 − g_1∥_2^2, and γ_R = (g_2^⊤(g_2 − g_1) + α)/∥g_2 − g_1∥_2^2. Then min-regularized-norm with L1-norm produces weights λ_t = (γ_t, 1 − γ_t) where
γ_t = max{min{γ_t′′, 1}, 0}, where γ_t′′ = max{min{γ_0, γ_R}, γ_L}.
Proof. We solve the following two quadratic sub-problems, i.e.,
min_{0≤γ≤γ_0} h_1(γ) = ∥γg_1 + (1 − γ)g_2∥_2^2 + 2α(γ_0 − γ),
as well as
min_{γ_0≤γ≤1} h_2(γ) = ∥γg_1 + (1 − γ)g_2∥_2^2 + 2α(γ − γ_0).
It can be checked that in the former sub-problem, h1 monotonously decreases on (−∞, γR] and increases on [γR,+∞); in the latter sub-problem, h2 monotonously decreases on (−∞, γL] and increases on [γL,+∞). Since each sub-problem has its constraint ([0, γ0] or [γ0, 1]), the solution to the original optimization problem can then be derived by comparing the optimal values of the two sub-problems with their constraints. Specifically, notice that γL ≤ γR and 0 ≤ γ0 ≤ 1, and we can consider the following three cases.
(i) When 0 ≤ γ0 ≤ γL ≤ γR, then h1 monotonously decreases on [0, γ0] and its minimum on [0, γ0] is h1(γ0). Notice that h1(γ0) = h2(γ0). For the sub-problem of h2, we further consider two situations: (i-a) If γL ≤ 1, then γL ∈ [γ0, 1], hence the minimum of h2 on [γ0, 1] is h2(γL). Since h2(γL) ≤ h2(γ0) = h1(γ0), the minimal point of the original problem is γL, and hence γt = γL. (i-b) If γL > 1, then h2 monotonously decreases on [γ0, 1], and we surely have h2(1) ≤ h2(γ0) = h1(γ0). Hence γt = 1 in this situation. Combining the above two situations, we have γt = min{γL, 1} in this case. (ii) When γL ≤ γR ≤ γ0 ≤ 1, then h2 monotonously increases on [γ0, 1] and its minimum on [γ0, 1] is h2(γ0). Notice that h1(γ0) = h2(γ0). For the sub-problem of h1, similar to the first case, we also consider two situations: (ii-a) If γR ≥ 0, then γR ∈ [0, γ0], hence the minimum of h1 on [0, γ0] is h1(γR). Since h1(γR) ≤ h1(γ0) = h2(γ0), the minimal point of the original problem is γR, and hence γt = γR. (ii-b) If γR < 0, then h1 monotonously increases on [0, γ0]. Hence we have h1(0) ≤ h1(γ0) = h2(γ0). Hence the solution to the original problem γt = 0. Combining the above two situations, we have γt = max{γR, 0} in this case.
Algorithm 3 Frank-Wolfe Solver for Min-Regularized-Norm with L2-Norm
1: Initialize: $\lambda_t = (\gamma^1_t, \ldots, \gamma^m_t) = (\tfrac{1}{m}, \ldots, \tfrac{1}{m})$.
2: Compute the matrix $U = \nabla F_t(x_t)^\top \nabla F_t(x_t)$, i.e., $U_{ij} = \nabla f^i_t(x_t)^\top \nabla f^j_t(x_t), \ \forall i, j \in \{1, \ldots, m\}$.
3: repeat
4:   Select an index $k \in \operatorname{argmax}_{i \in \{1,\ldots,m\}} \{\sum_{j=1}^m \gamma^j_t U_{ij} + \alpha(\gamma^i_t - \gamma^i_0)\}$.
5:   Compute $\delta \in \operatorname{argmin}_{0 \le \delta \le 1} \|\delta \nabla f^k_t(x_t) + (1-\delta)\nabla F_t(x_t)\lambda_t\|_2^2 + \alpha\|\delta(e_k - \lambda_t) + \lambda_t - \lambda_0\|_2^2$, which has the analytical form
$$\delta = \max\Big\{\min\Big\{ \frac{(\nabla F_t(x_t)\lambda_t - \nabla f^k_t(x_t))^\top \nabla F_t(x_t)\lambda_t + \alpha (e_k - \lambda_t)^\top(\lambda_0 - \lambda_t)}{\|\nabla F_t(x_t)\lambda_t - \nabla f^k_t(x_t)\|_2^2 + \alpha\|e_k - \lambda_t\|_2^2},\, 1\Big\},\, 0\Big\}.$$
6:   Update $\lambda_t = (1-\delta)\lambda_t + \delta e_k$.
7: until $\delta \approx 0$ or the iteration limit is reached
8: return $\lambda_t$.
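A minimal NumPy sketch of the L2 variant follows. The step size uses the closed form obtained by setting the derivative of the one-dimensional objective in step 5 to zero; the vertex is chosen by the standard linear-minimization convention, and all names are illustrative placeholders rather than the paper's implementation.

```python
import numpy as np

def fw_min_reg_norm_l2(grads, lam0, alpha, iters=100, tol=1e-8):
    """Frank-Wolfe sketch for min_{lam in simplex} ||G lam||_2^2 + alpha * ||lam - lam0||_2^2."""
    d, m = grads.shape
    lam = np.full(m, 1.0 / m)
    for _ in range(iters):
        # Linear-minimization oracle over the simplex: vertex with the smallest gradient entry.
        scores = 2.0 * (grads.T @ (grads @ lam)) + 2.0 * alpha * (lam - lam0)
        k = int(np.argmin(scores))
        e_k = np.zeros(m)
        e_k[k] = 1.0
        g_bar = grads @ lam            # current combined direction G * lam
        g_k = grads[:, k]              # gradient of the selected objective
        c = e_k - lam
        # Closed-form minimizer of the 1-D quadratic in delta, clipped to [0, 1].
        num = (g_bar - g_k) @ g_bar + alpha * (c @ (lam0 - lam))
        den = (g_bar - g_k) @ (g_bar - g_k) + alpha * (c @ c)
        delta = 0.0 if den <= 0.0 else float(np.clip(num / den, 0.0, 1.0))
        if delta < tol:
            break
        lam = (1.0 - delta) * lam + delta * e_k
    return lam
```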
(iii) When $\gamma_L < \gamma_0 < \gamma_R$, then $h_1$ monotonically decreases on $[0, \gamma_0]$ and $h_2$ monotonically increases on $[\gamma_0, 1]$. Hence each sub-problem attains its minimum at $\gamma_0$, and thus $\gamma_t = \gamma_0$.
Summarizing the above three cases gives
$$\gamma_t = \begin{cases} \min\{\gamma_L, 1\}, & \gamma_0 \le \gamma_L; \\ \max\{\gamma_R, 0\}, & \gamma_0 \ge \gamma_R; \\ \gamma_0, & \text{otherwise.} \end{cases}$$
We can further rewrite the above formula into a compact form as follows, which can be checked case-by-case.
$$\gamma_t = \max\{\min\{\gamma''_t, 1\}, 0\}, \quad \text{where } \gamma''_t = \max\{\min\{\gamma_0, \gamma_R\}, \gamma_L\}.$$
This gives the closed-form solution of min-regularized-norm when $m = 2$. ■
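For the two-objective case, the closed form of Proposition 3 can be evaluated directly. The sketch below is a minimal NumPy implementation of that formula; the function name and the handling of the degenerate case $g_1 = g_2$ are illustrative choices, not part of the paper.

```python
import numpy as np

def min_reg_norm_l1_two_objectives(g1, g2, gamma0, alpha):
    """Closed-form solution of Proposition 3 for m = 2.

    Returns gamma_t such that lam_t = (gamma_t, 1 - gamma_t).
    """
    diff = g2 - g1
    denom = float(diff @ diff)
    if denom == 0.0:                              # g1 == g2: any gamma gives the same norm
        return float(np.clip(gamma0, 0.0, 1.0))
    gamma_l = (g2 @ diff - alpha) / denom
    gamma_r = (g2 @ diff + alpha) / denom
    gamma = max(min(gamma0, gamma_r), gamma_l)    # project gamma0 onto [gamma_L, gamma_R]
    return float(np.clip(gamma, 0.0, 1.0))        # then clip to [0, 1]
```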
Now that we have derived the closed-form solution to the min-regularized-norm

1. What is the focus of the paper regarding multi-objective online learning?
2. What are the strengths of the proposed approach, particularly in measuring performance?
3. Do you have any concerns about the assumptions made in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the proof or making it easier to follow?

Summary Of The Paper
This paper studies multi-objective online learning. Compared with the standard online learning setting, the paper considers the case where the online algorithm must handle multiple sequences of input loss functions simultaneously rather than a single sequence. To measure the performance of the algorithm, the paper introduces the sequence-wise PSG (Pareto sub-optimality gap), which extends regret from single-objective online learning. The paper also uses the min-regularized-norm with multiple gradient descent to trade off the gradients of the loss sequences in the proposed algorithm. Finally, the paper gives a sub-linear "regret" bound for the problem.
Strengths And Weaknesses
This paper provides a new and efficient performance measure for multi-objective online learning. This measure, the sequence-wise PSG, is an extension of the standard regret for a single loss function. In the proposed algorithm, the paper then uses the min-regularized-norm to overcome the potential instability caused by the loss-function gradients, which is illustrated with a counterexample. This is a contribution.
However, there are some concerns about this paper.
1. In the definition of sequence-wise PSG, the comparator of the algorithm is constrained to the Pareto-optimal set, not the whole domain of the algorithm. Although the paper states the reason, this seems to be a very strong assumption. In a related work such as "Online Minimax Multiobjective Optimization: Multicalibeating and Other Applications" (D. Lee et al.), the comparator term is also constrained, but the domain of the comparator remains the full domain. The paper should elaborate in more detail to support the advantages of this measurement.
2. In this paper, the regret bound for the multi-objective loss is T^{1/2}. In the main part and the appendix, the reviewer can only follow the result for the case m = 2, according to Lemma 3. This is a significant issue, since Theorem 1 only implies the sub-linear bound when the min-regularized-norm method can bound the last term for arbitrary m. Otherwise, the result is somewhat incremental. Another suggestion is that the authors rewrite the proofs to make them easier and more obvious to follow, especially for Corollary 1.
Clarity, Quality, Novelty And Reproducibility
The paper is written clearly in the main part, but the proof of the main result is hard to follow. With counterexamples, it motivates the need for the sequence-wise PSG and the min-regularized-norm. Based on these two concepts, the paper gives a learnable algorithm for multi-objective online learning. The result for m = 2 is clear, but the more general result is not yet convincing. The idea of this paper is acceptable and enlightening if the proof is correct.
ICLR | Title
Visual Imitation with Reinforcement Learning using Recurrent Siamese Networks
Abstract
It would be desirable for a reinforcement learning (RL) based agent to learn behaviour by merely watching a demonstration. However, defining rewards that facilitate this goal within the RL paradigm remains a challenge. Here we address this problem with Siamese networks, trained to compute distances between observed behaviours and the agent’s behaviours. Given a desired motion, such Siamese networks can be used to provide a reward signal to an RL agent via the distance between the desired motion and the agent’s motion. We experiment with an RNN-based comparator model that can compute distances in space and time between motion clips while training an RL policy to minimize this distance. Through experimentation, we have also found that the inclusion of multi-task data and an additional image encoding loss helps enforce temporal consistency. These two components appear to balance reward for matching a specific instance of a behaviour versus that behaviour in general. Furthermore, we focus here on a particularly challenging form of this problem where only a single demonstration is provided for a given task – the one-shot learning setting. We demonstrate our approach on humanoid agents in both 2D with 10 degrees of freedom (DoF) and 3D with 38 DoF.
1 INTRODUCTION
Imitation learning and Reinforcement Learning (RL) often intersect when the goal is to imitate with incomplete information, for example, when imitating from motion capture data (mocap) or video. In this case, the agent needs to search for actions that will result in observations similar to the expert. However, formulating a metric that will provide a reasonable distance between the agent and the expert is difficult. Robots and people plan using types of internal and abstract pose representations that can have reasonable distances; however, typically when animals observe others performing tasks, only visual information is available. Using distances in pose-space is ill-suited for imitation, as changing some features can result in a drastically different visual appearance. In order to understand how to perform tasks from visual observation, a mapping/transformation is used that allows for the minimization of distance in appearance. Even with a method to transform observations to a similar pose space, each person has different capabilities. Because of this, people are motivated to learn transformations in space and time under which they can reproduce the behaviour to the best of their own ability. How can we learn a representation similar to this latent space?

An essential detail of imitating demonstrations is their sequential and causal nature. There is both an ordering and a speed at which a demonstration is performed. Most methods require the agent to learn to imitate the temporal and spatial structure at the same time, creating a potentially narrow solution space. When the agent becomes desynchronized with the demonstration, the agent will receive a low reward. Consider the case when a robot has learned to stand when its goal is to walk. Standing is spatially close to the demonstration, and actions that help the robot stand, as opposed to falling, should be encouraged. How can such latent goals be encouraged?
Consider a phase-based reward function r = R(s, a, φ), where φ indexes the time in the demonstration and s and a are the agent state and action. As the demonstration timing φ, often controlled by the environment, and the agent diverge, the agent receives less reward, even if it is visiting states that exist elsewhere in the demonstration. The issue of determining whether an agent is displaying out-of-phase behaviour can be understood as trying to find the φ that would result in the highest reward,
φ′ = max_φ R(s, a, φ), and the distance φ′ − φ is an indicator of how far away in time, or how far out of phase, the agent is. This phase-independent form can be seen as a form of reward shaping. However, this naive description ignores the ordered property of demonstrations. What is needed is a metric that gives reward for behaviour that is in the proper order, independent of phase. This ordering motivates the creation of a recurrent distance metric that is designed to understand the context between two motions. For example, does this motion look like a walk, rather than, does this motion look precisely like that walk.
Our proposed Visual Imitation with Reinforcement Learning (VIRL) method uses Recurrent Siamese Networks (RSNs) and has similarities to both Inverse Reinforcement Learning (IRL) (Abbeel & Ng, 2004) and Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016). The process of learning a cost function that understands the space of policies in order to find an optimal policy given a demonstration is fundamentally IRL, while using positive examples from the expert and negative examples from the policy is similar to the method GAIL uses to train a discriminator to recognize in-distribution examples. In this work, we build upon these techniques by constructing a method that can learn policies using noisy visual data without action information. Considering the problem’s data sparsity, we include data from other tasks to learn a more robust distance function in the space of visual sequences. We also construct a cost function that takes into account the demonstration ordering as well as pose, using a recurrent Siamese network. Our contribution consists of proposing and exploring these forms of recurrent Siamese networks as a way to address a critical problem in defining reward structure for imitation learning from video for deep RL agents, and accomplishing this on simulated humanoid robots in the challenging single-shot learning setting.
2 RELATED WORK
Learning From Demonstration Searching for good distance functions is an active research area (Abbeel & Ng, 2004; Argall et al., 2009). Given some vector of features, the goal is to find an optimal transformation of these features such that, in this transformed space, there exists a strong contextual meaning. Previous work has explored the area of state-based distance functions, but most rely on pose-based metrics (Ho & Ermon, 2016; Merel et al., 2017) that come from an expert. While there is other work using distance functions, including for example Sermanet et al. (2017); Finn et al. (2017); Liu et al. (2017); Dwibedi et al. (2018), few use image-based inputs and none consider the importance of learning a distance function in time as well as space. In this work, we train recurrent Siamese networks (Chopra et al., 2005) to learn distances between videos.
Partially Observable Imitation Without Actions For Learning from Demonstration (LfD) problems, the goal is to replicate the behaviour of an expert policy πE. Unlike the typical setting for humans learning to imitate, LfD often assumes the availability of expert action and observation data. Instead, in this work, we focus on the case where only noisy actionless observations of the expert are available. Recent work uses Behavioural Cloning (BC) to learn an inverse dynamics model to estimate the actions used via maximum-likelihood estimation (Torabi et al., 2018). Still, BC often needs many expert examples and tends to suffer from state distribution mismatch issues between the expert policy and student (Ross et al., 2011). Work in (Merel et al., 2017) proposes a system based on GAIL that can learn a policy from a partial observation of the demonstration. In this work, the discriminator’s state input is a customized version of the expert’s state and does not take into account the demonstration’s sequential nature. The work in (Wang et al., 2017) provides a more robust GAIL framework along with a new model to encode motions for few-shot imitation. This model uses a Recurrent Neural Network (RNN) to encode a demonstration but uses expert state and action observations. In our work, the agent is limited to only a partial visual observation as a demonstration. Additional works learn implicit models of distance (Yu et al., 2018; Pathak et al., 2018; Finn et al., 2017; Sermanet et al., 2017), but none of these explicitly learn a sequential model that considers the demonstration timing. An additional version of GAIL, infoGAIL (Li et al., 2017), included pixel-based inputs. Goals can be specified using the latent space from a Variational Auto Encoder (VAE) (Nair et al., 2018). Our work extends this VAE loss using sequence data to train a more temporally consistent latent representation. Recent work (Peng et al., 2018b) has a 2D control example of learning from video data. We show results on more complex 3D tasks and additionally model distance in time. In contrast, here we train a recurrent Siamese model that can be used to en-
able curriculum learning and allow for computing distances even when the agent and demonstration are out of sync.
3 PRELIMINARIES
In this section, we outline the general RL framework and specific formulations for RL that we rely upon when developing our method in Section 4.
Reinforcement Learning We use the RL framework formulated with a Markov Decision Process (MDP): at every time step t, the world (including the agent) exists in a state st ∈ S, wherein the agent is able to perform actions at ∈ A, sampled from a policy π(at|st), which results in a new state st+1 ∈ S and reward rt according to the transition probability function T(rt, st+1|st, at). The policy is optimized to maximize the future discounted reward
$$J(\pi) = \mathbb{E}_{r_0,\ldots,r_T}\left[ \sum_{t=0}^{T} \gamma^t r_t \right], \qquad (1)$$
where T is the max time horizon, and γ is the discount factor, indicating the planning horizon length. Inverse reinforcement learning refers to the problem of extracting a reward function from observed optimal behavior Ng et al. (2000). In contrast, in our approach we learn a distance that works across a collection of behaviours. Further, we do not assume the example data to be optimal. See Appendix 7.2 for further discussion of the connections of our work to inverse reinforcement learning.
GAIL VIRL is similar to the GAIL framework (Ho & Ermon, 2016), which uses a Generative Adversarial Network (GAN) (Goodfellow et al., 2014), where the discriminator is trained with positive examples from the expert trajectories and negative examples from the policy. The generator is a combination of the environment, the policy and the current state visitation probability induced by the policy pπ(s).
$$\min_{\theta_\pi} \max_{\theta_\phi} \; \mathbb{E}_{\pi_E}[\log(D(s, a|\theta_\phi))] + \mathbb{E}_{\pi_{\theta_\pi}}[\log(1 - D(s, a|\theta_\phi))] \qquad (2)$$
In this framework the discriminator provides rewards for the RL policy to optimize, as the probability of a state generated by the policy being in the distribution rt = D(st, at|θφ). While this framework has been shown to work in practice, this dual optimization is often unstable. In the next section we will outline our method for learning a more stable distance based reward over sequences of images.
4 CONCEPTUAL DISTANCE-BASED REINFORCEMENT LEARNING
Our approach is aimed at facilitating imitation learning within an underlying RL formulation over partially observed observations o. Unlike the situation in GAIL, we do not rely on having access to state s and action a information – our idea is to minimize a function that determines the distance between two sequences of observations o, one from the desired example behavior o^e, and another from the current agent behavior o^a. We can then define the reward used within an underlying RL framework in terms of a distance function D, such that
$$r_{\hat{t}}(o^e, o^a) = -D(o^e, o^a, \hat{t}) = \sum_{t=0}^{\hat{t}} -d(o^e_t, o^a_t), \qquad (3)$$
where in our setting here D(oe, oa, t̂) models a distance between video clips from time t = 0 to t̂.
A simple formulation of the approach above can be overly restrictive on sequence timing. While these distances can serve as RL rewards, they often provide insufficient signal for the policy to learn a good imitative behaviour, especially when the agent only has partial observations of the expert. We can see an example of this in Figure 1a where, starting at t5, the agent (in red) begins to exhibit behaviour that is similar to the expert (in blue), yet the spatial distance indicates that this state is further away from the desired behaviour than at t4.
To encourage the agent to match any part of the expert behaviour, we propose decomposing the distance into two distances, by adding a type of temporal distance shown in green. To compute a time
independent distance, we can find the state in the expert sequence that is closest to the agent’s current state, $\operatorname*{argmin}_{\hat{t} \in T} d(o^e_{\hat{t}}, o^a_t)$, and use it in the following distance measure
$$d_T(o^e, o^a, \hat{t}, t) = \ldots + d(o^e_{\hat{t}-1}, o^a_{t-1}) + d(o^e_{\hat{t}}, o^a_t) + d(o^e_{\hat{t}+1}, o^a_{t+1}) + \ldots \qquad (4)$$
Using only a single time-aligned state may lead to the agent fixating on matching a single state in the expert demonstration. To avoid this, the neighbouring states, given the sequence timing readjustment, are used in the distance computation. This framework allows the agent to be rewarded for exhibiting behaviour that matches any part of the expert's demonstration. The better it learns to match parts of the expert demonstration, the more reward it is given. The previous spatial distance will then help the agent learn to sync up its timing with the demonstration. Next, we describe how we learn both of these distances.
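Before doing so, the sketch below illustrates the time-aligned distance of Eq. 4 in NumPy. The per-frame distance is a placeholder Euclidean distance over precomputed embeddings, and the window size and function name are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def temporal_distance(expert_emb, agent_emb, t, window=1):
    """Phase-independent distance (Eq. 4): align the agent's frame t to the closest
    expert frame, then sum per-frame distances over a small neighbourhood."""
    # Find the expert frame closest to the agent's current frame t.
    per_frame = np.linalg.norm(expert_emb - agent_emb[t], axis=1)
    t_hat = int(np.argmin(per_frame))
    total = 0.0
    for k in range(-window, window + 1):
        i, j = t_hat + k, t + k
        if 0 <= i < len(expert_emb) and 0 <= j < len(agent_emb):
            total += np.linalg.norm(expert_emb[i] - agent_emb[j])
    return total
```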
Distance Metric Learning Many methods can be used to learn a distance function in state-space. Here we use a Siamese network f(oe, oa) with a triplet loss over time and task data (Chopra et al., 2005). The triplet loss is used to minimize the distance between two examples that are positive, very similar or from the same class, and maximize the distance between pairs of examples that are known to be unrelated. For more details see supplementary document.
Sequence Imitation The distance metric is formulated in a recurrent style where the distance is computed from the current state and conditioned on all previous states d(ot|ot−1, . . . , o0). The loss function is a combination of the distance loss in Eq. 9 and the VAE-based representation learning objectives from Eq. 7 and Eq. 8, detailed in the supplementary material. This combination of sequence-based losses assists in compressing the representation while ensuring intermediate representations are informative. The loss function used to train the distance model on a positive pair of sequences is:
$$\mathcal{L}_{VIRL}(o_i, o_p, \cdot) = \lambda_0 \mathcal{L}_{SN}(o_i, o_p, \cdot) + \lambda_1 \Big[ \frac{1}{T} \sum_{t=0}^{T} \mathcal{L}_{SN}(o_{i,t}, o_{p,t}, \cdot) \Big] + \lambda_2 \Big[ \frac{1}{T} \sum_{t=0}^{T} \mathcal{L}_{VAE}(o_{i,t}) + \mathcal{L}_{VAE}(o_{p,t}) \Big] + \lambda_3 \Big[ \mathcal{L}_{AE}(o_i) + \mathcal{L}_{AE}(o_p) \Big].$$
Here λ = {0.7, 0.1, 0.1, 0.1}. For a negative pair, the second sequence used in the VAE and AE losses would be the negative sequence.
The Siamese loss function remains the same as in Eq. 9, but the overall learning process evolves to use an RNN-based deep network. A diagram of the full model is shown in Figure 2. This model uses a time-distributed Long Short-Term Memory (LSTM). A single convolutional network conv_a is first used to transform images of the demonstration o^a to an encoding vector e^a_t. After the sequence of images is distributed through conv_a there is an encoded sequence < e^a_0, . . . , e^a_t >; this sequence is fed into the RNN lstm_a until a final encoding h^a_t is produced. This same process is performed with a copy of the RNN lstm_a, producing h^b_t for the agent o^b. The loss is computed in a similar fashion to (Mueller & Thyagarajan, 2016) using the sequence outputs of images from the agent and another from the demonstration. The reward at each timestep is computed as $r_t = \|h^a_t - h^b_t\| + \|e^a_t - e^b_t\| = \|lstm_a(conv_a(s^a_t)) - lstm_a(conv_a(s^b_t))\| + \|conv_a(s^a_t) - conv_a(s^b_t)\|$. At the beginning of each episode, the RNN’s internal state is reset. The policy and value function have 2 hidden layers with 512 and 256 units, respectively. The use of additional VAE-based image and Auto Encoder (AE)-based sequence decoding losses improves the latent space conditioning and representation.
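The following is a minimal sketch of how this per-timestep quantity could be assembled from the two embedding streams; `conv` and `lstm_step` are placeholder callables standing in for the shared convolutional encoder and recurrent update, and the toy lambdas at the end are only for making the example runnable.

```python
import numpy as np

def virl_distance(conv, lstm_step, expert_frames, agent_frames):
    """d_t = ||h^a_t - h^e_t|| + ||e^a_t - e^e_t||, accumulated frame by frame.
    The RL reward is then its negative (Eq. 3) or an exponential normalization (Sec. 7.7).
    conv(frame) -> per-frame encoding e_t; lstm_step(e_t, h) -> new hidden state."""
    h_e = h_a = None
    distances = []
    for f_e, f_a in zip(expert_frames, agent_frames):
        e_e, e_a = conv(f_e), conv(f_a)                 # shared (Siamese) encoder
        h_e, h_a = lstm_step(e_e, h_e), lstm_step(e_a, h_a)
        distances.append(np.linalg.norm(h_a - h_e) + np.linalg.norm(e_a - e_e))
    return np.array(distances)

# Toy stand-ins so the sketch runs end to end.
conv = lambda frame: frame.reshape(-1)[:8]
lstm_step = lambda e, h: e if h is None else 0.9 * h + 0.1 * e
```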
Algorithm 1 Learning Algorithm
  Initialize model parameters θπ and θd
  Create experience memory D ← {}
  while not done do
    for i ∈ {0, . . . , N} do
      τi ← {}
      {st, o^e_t, o^a_t} ← env.reset()
      for t ∈ {0, . . . , T} do
        at ← π(·|st, θπ)
        {st+1, o^e_{t+1}, o^a_{t+1}} ← env.step(at)
        rt ← −d(o^e_{t+1}, o^a_{t+1}|θd)
        τi,t ← {st, o^e_t, o^a_t, at, rt}
        {st, o^e_t, o^a_t} ← {st+1, o^e_{t+1}, o^a_{t+1}}
      end for
    end for
    D ← D ∪ {τ0, . . . , τN}
    Update d(·) parameters θd using D
    Update policy θπ using {τ0, . . . , τN}
  end while

Unsupervised Data Labelling To construct positive and negative pairs for training we make use of time information in a similar fashion to (Sermanet et al., 2017), where observations at similar times in the same sequence are often correlated and observations at different times will likely have little similarity. We compute pairs by altering one sequence and comparing this modified version to its original. Positive pairs are created by adding noise to the sequence or altering a few frames of the sequences. Negative pairs are created by shuffling one sequence or reversing it. More details are available in the supplementary material. Imitation data for 24 other tasks are also used to help condition the distance metric learning process. These include motion clips for running, backflips, frontflips, dancing, punching, kicking and jumping along with the desired motion. For details on how positive and negative pairs are created from this data, see the supplementary document.
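A compact Python sketch of the interaction loop in Algorithm 1 above is given below. `env`, `policy`, `distance`, and `replay` are hypothetical objects standing in for the simulator, the policy network, the recurrent Siamese distance model, and the experience memory; their method names are assumptions for illustration, not the paper's API.

```python
def collect_and_train(env, policy, distance, replay, n_episodes=16, horizon=512):
    """One outer iteration of Algorithm 1: roll out episodes, label rewards with the
    learned distance, then update both the distance model and the policy."""
    trajectories = []
    for _ in range(n_episodes):
        traj = []
        state, expert_obs, agent_obs = env.reset()
        for _ in range(horizon):
            action = policy.sample(state)
            (state_n, expert_obs_n, agent_obs_n), done = env.step(action)
            reward = -distance(expert_obs_n, agent_obs_n)   # imitation reward from Eq. 3
            traj.append((state, expert_obs, agent_obs, action, reward))
            state, expert_obs, agent_obs = state_n, expert_obs_n, agent_obs_n
            if done:
                break
        trajectories.append(traj)
    replay.extend(trajectories)
    distance.update(replay)          # train the Siamese distance on stored sequences
    policy.update(trajectories)      # RL update on the relabelled rewards
    return trajectories
```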
Importantly the RL environment generates two different state representations for the agent. The first state st+1 is the internal robot pose. The second state ot+1 is the agent’s rendered view, shown in Figure 2. The rendered view is used with the distance metric to compute the similarity between the agent and the demonstration. We attempted using the visual features as the state input for the policy as well; this resulted in poor policy quality. Details of the algorithm used to train the distance metric and policy are outlined in the supplementary document Algorithm 1.
5 ANALYSIS AND RESULTS
The simulation environment used in the experiments is similar to the DeepMind Control Suite (Tassa et al., 2018). In this simulated robotics environment, the agent is learning to imitate a given reference motion. The agent’s goal is to learn a policy to actuate Proportional Derivative (PD) controllers at 30 fps to mimic the desired motion. The simulation environment provides a hard-coded reward function based on the robot’s pose that is used to evaluate the policy quality. The demonstration M the agent is learning to imitate is generated from a clip of mocap data. The mocap data is used to
animate a second robot in the simulation. Frames from the simulation are captured and used as video input to train the distance metric. The images captured from the simulation are converted to greyscale with 64× 64 pixels. We train the policy on pose data, as link distances and velocities relative to the robot’s Centre of Mass (COM). This simulation environment is new and has been created to take motion capture data and produce multi-view video data that can be used for training RL agents or generating data for computer vision tasks. The environment includes challenging and dynamic tasks for humanoid robots. Some example tasks are imitating running, jumping, and walking, shown in Figure 3 and humanoid2d detailed in the supplementary material.
3D Humanoid Robot Imitation In these simulated robotics environments the agent is learning to imitate a given reference motion of a walk, run, jump or zombie motion. A single motion demonstration is provided by the simulation environment as a cyclic motion. During learning, we include additional data from all other tasks (for the walking task this would be: walking-dynamic-speed, running, jogging, frontflips, backflips, dancing, jumping, punching and kicking) that are only used to train the distance metric. We also include data from a modified version of the tasks, walking-dynamic-speed, which has a randomly generated speed modifier ω ∈ [0.5, 2.0] that warps the demonstration timing. This additional data is used to provide a richer understanding of distances in space and time to the distance metric. The method is capable of learning policies that produce similar behaviour to the expert across a diverse set of tasks. We show example trajectories from the learned policies in Figure 3 and in the supplemental video. It takes 5−7 days to train each policy in these results on a 16-core machine with an Nvidia GTX1080 GPU.
Algorithm Analysis and Comparison To evaluate the learning capabilities and improvements of VIRL we compare against two other methods that learn a distance function in state space, GAIL and using a VAE to train an encoding and compute distances between those encodings, similar to (Nair et al., 2018), using the same method as the Siamese network in Figure 4a. We find that the VAE alone does not appear to capture the critical distances between states, possibly due to the decoding transformation complexity. Similarly, the GAIL baseline produces very jerky motion or stands still, both of which are contained in the imitation distribution. Our method that considers the temporal structure of the data learns faster and produces higher value policies.
Additionally, we create a multi-modal version of VIRL. Here we replace the bottom conv net with a dense network and learn a distance metric between agent poses and imitation video. The results of these models, along with the default manual reward function provided by the environment, are shown in Figure 4b. The multi-modal version appears to perform about as well as the vision-only model. In Figure 4b we also compare our method to a non-sequence-based model that is equivalent to a Time Contrastive Network (TCN). On average, VIRL achieves higher value policies. We find that using the RNN-based distance metric makes the learning process more gradual. We show this learning effect in Figure 4b, where the original manually created reward with flat feedback leads to slow initial learning.
In Figure 4c we compare the importance of the spatial $\|e^a_t - e^b_t\|_2$ and temporal $\|h^a_t - h^b_t\|_2$ representations learned by VIRL. Using the recurrent representation (temporal lstm) alone allows learning to progress quickly but can have difficulty informing the policy of how to best match the desired example. On the other hand, using only the encoding between single frames (spatial conv) slows learning due to limited reward for out-of-phase behaviour. We achieved the best results by combining the representations from these two models. The assistance of spatial rewards is also seen in Figure 4b, where the manual reward learns the slowest.
Ablation We conduct ablation studies in Figure 5a to compare the effects of data augmentation methods, network models and the use of additional data from other tasks. For the more complex humanoid3d control problems the data augmentation methods, including Early Episode Sequence Priority (EESP), increase average policy quality marginally. The use of multitask data (Figure 8c) and the additional representational losses (Figure 8a) greatly improve the method's ability to learn. More ablation results are available in the supplementary material.
Sequence Encoding Using the learned sequence encoder a collection of motions from different classes are processed to create a TSNE embedding of the encodings (Maaten & Hinton, 2008). In Figure 5c we plot motions both generated from the learned policy π and the expert trajectories
πE. Overlaps in specific areas of the space for similar classes across learned π and expert πE data indicate a well-formed distance metric that does not separate expert and agent examples. There is also a separation between motion classes in the data, and the cyclic nature of the walking cycle is visible.
In this section, we have described the process followed to create and analyze VIRL. Due to a combination of data augmentation techniques, VIRL can imitate given only a single demonstration. We have shown some of the first results to learn imitative policies from video data using a recurrent net-
work. Interestingly, the method displays new learning efficiencies that are important to the method success by separating the imitation problem into spatial and temporal aspects. For best results, we found that the inclusion of additional regularizing losses on the recurrent siamese network, along with some multi-task supervision, was key to producing results.
6 DISCUSSION AND CONCLUSION
In this work, we have created a new method for learning imitative policies from a single demonstration. The method uses a Siamese recurrent network to learn a distance function in both space and time. This distance function, trained on noisy partially observed video data, is used as a reward function for training an RL policy. Using data from other motion styles and regularization terms, VIRL produces policies that demonstrate similar behaviour to the demonstration.
Learning a distance metric is enigmatic: the distance metric can compute inaccurate distances in areas of the state space it has not yet seen. This inaccuracy could imply that when the agent explores and finds truly new and promising trajectories, the distance metric computes incorrect distances. We attempt to mitigate this effect by including training data from different tasks. We believe VIRL will benefit from a more extensive collection of multi-task data and increased variation of each task. Additionally, if the distance metric confidence is available, this information could be used to reduce variance and overconfidence during policy optimization.
It is probable that learning a reward function while training adds additional variance to the policy gradient. This variance may indicate that the bias of off-policy methods could be preferred over the added variance of the on-policy methods used here. We also find it important to have a small learning rate for the distance metric. The low learning rate reduces the reward variance between data collection phases and allows learning a more accurate value function. Another approach may be to use partially observable RL that can learn a better value function model given a changing RNN-based
reward function. Training the distance metric could benefit from additional regularization such as constraining the kl-divergence between updates to reduce variance. Learning a sequence-based policy as well, given that the rewards are now not dependent on a single state observation is another area for future research.
We compare our method to GAIL, but we found GAIL has limited temporal consistency. This method led to learning jerky and overactive policies. The use of a recurrent discriminator for GAIL may mitigate some of these issues and is left for future work. It is challenging to produce results better than the carefully manually crafted reward functions used by the RL simulation environments that include motion phase information in the observations (Peng et al., 2018a; 2017). However, we have shown that our method that can compute distances in space and time has faster initial learning. Potentially, a combination of starting with our method and following with a manually crafted reward function could lead to faster learning of high-quality policies. Still, as environments become increasingly more realistic and grow in complexity, we will need more robust methods to describe the desired behaviour we want from the agent.
Training the distance metric is a complicated balancing game. One might expect that the distance metric should be trained early and fast so that it quickly understands the difference between a good and bad demonstration. However, learning too quickly confuses the agent: rewards can change abruptly, which causes the agent to diverge toward an unrecoverable region of policy space. Slower is better; even though the distance metric may not be globally accurate, it can be locally or relatively reasonable, which is enough to learn a good policy. As learning continues, these two optimizations can converge together.
7 APPENDIX
This section includes additional details related to VIRL.
7.1 IMITATION LEARNING
Imitation learning is the process of training a new policy to reproduce the behaviour of some expert policy. BC is a fundamental method for imitation learning. Given an expert policy πE possibly represented as a collection of trajectories τ < (s0, a0), . . . , (sT , aT ) > a new policy π can be learned to match this trajectory using supervised learning.
$$\max_{\theta} \; \mathbb{E}_{\pi_E}\Big[ \sum_{t=0}^{T} \log \pi(a_t|s_t, \theta_\pi) \Big] \qquad (5)$$
While this simple method can work well, it often suffers from distribution mismatch issues leading to compounding errors as the learned policy deviates from the expert’s behaviour.
7.2 INVERSE REINFORCEMENT LEARNING
Similar to BC, Inverse Reinforcement Learning (IRL) also learns to replicate some desired behaviour. However, IRL makes use of the RL environment without a defined reward function. Here we describe maximal entropy IRL (Ziebart et al., 2008). Given an expert trajectory τ < (s0, a0), . . . , (sT , aT ) > a policy π can be trained to produce similar trajectories by discovering a distance metric between the expert trajectory and trajectories produced by the policy π.
$$\max_{c \in C} \min_{\pi} \; \big(\mathbb{E}_\pi[c(s, a)] - H(\pi)\big) - \mathbb{E}_{\pi_E}[c(s, a)] \qquad (6)$$
where c is some learned cost function and H(π) is a causal entropy term. πE is the expert policy that is represented by a collection of trajectories. IRL is searching for a cost function c that is low for the expert πE and high for other policies. Then, a policy can be optimized by maximizing the reward function rt = −c(st, at).
7.3 AUTO-ENCODER FRAMEWORK
Variational Auto-encoders Previous work shows that VAEs can learn a lower-dimensional structured representation of a distribution (Kingma & Welling, 2014). A VAE consists of two parts: an encoder qφ and a decoder pψ. The encoder maps states to a latent encoding z, and in turn the decoder transforms z back to states. The model parameters for both φ and ψ are trained jointly to maximize
$$\mathcal{L}_{VAE}(\phi, \psi, s) = -\beta D_{KL}(q_\phi(z|s) \,\|\, p(z)) + \mathbb{E}_{q_\phi(z|s)}[\log p_\psi(s|z)], \qquad (7)$$
where DKL is the Kullback-Leibler divergence, p(z) is some prior and β is a hyper-parameter to balance the two terms. The encoder qφ takes the form of a diagonal Gaussian distribution qφ = N(µφ(s), σ²(s)). In the case of images, the decoder pψ parameterizes a Bernoulli distribution over pixel values. This simple parameterization is akin to training the decoder with a cross-entropy loss over normalized pixel values.
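A minimal NumPy sketch of the per-sample objective of Eq. 7, written as a loss to minimize, is shown below; `mu`, `log_var`, and `recon` would come from the encoder and decoder networks and are treated here as given arrays, so the function only evaluates the objective.

```python
import numpy as np

def beta_vae_loss(x, recon, mu, log_var, beta=1.0, eps=1e-7):
    """Negative of Eq. 7: beta * KL(q(z|x) || N(0, I)) minus the Bernoulli
    log-likelihood of x under the decoder output `recon` (pixel probabilities)."""
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    recon = np.clip(recon, eps, 1.0 - eps)
    log_lik = np.sum(x * np.log(recon) + (1.0 - x) * np.log(1.0 - recon))
    return beta * kl - log_lik
```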
Sequence Auto-encoding The goal of sequence-to-sequence translation is to learn the conditional probability p(y0, . . . , yT′ |x0, . . . , xT), where x = x0, . . . , xT and y = y0, . . . , yT′ are sequences. Here we want to explicitly learn a latent variable zRNN that compresses the information in x0, . . . , xT. An RNN can model this conditional probability by computing an encoding v of the sequence x that can, in turn, be used to condition the decoding of the sequence y (Rumelhart et al., 1985):
$$p(y) = \prod_{t=0}^{T'} p(y_t|\{y_0, \ldots, y_{t-1}\}, v). \qquad (8)$$
This method has been used for learning compressed representations for transfer learning (Zhu et al., 2016) and 3D shape retrieval (Zhuang et al., 2015).
7.4 DATA
The mocap used in the created environment come from the CMU mocap database and the SFU mocap database.
Data Augmentation and Training We apply several data augmentation methods to produce additional data for training the distance metric. Using methods analogous to the cropping and warping methods popular in computer vision (He et al., 2015), we randomly crop sequences and randomly warp the demonstration timing. The cropping is performed by both initializing the agent to random poses from the demonstration motion and terminating episodes when the agent’s head, hands or torso contact the ground. As the agent improves, the average length of each episode increases, and so too will the average length of the cropped window. The motion warping is done by replaying the demonstration motion at different speeds. Two additional methods influence the data distribution. The first method is Reference State Initialization (RSI) (Peng et al., 2018a), where the initial state of the agent and expert is randomly selected from the expert demonstration. With this property, the environment can also be thought of as a form of memory replay. The environment allows the agent to go back to random points in the demonstration as if replaying a remembered demonstration. The second is EESP, where the probability that a sequence x is cropped starting at index i is p(i) ∝ len(x) − i, increasing the likelihood of starting earlier in the episode.
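A small sketch of the EESP start-index sampling just described, where earlier start indices are proportionally more likely; the function name is illustrative.

```python
import numpy as np

def sample_eesp_start(seq_len, rng=np.random):
    """Sample a crop start index with probability proportional to (seq_len - i),
    so windows near the beginning of an episode are selected more often."""
    weights = (seq_len - np.arange(seq_len)).astype(float)
    return int(rng.choice(seq_len, p=weights / weights.sum()))
```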
7.5 TRAINING DETAILS
The learning simulations are trained using Graphics Processing Units (GPUs). The simulation is not only simulating the interaction physics of the world but also rendering the simulation scene to capture video observations. On average, it takes 3 days to execute a single training simulation. The process of rendering and copying the images from the GPU is one of the most expensive operations in VIRL. We collect 2048 data samples between training rounds. The batch size for Trust Region Policy Optimization (TRPO) is 2048. The KL term is 0.5.
The simulation environment includes several different tasks that are represented by a collection of motion capture clips to imitate. These tasks come from the tasks created in the DeepMimic works (Peng et al., 2018a). We include all humanoid tasks in this dataset.
In Algorithm 1 we include an outline of the algorithm used for the method. The simulation environment produces three types of observations: st+1, the agent’s proprioceptive pose; s^v_{t+1}, the image observation of the agent; and m_{t+1}, the image-based observation of the expert demonstration. The images are 64 × 64.
7.6 DISTANCE FUNCTION TRAINING
Our Siamese training loss consists of
$$\mathcal{L}_{SN}(s_i, s_p, s_n) = y \cdot \|f(s_i) - f(s_p)\| + (1 - y) \cdot \max(\rho - \|f(s_i) - f(s_n)\|,\, 0), \qquad (9)$$
where y = 1 for a positive pair sp, where the distance should be minimal, and y = 0 for a negative pair sn, where the distance should be maximal. The margin ρ is used as an attractor or anchor to pull the negative example output away from si and push values towards a 0 to 1 range. f(·) computes the output from the underlying network. The distance between two states is calculated as d(s, s′) = ‖f(s) − f(s′)‖ and the reward as r(s, s′) = −d(s, s′). Data used to train the Siamese network is a combination of trajectories τ = ⟨s0, . . . , sT⟩ generated from simulating the agent in the environment and the expert demonstration. For our recurrent model the same loss is used; however, the states sp, sn, si are sequences. During RL training we compute a distance given the sequence of states observed so far in the episode. This method allows us to train a distance function in state space where all we need to provide is labels that denote whether two states, or sequences, are similar or not.
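A minimal NumPy sketch of Eq. 9 operating on precomputed embeddings f(s); the margin value is an illustrative default.

```python
import numpy as np

def siamese_loss(f_anchor, f_other, y, rho=1.0):
    """Eq. 9: pull positives together (y = 1) and push negatives (y = 0) apart up to margin rho."""
    dist = float(np.linalg.norm(f_anchor - f_other))
    return y * dist + (1.0 - y) * max(rho - dist, 0.0)
```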
In Figure 6b we show the training curve for the recurrent Siamese network. The model learns smoothly, considering that the training data used is continually changing as the RL agent explores. In Figure 6a the learning curve for the Siamese RNN is shown after performing pretraining. We can see the overfitting portion that occurs during RL training. This overfitting can lead to poor reward prediction during the early phase of training.
It can be challenging to train a sequence-based distance function. One particular challenge is training the distance function to be accurate across the space of possible states. We found a good strategy was to focus on the beginning-of-episode data. When the model is not accurate on states it saw earlier in the episode, it may never learn how to get into good states later that the distance function understands better. Therefore, when constructing batches to train the RNN on, we give a higher probability of starting earlier in episodes. We also give a higher probability to shorter sequences. As the agent gets better, average episode lengths increase, and so too will the randomly selected sequence windows.
7.7 DISTANCE FUNCTION USE
We find it helpful to normalize the distance metric outputs using r = exp(r² · w_d), where w_d = −5.0 scales the filtering width. Early in training, the distance metric often produces large, noisy values. Also, the RL method regularly updates reward scaling statistics; the initial high-variance data reduces the significance of better distance metric values produced later on by scaling them to small numbers. The improvement from using this normalized reward is shown in Figure 7a.
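The normalization can be written directly as a small helper; the default width follows the value quoted in the text, and the function name is an assumption.

```python
import numpy as np

def normalize_reward(raw_distance, w_d=-5.0):
    """Map an unbounded distance to (0, 1]: identical motions give 1, large distances shrink to 0."""
    return float(np.exp(raw_distance ** 2 * w_d))
```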
8 POSITIVE AND NEGATIVE EXAMPLES
We use two methods to generate positive and negative examples. The first method is similar to TCN, where we can assume that sequences that overlap more in time are more similar. For each episode two sequences are generated, one for the agent and one for the imitation motion. Here we list the methods used to alter sequences for positive pairs.
1. Adding Gaussian noise to each state in the sequence (mean = 0 and variance = 0.02)
2. Out-of-sync versions where the first state is removed from the first sequence and the last state from the second sequence
3. Duplicating the first state in either sequence
4. Duplicating the last state in either sequence
We alter sequences for negative pairs by
1. Reversing the ordering of the second sequence in the pair.
2. Randomly picking a state out of the second sequence and replicating it to be as long as the first sequence.
3. Randomly shuffling one sequence.
4. Randomly shuffling both sequences.
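The sketch below gives one possible NumPy implementation of a subset of these augmentations for constructing training pairs; each function takes a sequence as an array of per-frame observations, and the noise parameterization mirrors the values listed above.

```python
import numpy as np

def make_positive_pair(seq, noise_var=0.02, rng=np.random):
    """Positive pair: the original sequence and a mildly perturbed copy (positive method 1)."""
    noisy = seq + rng.normal(0.0, np.sqrt(noise_var), size=seq.shape)
    return seq, noisy

def make_negative_pair(seq, rng=np.random):
    """Negative pair: the original sequence and a temporally destroyed copy
    (reversed or shuffled, negative methods 1 and 3)."""
    corrupted = seq[::-1].copy() if rng.rand() < 0.5 else rng.permutation(seq)
    return seq, corrupted
```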
The second method we use to create positive and negative examples is by including data for additional classes of motion. These classes denote different task types. For the humanoid3d environment, we generate data for walking-dynamic-speed, running, backflipping and frontflipping. Pairs from the same tasks are labelled as positive, and pairs from different classes are negative.
8.1 ADDITIONAL ABLATION ANALYSIS
8.2 RL ALGORITHM ANALYSIS
It is not clear which RL algorithm may work best for this type of imitation problem. A number of RL algorithms were evaluated on the humanoid2d environment (Figure 9a). Surprisingly, TRPO (Schulman et al., 2015) did not work well in this framework; considering it has a controlled policy gradient step, we thought it would reduce the overall variance. We found that Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015) worked rather well. This result could be related to having a changing reward function, in that if the changing rewards are considered off-policy data, it can be easier to learn. This can be seen in Figure 9b, where DDPG is best at estimating the future discounted rewards in the environment. We also tried Continuous Actor Critic Learning Automaton (CACLA) (Van Hasselt, 2012) and Proximal Policy Optimization (PPO) (Schulman et al., 2017); we found that PPO did not work particularly well on this task, which could also be related to added variance.
8.3 ADDITIONAL IMITATION RESULTS
Our first experiments evaluate the method's ability to learn a complex cyclic motion for a simulated humanoid robot given a single motion demonstration, similar to (Peng & van de Panne, 2017), but using video instead. The agent is able to learn a robust walking gait even though it is only given noisy partial observations of a demonstration (Figure 10).
2. How does the proposed algorithm, VIRL, differ from other methods, such as GAIL and its extensions?
3. Can you explain how the learned reward function is defined by a learned distance metric, and how this distance metric is trained?
4. How does the use of a recurrent network in the distance metric affect the comparison between trajectories?
5. What is the purpose of the variational autoencoder in the algorithm, and how does it help improve the representation of the trajectory space?
6. How do the experimental results demonstrate the effectiveness of the proposed method compared to baselines?
7. Are there any limitations or areas for improvement in the approach presented in the paper?

Review
This paper presents visual imitation with reinforcement learning (VIRL), an algorithm for learning to imitate expert trajectories based solely on visual observations, and without access to the expert’s actions. The algorithm is similar in form to GAIL and its extensions, learning a reward function which captures the similarity between an observed behavior and the expert's demonstrations, while simultaneously using reinforcement learning to find a policy maximizing this reward, such that the learned policy will replicate the demonstrated behavior as well as possible. A key feature of this method is that the learned reward function is defined by a learned distance metric, which evaluates the similarity between the agent's current trajectory, and the nearest demonstrated expert trajectory.
The network describing the distance metric is recurrent, such that the distance is defined between trajectories rather than individual states. The distance function network is trained via a negative sampling approach, where expert trajectories are randomly reordered to produce examples that are dissimilar to the expert trajectories. The distance network also defines a variational autoencoder, and the reconstruction of the target trajectories is treated as an auxiliary task to help train better representations of the trajectory space.
While previous work has considered the problem of visual imitation learning, the approach taken here is novel in its architecture and loss function, and significantly outperforms the baselines in terms of the similarity between the resulting behavior and the expert behavior.
The clarity of the technical presentation could be improved, however. In particular, it would be helpful for the reader if the definitions of the negative sampling loss and the autoencoder losses were given before the combined loss, and if we saw the form of the loss for both positive and negative sequence pairs. Equation 4 could also be made explicit, with the full summation term included. |
ICLR | Title
Visual Imitation with Reinforcement Learning using Recurrent Siamese Networks
Abstract
It would be desirable for a reinforcement learning (RL) based agent to learn behaviour by merely watching a demonstration. However, defining rewards that facilitate this goal within the RL paradigm remains a challenge. Here we address this problem with Siamese networks, trained to compute distances between observed behaviours and the agent’s behaviours. Given a desired motion such Siamese networks can be used to provide a reward signal to an RL agent via the distance between the desired motion and the agent’s motion. We experiment with an RNNbased comparator model that can compute distances in space and time between motion clips while training an RL policy to minimize this distance. Through experimentation, we have had also found that the inclusion of multi-task data and an additional image encoding loss helps enforce the temporal consistency. These two components appear to balance reward for matching a specific instance of a behaviour versus that behaviour in general. Furthermore, we focus here on a particularly challenging form of this problem where only a single demonstration is provided for a given task – the one-shot learning setting. We demonstrate our approach on humanoid agents in both 2D with 10 degrees of freedom (DoF) and 3D with 38 DoF.
1 INTRODUCTION
Imitation learning and Reinforcement Learning (RL) often intersect when the goal is to imitate with incomplete information, for example, when imitating from motion capture data (mocap) or video. In this case, the agent needs to search for actions that will result in observations similar to the expert. However, formulating a metric that will provide a reasonable distance between the agent and the expert is difficult. Robots and people plan using types of internal and abstract pose representations that can have reasonable distances; however, typically when animals observe others performing tasks, only visual information is available. Using distances in pose-space is ill-suited for imitation as changing some features can result in drastically different visual appearance. In order to understand how to perform tasks from visual observation a mapping/transformation is used which allows for the minimization of distance in appearance. Even with a method to transform observations to a similar pose space, each person has different capabilities. Because of this, people are motivated to learn transformations in space and time where they can reproduce the behaviour to the best of their own ability. How can we learn a representation similar to this latent space?
An essential detail of imitating demonstrations is their sequential and causal nature. There is both an ordering and speed in which a demonstration is performed. Most methods require the agent to learn to imitate the temporal and spatial structure at the same time creating a potentially narrow solution space. When the agent becomes desynchronized with the demonstration, the agent will receive a low reward. Consider the case when a robot has learned to stand when its goal is to walk. Standing is spatially close to the demonstration and actions that help the robot stand, as opposed to falling, should be encouraged. How can such latent goals be encouraged?
If we consider a phase-based reward function r = R(s, a, φ) where φ indexes the time in the demonstration and s and a is the agent state and action. As the demonstration timing φ, often controlled by the environment, and agent diverge, the agent receives less reward, even if it is visiting states that exist elsewhere in the demonstration. The issue of determining if an agent is displaying outof-phase behaviour can understood as trying to find the φ that would result in the highest reward
φ′ = maxφR(s, a, φ) and the distance φ′ − φ is an indicator of how far away in time or out-ofphase the agent is. This phase-independent form can be seen as a form of reward shaping. However, this naive description ignores the ordered property of demonstrations. What is needed is a metric that gives reward for behaviour that is in the proper order, independent of phase. This ordering motivates the creation of a recurrent distance metric that is designed to understand the context between two motions. For example, does this motion look like a walk, not, does this motion look precisely like that walk.
Our proposed Visual Imitation with Reinforcement Learning (VIRL) method uses Recurrent Siamese Networks (RSNs) and has similarities to both Inverse Reinforcement Learning (IRL) (Abbeel & Ng, 2004) and Generative Advisarial Imitation Learning (GAIL) (Ho & Ermon, 2016). The process of learning a cost function that understands the space of policies to find an optimal policy given a demonstration is fundamentally IRL. While using positive examples from the expert and negative examples from the policy is similar to the method GAIL uses to train a discriminator to recognize in distribution examples. In this work, we build upon these techniques by constructing a method that can learn policies using noisy visual data without action information. Considering the problem’s data sparsity, we include data from other tasks to learn a more robust distance function in the space of visual sequence. We also construct a cost function that takes into account the demonstration ordering as well as pose using a recurrent Siamese network. Our contribution consists of proposing and exploring these forms of recurrent Siamese networks as a way to address a critical problem in defining reward structure for imitation learning from the video for deep RL agents and accomplishing this on simulated humanoid robots for the challenging single shot learning setting.
2 RELATED WORK
Learning From Demonstration Searching for good distance functions is an active research area (Abbeel & Ng, 2004; Argall et al., 2009). Given some vector of features, the goal is to find an optimal transformation of these features, such in this transformed space, there exists a strong contextual meaning. Previous work has explored the area of state-based distance functions, but most rely on pose based metrics (Ho & Ermon, 2016; Merel et al., 2017) that come from an expert. While there is other work using distance functions, including for example Sermanet et al. (2017); Finn et al. (2017); Liu et al. (2017); Dwibedi et al. (2018), few use image based inputs and none consider the importance of learning a distance function in time as well as space. In this work, we train recurrent Siamese networks (Chopra et al., 2005) to learn distances between videos.
Partially Observable Imitation Without Actions For Learning from Demonstration (LfD) problems the goal is to replicate the behaviour of expert πE behaviour. Unlike the typical setting for humans learning to imitate, LfD often assumes the availability of expert action and observation data. Instead, in this work, we focus on the case where only noisy actionless observations of the expert are available. Recent work uses Behavioural Cloning (BC) to learn an inverse dynamics model to estimate the actions used via maximum-likelihood estimation (Torabi et al., 2018). Still, BC often needs many expert examples and tends to suffer from state distribution mismatch issues between the expert policy and student (Ross et al., 2011). Work in (Merel et al., 2017) proposes a system based on GAIL that can learn a policy from a partial observation of the demonstration. In this work, the discriminator’s state input is a customized version of the expert’s state and does not take into account the demonstration’s sequential nature. The work in (Wang et al., 2017) provides a more robust GAIL framework along with a new model to encode motions for few-shot imitation. This model uses an Recurrent Neural Network (RNN) to encode a demonstration but uses expert state and action observations. In our work, the agent is limited to only a partial visual observation as a demonstration. Additional works learn implicit models of distance (Yu et al., 2018; Pathak et al., 2018; Finn et al., 2017; Sermanet et al., 2017), none of these explicitly learn a sequential model considering the demonstration timing. An additional version of GAIL, infoGAIL (Li et al., 2017), included pixel based inputs. Goals can be specified using the latent space from a Variational Auto Encoder (VAE) (Nair et al., 2018). Our work extends this VAE loss using sequence data to train a more temporally consistent latent representation. Recent work (Peng et al., 2018b) has a 2D control example of learning from video data. We show results on more complex 3D tasks and additionally model distance in time. In contrast, here we train a recurrent siamese model that can be used to en-
able curriculum learning and allow for computing distances even when the agent and demonstration are out of sync.
3 PRELIMINARIES
In this section, we outline the general RL framework and specific formulations for RL that we rely upon when developing our method in Section 4.
Reinforcement Learning Using the RL framework formulated with a Markov Dynamic Process (MDP): at every time step t, the world (including the agent) exists in a state st ∈ S, wherein the agent is able to perform actions at ∈ A, sampled from a policy π(at|st) which results in a new state st+1 ∈ S and reward rt according to the transition probability function T (rt, st+1|st, at). The policy is optimize to maximize the future discounted reward
J(π) = Er0,...,rT [ T∑ t=0 γtrt ] , (1)
where T is the max time horizon, and γ is the discount factor, indicating the planning horizon length. Inverse reinforcement learning refers to the problem of extracting a reward function from observed optimal behavior Ng et al. (2000). In contrast, in our approach we learn a distance that works across a collection of behaviours. Further, we do not assume the example data to be optimal. See Appendix 7.2 for further discussion of the connections of our work to inverse reinforcement learning.
GAIL VIRL is similar to the GAIL framework (Ho & Ermon, 2016) which uses a Generative Advasarial Network (GAN) (Goodfellow et al., 2014), where the discriminator is trained with positive examples from the expert trajectories and negative examples from the policy. The generator is a combination of the environment, policy and current state visitation probability induced by the policy pπ(s).
\min_{\theta_\pi} \max_{\theta_\phi} \; \mathbb{E}_{\pi_E}[\log(D(s, a|\theta_\phi))] + \mathbb{E}_{\pi_{\theta_\pi}}[\log(1 - D(s, a|\theta_\phi))] \quad (2)
In this framework the discriminator provides rewards for the RL policy to optimize, namely the probability that a state generated by the policy lies in the expert distribution, r_t = D(s_t, a_t|θ_φ). While this framework has been shown to work in practice, this dual optimization is often unstable. In the next section we will outline our method for learning a more stable distance-based reward over sequences of images.
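For reference, the discriminator side of Eq. 2 reduces to a standard binary cross-entropy objective. The sketch below (NumPy, with illustrative function names that are not from the original implementation) shows the discriminator loss on a batch of outputs and the reward r_t = D(s_t, a_t) handed to the policy:

import numpy as np

def gail_discriminator_loss(d_expert, d_policy, eps=1e-8):
    # Maximize log D on expert pairs and log(1 - D) on policy pairs (Eq. 2),
    # written here as a loss to minimize. Inputs are probabilities in (0, 1).
    return -(np.mean(np.log(d_expert + eps)) + np.mean(np.log(1.0 - d_policy + eps)))

def gail_reward(d_policy_output):
    # Reward for the RL policy: probability the discriminator assigns to the sample.
    return d_policy_output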
4 CONCEPTUAL DISTANCE-BASED REINFORCEMENT LEARNING
Our approach is aimed at facilitating imitation learning within an underlying RL formulation over partially observed observations o. Unlike the situation in GAIL, we do not rely on having access to state s and action a information – our idea is to minimize a function that determines the distance between two sequences of observations o, one from the desired example behavior o^e, and another from the current agent behavior o^a. We can then define the reward used within an underlying RL framework in terms of a distance function D, such that
r_{\hat{t}}(o^e, o^a) = -D(o^e, o^a, \hat{t}) = \sum_{t=0}^{\hat{t}} -d(o^e_t, o^a_t), \quad (3)
where in our setting D(o^e, o^a, \hat{t}) models a distance between video clips from time t = 0 to \hat{t}.
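A minimal sketch of Eq. 3 is given below; frame_distance stands in for the learned per-frame distance d(·,·), and the plain L2 distance in the example is only a placeholder:

import numpy as np

def clip_reward(expert_obs, agent_obs, frame_distance):
    # Eq. 3: reward up to time t_hat is the negative sum of per-frame distances
    # between the expert clip and the agent clip (equal-length lists of frames).
    return -sum(frame_distance(oe, oa) for oe, oa in zip(expert_obs, agent_obs))

# Placeholder frame distance for illustration; the paper learns this with a Siamese net.
l2 = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))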
A simple formulation of the approach above can be overly restrictive on sequence timing. While these distances can serve as RL rewards, they often provide insufficient signal for the policy to learn a good imitative behaviour, especially when the agent only has partial observations of the expert. We can see an example of this in Figure 1a where, starting at t_5, the agent (in red) begins to exhibit behaviour that is similar to the expert (in blue), yet the spatial distance indicates that this state is further away from the desired behaviour than at t_4.
To encourage the agent to match any part of the expert behaviour we propose decomposing the distance into two distances, by adding a type of temporal distance shown in green. To compute a time-independent distance we can find the state in the expert sequence that is closest to the agent’s current state, \hat{t} = \arg\min_{\hat{t} \in T} d(o^e_{\hat{t}}, o^a_t), and use it in the following distance measure
d_T(o^e, o^a, \hat{t}, t) = \ldots + d(o^e_{\hat{t}-1}, o^a_{t-1}) + d(o^e_{\hat{t}}, o^a_t) + d(o^e_{\hat{t}+1}, o^a_{t+1}) + \ldots \quad (4)
Using only a single time-aligned state may lead to the agent fixating on matching a single state in the expert demonstration. To avoid this, the neighbouring states given the sequence timing readjustment are used in the distance computation. This framework allows the agent to be rewarded for exhibiting behaviour that matches any part of the expert’s demonstration. The better it learns to match parts of the expert demonstration, the more reward it is given. The previous spatial distance will then help the agent learn to sync up its timing with the demonstration. Next we describe how we learn both of these distances.
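The time-independent distance of Eq. 4 can be sketched as follows; frame_distance is again a stand-in for the learned per-frame metric, and the small symmetric window around the aligned frame is an illustrative assumption:

import numpy as np

def time_independent_distance(expert_obs, agent_obs, t, frame_distance, window=1):
    # Find the expert frame closest to the agent's current observation ...
    dists = [frame_distance(oe, agent_obs[t]) for oe in expert_obs]
    t_hat = int(np.argmin(dists))
    # ... then accumulate distances over a small neighbourhood (Eq. 4), so the agent
    # is rewarded for matching any ordered portion of the demonstration.
    total = 0.0
    for k in range(-window, window + 1):
        i, j = t_hat + k, t + k
        if 0 <= i < len(expert_obs) and 0 <= j < len(agent_obs):
            total += frame_distance(expert_obs[i], agent_obs[j])
    return total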
Distance Metric Learning Many methods can be used to learn a distance function in state-space. Here we use a Siamese network f(oe, oa) with a triplet loss over time and task data (Chopra et al., 2005). The triplet loss is used to minimize the distance between two examples that are positive, very similar or from the same class, and maximize the distance between pairs of examples that are known to be unrelated. For more details see supplementary document.
Sequence Imitation The distance metric is formulated in a recurrent style where the distance is computed from the current state and conditioned on all previous states, d(o_t|o_{t-1}, \ldots, o_0). The loss function is a combination of the distance loss in Eq. 9 and the VAE-based representation learning objectives from Eq. 7 and Eq. 8, detailed in the supplementary material. This combination of sequence-based losses assists in compressing the representation while ensuring intermediate representations are informative. The loss function used to train the distance model on a positive pair of sequences is:
L_{VIRL}(o_i, o_p, \cdot) = \lambda_0 L_{SN}(o_i, o_p, \cdot) + \lambda_1 \left[ \frac{1}{T} \sum_{t=0}^{T} L_{SN}(o_{i,t}, o_{p,t}, \cdot) \right] + \lambda_2 \left[ \frac{1}{T} \sum_{t=0}^{T} L_{VAE}(o_{i,t}) + L_{VAE}(o_{p,t}) \right] + \lambda_3 \left[ L_{AE}(o_i) + L_{AE}(o_p) \right].
Where λ = {0.7, 0.1, 0.1, 0.1}. With a negative pair, the second sequence used in the VAE and AE losses would be the negative sequence.
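The weighted combination above can be written down directly; the sketch below assumes the individual loss terms have already been computed by the corresponding networks and only shows how they are mixed with the λ weights:

def virl_loss(seq_triplet_loss, frame_triplet_losses, frame_vae_losses, seq_ae_losses,
              lambdas=(0.7, 0.1, 0.1, 0.1)):
    # frame_triplet_losses and frame_vae_losses are per-timestep lists of length T;
    # seq_ae_losses holds the sequence-level AE terms for the two clips.
    l0, l1, l2, l3 = lambdas
    T = len(frame_triplet_losses)
    return (l0 * seq_triplet_loss
            + l1 * sum(frame_triplet_losses) / T
            + l2 * sum(frame_vae_losses) / T
            + l3 * sum(seq_ae_losses))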
The Siamese loss function remains the same as in Eq. 9 but the overall learning process evolves to use RNN-based deep networks. A diagram of the full model is shown in Figure 2. This model uses a time-distributed Long Short-Term Memory (LSTM). A single convolutional network conv_a is first used to transform images of the demonstration o^a to an encoding vector e^a_t. After the sequence of images is distributed through conv_a there is an encoded sequence <e^a_0, \ldots, e^a_t>; this sequence is fed into the RNN lstm_a until a final encoding h^a_t is produced. This same process is performed with a copy of the RNN lstm_a, producing h^b_t for the agent o^b. The loss is computed in a similar fashion to (Mueller & Thyagarajan, 2016) using the sequence outputs of images from the agent and another from the demonstration. The reward at each timestep is computed as r_t = \|h^a_t - h^b_t\| + \|e^a_t - e^b_t\| = \|lstm_a(conv_a(s^a_t)) - lstm_a(conv_a(s^b_t))\| + \|conv_a(s^a_t) - conv_a(s^b_t)\|. At the beginning of each episode, the RNN’s internal state is reset. The policy and value function have 2 hidden layers with 512 and 256 units, respectively. The use of additional VAE-based image and Auto Encoder (AE)-based sequence decoding losses improves the latent space conditioning and representation.
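A hedged PyTorch sketch of this recurrent siamese encoder and the per-step distance is shown below; the layer sizes, the single-channel 64×64 input and the module names are illustrative assumptions, not the exact architecture used in the paper:

import torch
import torch.nn as nn

class RecurrentSiameseEncoder(nn.Module):
    # Shared conv encoder followed by an LSTM; the agent clip and the demonstration
    # clip are passed through the same weights (siamese).
    def __init__(self, embed_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 14 * 14, embed_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(embed_dim, embed_dim, batch_first=True)

    def forward(self, frames):                      # frames: (B, T, 1, 64, 64)
        B, T = frames.shape[:2]
        e = self.conv(frames.reshape(B * T, 1, 64, 64)).reshape(B, T, -1)
        h, _ = self.lstm(e)                          # per-step recurrent encodings
        return e, h

def step_distance(enc, agent_clip, demo_clip, t):
    # ||h^a_t - h^b_t|| + ||e^a_t - e^b_t||; the reward is a (normalized) negative
    # of this quantity.
    ea, ha = enc(agent_clip)
    eb, hb = enc(demo_clip)
    return (torch.norm(ha[:, t] - hb[:, t], dim=-1)
            + torch.norm(ea[:, t] - eb[:, t], dim=-1))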
Algorithm 1 Learning Algorithm
  Initialize model parameters θ_π and θ_d
  Create experience memory D ← {}
  while not done do
    for i ∈ {0, . . . , N} do
      τ_i ← {}
      {s_t, o^e_t, o^a_t} ← env.reset()
      for t ∈ {0, . . . , T} do
        a_t ← π(·|s_t, θ_π)
        {s_{t+1}, o^e_{t+1}, o^a_{t+1}} ← env.step(a_t)
        r_t ← −d(o^e_{t+1}, o^a_{t+1}|θ_d)
        τ_{i,t} ← {s_t, o^e_t, o^a_t, a_t, r_t}
        {s_t, o^e_t, o^a_t} ← {s_{t+1}, o^e_{t+1}, o^a_{t+1}}
      end for
    end for
    D ← D ∪ {τ_0, . . . , τ_N}
    Update d(·) parameters θ_d using D
    Update policy θ_π using {τ_0, . . . , τ_N}
  end while

Unsupervised Data labelling To construct positive and negative pairs for training we make use of time information in a similar fashion to (Sermanet et al., 2017), where observations at similar times in the same sequence are often correlated and observations at different times will likely have little similarity. We compute pairs by altering one sequence and comparing this modified version to its original. Positive pairs are created by adding noise to the sequence or altering a few frames of the sequences. Negative pairs are created by shuffling one sequence or reversing it. More details are available in the supplementary material. Imitation data for 24 other tasks are also used to help condition the distance metric learning process. These include motion clips for running, backflips, frontflips, dancing, punching, kicking and jumping along with the desired motion. For details on how positive and negative pairs are created from this data, see the supplementary document.
Importantly the RL environment generates two different state representations for the agent. The first state st+1 is the internal robot pose. The second state ot+1 is the agent’s rendered view, shown in Figure 2. The rendered view is used with the distance metric to compute the similarity between the agent and the demonstration. We attempted using the visual features as the state input for the policy as well; this resulted in poor policy quality. Details of the algorithm used to train the distance metric and policy are outlined in the supplementary document Algorithm 1.
5 ANALYSIS AND RESULTS
The simulation environment used in the experiments is similar to the DeepMind Control Suite (Tassa et al., 2018). In this simulated robotics environment, the agent is learning to imitate a given reference motion. The agent’s goal is to learn a policy to actuate Proportional Derivative (PD) controllers at 30 fps to mimic the desired motion. The simulation environment provides a hard-coded reward function based on the robot’s pose that is used to evaluate the policy quality. The demonstration M the agent is learning to imitate is generated from a clip of mocap data. The mocap data is used to
animate a second robot in the simulation. Frames from the simulation are captured and used as video input to train the distance metric. The images captured from the simulation are converted to greyscale with 64× 64 pixels. We train the policy on pose data, as link distances and velocities relative to the robot’s Centre of Mass (COM). This simulation environment is new and has been created to take motion capture data and produce multi-view video data that can be used for training RL agents or generating data for computer vision tasks. The environment includes challenging and dynamic tasks for humanoid robots. Some example tasks are imitating running, jumping, and walking, shown in Figure 3 and humanoid2d detailed in the supplementary material.
3D Humanoid Robot Imitation In these simulated robotics environments the agent is learning to imitate a given reference motion of a walk, run, jump or zombie motion. A single motion demonstration is provided by the simulation environment as a cyclic motion. During learning, we include additional data from all other tasks (for the walking task this would be: walking-dynamic-speed, running, jogging, frontflips, backflips, dancing, jumping, punching and kicking) that are only used to train the distance metric. We also include data from a modified version of the tasks, walking-dynamic-speed, that has a randomly generated speed modifier ω ∈ [0.5, 2.0] that warps the demonstration timing. This additional data is used to provide a richer understanding of distances in space and time to the distance metric. The method is capable of learning policies that produce similar behaviour to the expert across a diverse set of tasks. We show example trajectories from the learned policies in Figure 3 and in the supplemental video. It takes 5−7 days to train each policy in these results on a 16 core machine with an Nvidia GTX1080 GPU.
Algorithm Analysis and Comparison To evaluate the learning capabilities and improvements of VIRL we compare against two other methods that learn a distance function in state space, GAIL and using a VAE to train an encoding and compute distances between those encodings, similar to (Nair et al., 2018), using the same method as the Siamese network in Figure 4a. We find that the VAE alone does not appear to capture the critical distances between states, possibly due to the decoding transformation complexity. Similarly, the GAIL baseline produces very jerky motion or stands still, both of which are contained in the imitation distribution. Our method that considers the temporal structure of the data learns faster and produces higher value policies.
Additionally, we create a multi-modal version of VIRL. Here we replace the bottom conv net with a dense network and learn a distance metric between agent poses and imitation video. The results of these models, along with the default manual reward function provided by the environment, are shown in Figure 4b. The multi-modal version appears to perform about equal to the vision-only model. In Figure 4b we also compare our method to a non-sequence-based model that is equivalent to Time Contrastive Network (TCN). On average VIRL achieves higher value policies. We find that using the RNN-based distance metric makes the learning process more gradual. We show this learning effect in Figure 4b, where the original manually created reward with flat feedback leads to slow initial learning.
In Figure 4c we compare the importance of the spatial \|e^a_t − e^b_t\|_2 and temporal \|h^a_t − h^b_t\|_2 representations learned by VIRL. Using the recurrent representation (temporal lstm) alone allows learning to progress quickly but can have difficulty informing the policy of how to best match the desired example. On the other hand, using only the encoding between single frames (spatial conv) slows learning due to limited reward for out-of-phase behaviour. We achieved the best results by combining the representations from these two models. The assistance of spatial rewards is also seen in Figure 4b, where the manual reward learns the slowest.
Ablation We conduct ablation studies in Figure 5a to compare the effects of data augmentation methods, network models and the use of additional data from other tasks. For the more complex humanoid3d control problems the data augmentation methods, including Early Episode Sequence Priority (EESP), increase average policy quality marginally. The use of multitask data (Figure 8c) and the additional representational losses (Figure 8a) greatly improve the method’s ability to learn. More ablation results are available in the supplementary material.
Sequence Encoding Using the learned sequence encoder, a collection of motions from different classes is processed to create a TSNE embedding of the encodings (Maaten & Hinton, 2008). In Figure 5c we plot motions both generated from the learned policy π and the expert trajectories π_E. Overlaps in specific areas of the space for similar classes across learned π and expert π_E data indicate a well-formed distance metric that does not separate expert and agent examples. There is also a separation between motion classes in the data, and the cyclic nature of the walking cycle is visible.
In this section, we have described the process followed to create and analyze VIRL. Due to a combination of data augmentation techniques, VIRL can imitate given only a single demonstration. We have shown some of the first results to learn imitative policies from video data using a recurrent net-
work. Interestingly, the method displays new learning efficiencies that are important to the method’s success by separating the imitation problem into spatial and temporal aspects. For best results, we found that the inclusion of additional regularizing losses on the recurrent siamese network, along with some multi-task supervision, was key to producing results.
6 DISCUSSION AND CONCLUSION
In this work, we have created a new method for learning imitative policies from a single demonstration. The method uses a Siamese recurrent network to learn a distance function in both space and time. This distance function, trained on noisy partially observed video data, is used as a reward function for training an RL policy. Using data from other motion styles and regularization terms, VIRL produces policies that demonstrate similar behaviour to the demonstration.
Learning a distance metric is challenging: the distance metric can compute inaccurate distances in areas of the state space it has not yet seen. This inaccuracy could imply that when the agent explores and finds truly new and promising trajectories, the distance metric computes incorrect distances. We attempt to mitigate this effect by including training data from different tasks. We believe VIRL will benefit from a more extensive collection of multi-task data and increased variation of each task. Additionally, if the distance metric confidence is available, this information could be used to reduce variance and overconfidence during policy optimization.
It is probable that learning a reward function while training adds additional variance to the policy gradient. This variance may indicate that the bias of off-policy methods could be preferred over the added variance of the on-policy methods used here. We also find it important to have a small learning rate for the distance metric. The low learning rate reduces the reward variance between data collection phases and allows learning a more accurate value function. Another approach may be to use partially observable RL methods that can learn a better value function model given a changing RNN-based reward function. Training the distance metric could benefit from additional regularization, such as constraining the KL-divergence between updates to reduce variance. Learning a sequence-based policy as well, given that the rewards are now not dependent on a single state observation, is another area for future research.
We compare our method to GAIL, but we found GAIL has limited temporal consistency. This method led to learning jerky and overactive policies. The use of a recurrent discriminator for GAIL may mitigate some of these issues and is left for future work. It is challenging to produce results better than the carefully manually crafted reward functions used by the RL simulation environments that include motion phase information in the observations (Peng et al., 2018a; 2017). However, we have shown that our method, which can compute distances in space and time, has faster initial learning. Potentially, a combination of starting with our method and following with a manually crafted reward function could lead to faster learning of high-quality policies. Still, as environments become increasingly more realistic and grow in complexity, we will need more robust methods to describe the desired behaviour we want from the agent.
Training the distance metric is a complicated balancing game. One might expect that the distance metric should be trained early and fast so that it quickly understands the difference between a good and a bad demonstration. However, learning the metric too quickly confuses the agent: the rewards can change abruptly, which causes the agent to diverge toward an unrecoverable region of policy space. Slower is better; even if the distance metric is not perfectly accurate, it may be locally or relatively reasonable, which is enough to learn a good policy. As learning continues, these two optimizations can converge together.
7 APPENDIX
This section includes additional details related to VIRL.
7.1 IMITATION LEARNING
Imitation learning is the process of training a new policy to reproduce the behaviour of some expert policy. BC is a fundamental method for imitation learning. Given an expert policy π_E, possibly represented as a collection of trajectories τ = ⟨(s_0, a_0), . . . , (s_T, a_T)⟩, a new policy π can be learned to match this trajectory using supervised learning.
\max_{\theta} \; \mathbb{E}_{\pi_E}\left[ \sum_{t=0}^{T} \log \pi(a_t|s_t, \theta_\pi) \right] \quad (5)
While this simple method can work well, it often suffers from distribution mismatch issues leading to compounding errors as the learned policy deviates from the expert’s behaviour.
7.2 INVERSE REINFORCEMENT LEARNING
Similar to BC, Inverse Reinforcement Learning (IRL) also learns to replicate some desired behaviour. However, IRL makes use of the RL environment without a defined reward function. Here we describe maximal entropy IRL (Ziebart et al., 2008). Given an expert trajectory τ = ⟨(s_0, a_0), . . . , (s_T, a_T)⟩, a policy π can be trained to produce similar trajectories by discovering a distance metric between the expert trajectory and trajectories produced by the policy π.
\max_{c \in C} \min_{\pi} \left( \mathbb{E}_{\pi}[c(s, a)] - H(\pi) \right) - \mathbb{E}_{\pi_E}[c(s, a)] \quad (6)
where c is some learned cost function and H(π) is a causal entropy term. πE is the expert policy that is represented by a collection of trajectories. IRL is searching for a cost function c that is low for the expert πE and high for other policies. Then, a policy can be optimized by maximizing the reward function rt = −c(st, at).
7.3 AUTO-ENCODER FRAMEWORK
Variational Auto-encoders Previous work shows that VAEs can learn a lower dimensional structured representation of a distribution (Kingma & Welling, 2014). A VAE consists of two parts: an encoder q_φ and a decoder p_ψ. The encoder maps states to a latent encoding z and, in turn, the decoder transforms z back to states. The model parameters for both φ and ψ are trained jointly to maximize
L_{VAE}(\phi, \psi, s) = -\beta D_{KL}(q_\phi(z|s) \,\|\, p(z)) + \mathbb{E}_{q_\phi(z|s)}[\log p_\psi(s|z)] \quad (7)
where D_{KL} is the Kullback-Leibler divergence, p(z) is some prior and β is a hyper-parameter to balance the two terms. The encoder q_φ takes the form of a diagonal Gaussian distribution q_φ = N(µ_φ(s), σ²(s)). In the case of images, the decoder p_ψ parameterizes a Bernoulli distribution over pixel values. This simple parameterization is akin to training the decoder with a cross entropy loss over normalized pixel values.
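A minimal PyTorch sketch of the two terms of Eq. 7, written as a loss to minimise (the negative of the objective) and assuming a diagonal Gaussian encoder that outputs (µ, log σ²) and a Bernoulli decoder that outputs pixel logits:

import torch
import torch.nn.functional as F

def vae_loss(x, recon_logits, mu, logvar, beta=1.0):
    # Closed-form KL(q_phi(z|s) || N(0, I)) for a diagonal Gaussian encoder,
    # plus the Bernoulli reconstruction term (binary cross-entropy over pixels in [0, 1]).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    recon = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="mean")
    return beta * kl + recon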
Sequence Auto-encoding The goal of sequence to sequence translation is to learn the conditional probability p(y_0, . . . , y_{T'}|x_0, . . . , x_T), where x = x_0, . . . , x_T and y = y_0, . . . , y_{T'} are sequences. Here we want to explicitly learn a latent variable z_{RNN} that compresses the information in x_0, . . . , x_T. An RNN can model this conditional probability by computing an encoding v of the sequence x that can, in turn, be used to condition the decoding of the sequence y (Rumelhart et al., 1985).
p(y) = \prod_{t=0}^{T'} p(y_t|\{y_0, . . . , y_{t-1}\}, v) \quad (8)
This method has been used for learning compressed representations for transfer learning (Zhu et al., 2016) and 3D shape retrieval (Zhuang et al., 2015).
7.4 DATA
The mocap data used in the created environment comes from the CMU mocap database and the SFU mocap database.
Data Augmentation and Training We apply several data augmentation methods to produce additional data for training the distance metric. Using methods analogous to the cropping and warping methods popular in computer vision (He et al., 2015), we randomly crop sequences and randomly warp the demonstration timing. The cropping is performed by both initializing the agent to random poses from the demonstration motion and terminating episodes when the agent’s head, hands or torso contact the ground. As the agent improves, the average length of each episode increases and so too will the average length of the cropped window. The motion warping is done by replaying the demonstration motion at different speeds. Two additional methods influence the data distribution. The first method is Reference State Initialization (RSI) (Peng et al., 2018a), where the initial state of the agent and expert is randomly selected from the expert demonstration. With this property, the environment can also be thought of as a form of memory replay. The environment allows the agent to go back to random points in the demonstration as if replaying a remembered demonstration. The second is EESP, where the probability that a sequence x is cropped starting at index i is p(i) ∝ len(x) − i, increasing the likelihood of starting earlier in the episode.
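A small sketch of the EESP start-index sampling, assuming the probability is simply normalized so that it decreases linearly with the start index:

import numpy as np

def eesp_start_index(seq_len, rng=None):
    # p(i) proportional to (seq_len - i): earlier crop starts are more likely.
    rng = rng or np.random.default_rng()
    weights = np.arange(seq_len, 0, -1, dtype=float)   # seq_len, seq_len-1, ..., 1
    return int(rng.choice(seq_len, p=weights / weights.sum()))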
7.5 TRAINING DETAILS
The learning simulations are trained using Graphics Processing Units (GPUs). The simulation is not only simulating the interaction physics of the world but also rendering the simulation scene to capture video observations. On average, it takes 3 days to execute a single training simulation. The process of rendering and copying the images from the GPU is one of the most expensive operations in VIRL. We collect 2048 data samples between training rounds. The batch size for Trust Region Policy Optimization (TRPO) is 2048. The KL term is 0.5.
The simulation environment includes several different tasks that are represented by a collection of motion capture clips to imitate. These tasks come from the tasks created in the DeepMimic works (Peng et al., 2018a). We include all humanoid tasks in this dataset.
In Algorithm 1 we include an outline of the algorithm used for the method. The simulation environment produces three types of observations: s_{t+1}, the agent’s proprioceptive pose; s^v_{t+1}, the image observation of the agent; and m_{t+1}, the image-based observation of the expert demonstration. The images are 64 × 64.
7.6 DISTANCE FUNCTION TRAINING
Our Siamese training loss consists of
L_{SN}(s_i, s_p, s_n) = y \cdot \|f(s_i) - f(s_p)\| + (1 - y) \cdot \max(\rho - \|f(s_i) - f(s_n)\|, 0), \quad (9)
where y = 1 for a positive pair (s_i, s_p), whose distance should be minimal, and y = 0 for a negative pair (s_i, s_n), whose distance should be maximal. The margin ρ is used as an attractor or anchor to pull the negative example output away from s_i and push values towards a 0 to 1 range. f(·) computes the output from the underlying network. The distance between two states is calculated as d(s, s′) = \|f(s) − f(s′)\| and the reward as r(s, s′) = −d(s, s′). Data used to train the Siamese network is a combination of trajectories τ = ⟨s_0, . . . , s_T⟩ generated from simulating the agent in the environment and the expert demonstration. For our recurrent model the same loss is used; however, the states s_p, s_n, s_i are sequences. During RL training we compute a distance given the sequence of states observed so far in the episode. This method allows us to train a distance function in state space where all we need to provide is labels that denote whether two states, or sequences, are similar or not.
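For a single pair, Eq. 9 can be written as the following NumPy sketch (illustrative only; f_i and f_other denote the embeddings f(·) of the two states or sequences):

import numpy as np

def siamese_loss(f_i, f_other, y, rho=1.0):
    # y = 1: pull a positive pair together; y = 0: push a negative pair at least
    # rho apart via the hinge on the margin (Eq. 9).
    dist = float(np.linalg.norm(f_i - f_other))
    return y * dist + (1 - y) * max(rho - dist, 0.0)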
In Figure 6b we show the training curve for the recurrent siamese network. The model learns smoothly, considering that the training data used is continually changing as the RL agent explores. In Figure 6a the learning curve for the siamese RNN is shown after performing pretraining. We can see the overfitting portion that occurs during RL training. This overfitting can lead to poor reward prediction during the early phase of training.
It can be challenging to train a sequence-based distance function. One particular challenge is training the distance function to be accurate across the space of possible states. We found a good strategy was to focus on beginning-of-episode data. When the model is not accurate on states it saw earlier in the episode, it may never learn how to get into good states later that the distance function understands better. Therefore, when constructing batches to train the RNN, we give a higher probability of starting earlier in episodes. We also give a higher probability to shorter sequences. As the agent gets better, average episode length increases, and so too will the randomly selected sequence windows.
7.7 DISTANCE FUNCTION USE
We find it helpful to normalize the distance metric outputs using r = exp(r² · w_d), where w_d = −5.0 scales the filtering width. Early in training the distance metric often produces large, noisy values. Also, the RL method regularly updates reward scaling statistics; the initial high-variance data reduces the significance of better distance metric values produced later on by scaling them to small numbers. The improvement from using this normalized reward is shown in Figure 7a.
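The normalization amounts to a simple squashing function; a one-line sketch:

import numpy as np

def normalise_reward(distance, w_d=-5.0):
    # r = exp(d^2 * w_d): a distance of 0 maps to reward 1, while large noisy
    # distances are squashed towards 0.
    return float(np.exp(distance ** 2 * w_d))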
8 POSITIVE AND NEGATIVE EXAMPLES
We use two methods to generate positive and negative examples. The first method is similar to TCN, where we can assume that sequences that overlap more in time are more similar. For each episode two sequences are generated, one for the agent and one for the imitation motion. Here we list the methods used to alter sequences for positive pairs.
1. Adding Gaussian noise to each state in the sequence (mean = 0 and variance = 0.02)
2. Out-of-sync versions, where the first state is removed from the first sequence and the last state from the second sequence
3. Duplicating the first state in either sequence
4. Duplicating the last state in either sequence
We alter sequences for negative pairs by
1. Reversing the ordering of the second sequence in the pair.
2. Randomly picking a state out of the second sequence and replicating it to be as long as the first sequence.
3. Randomly shuffling one sequence.
4. Randomly shuffling both sequences.
The second method we use to create positive and negative examples is by including data for additional classes of motion. These classes denote different task types. For the humanoid3d environment, we generate data for walking-dynamic-speed, running, backflipping and frontflipping. Pairs from the same tasks are labelled as positive, and pairs from different classes are negative.
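The perturbations above can be sketched as simple sequence transforms; the noise scale follows the variance of 0.02 quoted earlier, and the helper names are illustrative rather than taken from the implementation:

import random
import numpy as np

def make_positive_pair(seq):
    # Positive pair: add Gaussian noise (mean 0, variance 0.02) to every state.
    noisy = [np.asarray(s) + np.random.normal(0.0, np.sqrt(0.02), np.shape(s)) for s in seq]
    return list(seq), noisy, 1

def make_negative_pair(seq):
    # Negative pair: reverse the sequence, or shuffle a copy of it.
    altered = list(seq)[::-1] if random.random() < 0.5 else random.sample(list(seq), len(seq))
    return list(seq), altered, 0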
8.1 ADDITIONAL ABLATION ANALYSIS
8.2 RL ALGORITHM ANALYSIS
It is not clear which RL algorithm may work best for this type of imitation problem. A number of RL algorithms were evaluated on the humanoid2d environment (Figure 9a). Surprisingly, TRPO (Schulman et al., 2015) did not work well in this framework; considering it has a controlled policy gradient step, we thought it would reduce the overall variance. We found that Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015) worked rather well. This result could be related to having a changing reward function, in that if the changing rewards are considered off-policy data, they can be easier to learn from. This can be seen in Figure 9b, where DDPG is best at estimating the future discounted rewards in the environment. We also tried Continuous Actor Critic Learning Automaton (CACLA) (Van Hasselt, 2012) and Proximal Policy Optimization (PPO) (Schulman et al., 2017); we found that PPO did not work particularly well on this task; this could also be related to added variance.
8.3 ADDITIONAL IMITATION RESULTS
Our first experiments evaluate the method's ability to learn a complex cyclic motion for a simulated humanoid robot given a single motion demonstration, similar to (Peng & van de Panne, 2017), but using video instead. The agent is able to learn a robust walking gait even though it is only given noisy partial observations of a demonstration (Figure 10). | 1. What is the focus of the paper in terms of the problem addressed and the proposed solution?
2. What are the strengths of the paper, particularly in terms of the use of previous techniques?
3. What are the weaknesses of the paper regarding its claims and experiments?
4. How does the reviewer assess the novelty and impact of the proposed method?
5. Are there any concerns regarding the limitation of the experimental results and the lack of comparison with recent state-of-the-art baselines? | Review | Review
This paper presents an imitation learning method that deploys previously well-studied techniques such as siamese networks, inverse RL, learning distance functions for IRL and tracking.
+ the paper studies an important problem of IL using visual data.
+ I found the ablation studies in the appendix quite useful in understanding the efficiency of the proposed method.
-In terms of novelty, the proposed approach is a combination of several past works so the technical novelty is limited. Additionally, it is not clear how impactful the proposed method can be given that it is only tested on a synthetic domain which is the same as the train domain. So, from the current experimental results it is not clear if this approach would be effective to be applied in a real system (e.g. robots) on the practical side.
-There are not enough evaluations done to compare with the most up-to-date state-of-the-art baselines. The evaluations are done on just a single synthetic domain with a single character. Therefore, the train and test videos are very similar.
Partially Observable Imitation Without Actions For Learning from Demonstration (LfD) problems the goal is to replicate the behaviour of expert πE behaviour. Unlike the typical setting for humans learning to imitate, LfD often assumes the availability of expert action and observation data. Instead, in this work, we focus on the case where only noisy actionless observations of the expert are available. Recent work uses Behavioural Cloning (BC) to learn an inverse dynamics model to estimate the actions used via maximum-likelihood estimation (Torabi et al., 2018). Still, BC often needs many expert examples and tends to suffer from state distribution mismatch issues between the expert policy and student (Ross et al., 2011). Work in (Merel et al., 2017) proposes a system based on GAIL that can learn a policy from a partial observation of the demonstration. In this work, the discriminator’s state input is a customized version of the expert’s state and does not take into account the demonstration’s sequential nature. The work in (Wang et al., 2017) provides a more robust GAIL framework along with a new model to encode motions for few-shot imitation. This model uses an Recurrent Neural Network (RNN) to encode a demonstration but uses expert state and action observations. In our work, the agent is limited to only a partial visual observation as a demonstration. Additional works learn implicit models of distance (Yu et al., 2018; Pathak et al., 2018; Finn et al., 2017; Sermanet et al., 2017), none of these explicitly learn a sequential model considering the demonstration timing. An additional version of GAIL, infoGAIL (Li et al., 2017), included pixel based inputs. Goals can be specified using the latent space from a Variational Auto Encoder (VAE) (Nair et al., 2018). Our work extends this VAE loss using sequence data to train a more temporally consistent latent representation. Recent work (Peng et al., 2018b) has a 2D control example of learning from video data. We show results on more complex 3D tasks and additionally model distance in time. In contrast, here we train a recurrent siamese model that can be used to en-
able curriculum learning and allow for computing distances even when the agent and demonstration are out of sync.
3 PRELIMINARIES
In this section, we outline the general RL framework and specific formulations for RL that we rely upon when developing our method in Section 4.
Reinforcement Learning Using the RL framework formulated with a Markov Dynamic Process (MDP): at every time step t, the world (including the agent) exists in a state st ∈ S, wherein the agent is able to perform actions at ∈ A, sampled from a policy π(at|st) which results in a new state st+1 ∈ S and reward rt according to the transition probability function T (rt, st+1|st, at). The policy is optimize to maximize the future discounted reward
J(π) = Er0,...,rT [ T∑ t=0 γtrt ] , (1)
where T is the max time horizon, and γ is the discount factor, indicating the planning horizon length. Inverse reinforcement learning refers to the problem of extracting a reward function from observed optimal behavior Ng et al. (2000). In contrast, in our approach we learn a distance that works across a collection of behaviours. Further, we do not assume the example data to be optimal. See Appendix 7.2 for further discussion of the connections of our work to inverse reinforcement learning.
GAIL VIRL is similar to the GAIL framework (Ho & Ermon, 2016) which uses a Generative Advasarial Network (GAN) (Goodfellow et al., 2014), where the discriminator is trained with positive examples from the expert trajectories and negative examples from the policy. The generator is a combination of the environment, policy and current state visitation probability induced by the policy pπ(s).
min θπ max θφ EπE [log(D(s, a|θφ))] + Eπθπ [log(1−D(s, a|θφ))] (2)
In this framework the discriminator provides rewards for the RL policy to optimize, as the probability of a state generated by the policy being in the distribution rt = D(st, at|θφ). While this framework has been shown to work in practice, this dual optimization is often unstable. In the next section we will outline our method for learning a more stable distance based reward over sequences of images.
4 CONCEPTUAL DISTANCE-BASED REINFORCEMENT LEARNING
Our approach is aimed at facilitating imitation learning within an underlying RL formulation over partially observed observations o. Unlike the situation in GAIL, we do not rely on having accces to state, s and action, a information – our idea is to minimize a function that determintes the distance between two sequences observations, o, one from the desired example behavior oe, and another from the current agent behavior oa. We can then define the reward used within an underlying RL framework in terms of a distance function D, such that
rt̂(o e, oa) = −D(oe, oa, t̂) = t̂∑ t=0 −d(oet , oat ), (3)
where in our setting here D(oe, oa, t̂) models a distance between video clips from time t = 0 to t̂.
A simple formulation of the approach above can be overly restrictive on sequence timing. While these distances can serve as RL rewards, they often provide insufficient signal for the policy to learn a good imitative behaviour, especially when the agent only has partial observations of the expert. We can see an example of this in Figure 1a were starting at t5 the agent (in red) begins to exhibit behaviour that is similar to the expert (in blue) yet the spatial distance indicates that this state is further away from the desired behaviour than at t4.
To encourge the agent to match any part of the expert behaviour we propose decomposing the distance into two distances, by adding a type of temporal distance shown in green. To compute a time
independant distance we can find the state in the expert sequence that is closest to the agent’s current state argmin t̂∈T d(oet̂ , o a t ) and use it in the following distance measure
dT (oe, oa, t̂, t) = . . .+ d(oe t̂−1, o a t−1) + d(o e t̂ , oat ) + d(o e t̂+1 , oat+1) + . . . (4)
Using only a single state time-alined may lead to the agent fixating on mataching a single state in the expert demonstration. To avoid this the neighbouring states given sequence timing readjustment are used in the distance computation. This framework allows the agent to be rewarded for exhibiting behaviour that matches any part of the experts demonstration. The better is learns to match parts of the expert demonstration the more reward it is given. The previous spatial distance will then help the agent learn to sync up its timing with the deomonstration. Next we describe how we learn both of these distances.
Distance Metric Learning Many methods can be used to learn a distance function in state-space. Here we use a Siamese network f(oe, oa) with a triplet loss over time and task data (Chopra et al., 2005). The triplet loss is used to minimize the distance between two examples that are positive, very similar or from the same class, and maximize the distance between pairs of examples that are known to be unrelated. For more details see supplementary document.
Sequence Imitation The distance metric is formulated in a recurrent style where the distance is computed from the current state and conditioned on all previous states d(ot|ot−1, . . . , o0). The loss function is a combination of distance Eq. 9 and VAE-based representation learning objectives from Eq. 7 and Eq. 8, detailed in the supplementary material. This combination of sequencebased losses assists in compressing the representation while ensuring intermediate representations are informative. The loss function used to train the distance model on a positive pair of sequences is:
LV IRL(oi, op, ·) =λ0LSN (oi, op, ·) + λ1[ 1
T T∑ t=0 LSN (oi,t, op,t, ·)]+
λ2[ 1
T T∑ t=0 LV AE(oi,t) + LV AE(op,t)]+
λ3[LAE(oi) + LAE(op)].
Where λ = {0.7, 0.1, 0.1, 0.1}. With a negative pair, the second sequence used in the VAE and AE losses would be the negative sequence.
The Siamese loss function remains the same as in Eq. 9 but the overall learning process evolves to use an RNN-based deep networks. A diagram of the full model is shown in Figure 2. This model uses a time distributed Long Short-Term Memory (LSTM). A single convolutional network conva is first used to transform images of the demonstration oa to an encoding vector eat . After the sequence of images is distributed through conva there is an encoded sequence < ea0 , . . . , e a t >, this sequence is fed into the RNN lstma until a final encoding is produced hat . This same process is performed for a copy of the RNN lstma producing hbt for the agent ob. The loss is computed in a similar fashion to (Mueller & Thyagarajan, 2016) using the sequence outputs of images from the agent and another from the demonstration. The reward at each timestep is computed as rt =
||hat −hbt ||+ ||eat − ebt || = ||lstma(conva(sat ))− lstma(conva(sbt))||+ ||conva(sat )− conva(sbt)||. At the beginning of each episode, the RNN’s internal state is reset. The policy and value function have 2 hidden layers with 512 and 256 units, respectively. The use of additional VAE-based image and Auto Encoder (AE)-based sequence decoding losses improve the latent space conditioning and representation.
Algorithm 1 Learning Algorithm Initialize model parameters θπ and θd Create experience memory D ← {} while not done do
for i ∈ {0, . . . N} do τi ← {} {st, oet , oat } ← env.reset() for t ∈ {0, . . . , T} do at ← π(·|st, θπ) {st+1, oet+1, oat+1} ← env.step(at) rt ← −d(oet+1, oat+1|θd) τi,t ← {st, oet , oat , at, rt} {st, oet , oat } ← {st+1, oet+1, oat+1}
end for end for D ← D ⋃ {τ0, . . . , τN} Update d(·) parameters θd using D Update policy θπ using {τ0, . . . , τN}
end while Unsupervised Data labelling To construct positive and negative pairs for training we make use of time information in a similar fashion to (Sermanet et al., 2017), where observations at similar times in the same sequence are often correlated and observations at different times will likely have little similarity. We compute pairs by altering one sequence and comparing this modified version to its original. Positive pairs are created by adding noise to the sequence or altering a few frames of the sequences. Negative pairs are created by shuffling one sequence or reversing it. More details are available in the supplementary material. Imitation data for 24 other tasks are also used to help condition the distance metric learning process. These include motion clips for running, backflips, frontflips, dancing, punching, kicking and jumping along with the desired motion. For details on how positive and negative pairs are created from this data, see the supplementary document.
Importantly the RL environment generates two different state representations for the agent. The first state st+1 is the internal robot pose. The second state ot+1 is the agent’s rendered view, shown in Figure 2. The rendered view is used with the distance metric to compute the similarity between the agent and the demonstration. We attempted using the visual features as the state input for the policy as well; this resulted in poor policy quality. Details of the algorithm used to train the distance metric and policy are outlined in the supplementary document Algorithm 1.
5 ANALYSIS AND RESULTS
The simulation environment used in the experiments is similar to the DeepMind Control Suite (Tassa et al., 2018). In this simulated robotics environment, the agent is learning to imitate a given reference motion. The agent’s goal is to learn a policy to actuate Proportional Derivative (PD) controllers at 30 fps to mimic the desired motion. The simulation environment provides a hard-coded reward function based on the robot’s pose that is used to evaluate the policy quality. The demonstration M the agent is learning to imitate is generated from a clip of mocap data. The mocap data is used to
animate a second robot in the simulation. Frames from the simulation are captured and used as video input to train the distance metric. The images captured from the simulation are converted to greyscale with 64× 64 pixels. We train the policy on pose data, as link distances and velocities relative to the robot’s Centre of Mass (COM). This simulation environment is new and has been created to take motion capture data and produce multi-view video data that can be used for training RL agents or generating data for computer vision tasks. The environment includes challenging and dynamic tasks for humanoid robots. Some example tasks are imitating running, jumping, and walking, shown in Figure 3 and humanoid2d detailed in the supplementary material.
3D Humanoid Robot Imitation In these simulated robotics environments the agent is learning to imitate a given reference motion of a walk, run, jump or zombie motion. A single motion demonstration is provided by the simulation environment as a cyclic motion. During learning, we include additional data from all other tasks for the walking task this would be: walking-dynamic-speed, running, jogging, frontflips, backflips, dancing, jumping, punching and kicking) that are only used to train the distance metric. We also include data from a modified version of the tasks that has a randomly generated speed modifier ω ∈ [0.5, 2.0] walking-dynamic-speed, that warps the demonstration timing. This additional data is used to provide a richer understanding of distances in space and time to the distance metric. The method is capable of learning policies that produce similar behaviour to the expert across a diverse set of tasks. We show example trajectories from the learned policies in Figure 3 and in the supplemental Video. It takes 5− 7 days to train each policy in these results on a 16 core machine with an Nvidia GTX1080 GPU.
Algorithm Analysis and Comparison To evaluate the learning capabilities and improvements of VIRL we compare against two other methods that learn a distance function in state space, GAIL and using a VAE to train an encoding and compute distances between those encodings, similar to (Nair et al., 2018), using the same method as the Siamese network in Figure 4a. We find that the VAE alone does not appear to capture the critical distances between states, possibly due to the decoding transformation complexity. Similarly, the GAIL baseline produces very jerky motion or stands still, both of which are contained in the imitation distribution. Our method that considers the temporal structure of the data learns faster and produces higher value policies.
Additionally, we create a multi-modal version of VIRL. Here we replace the bottom conv net with a dense network and learn a distance metric between agent poses and imitation video. The results of these models, along with the default manual reward function provided by the environment, are shown in Figure 4b. The multi-modal version appears to perform about equal to the vision-only modal. In Figure 4b we also compare our method to a non-sequence-based model that is equivalent to Time Contrastive Network (TCN). On average VIRL achieves higher value policies. We find that using the RNN-based distance metric makes the learning process more gradual. We show this learning effect in Figure 4b, where the original manually created reward with flat feedback leads to slow initial learning.
In Figure 4c we compare the importance of the spatial ||eat −ebt ||2 and temporal ||hat −hbt ||2 representations learned by VIRL. Using the recurrent representation (temporal lstm) alone allows learning to progress quickly but can have difficulty informing the policy of how to best match the desired example. On the other hand, using only the encoding between single frames (spatial conv) slows learning due to limited reward for out-of-phase behaviour. We achieved the best results by combining the representations from these two models. The assistance of spatial rewards is also seen in Figure 4b, where the manual reward learns the slowest.
Ablation We conduct ablation studies in Figure 5a to compare the effects of data augmentation methods, network models and the use of additional data from other tasks. For the more complex humanoid3d control problems the data augmentation methods, including Early Episode Sequence Priority (EESP), increases average policy quality marginally. The use of mutlitask data Figure 8c and the additional representational losses Figure 8a greatly improve the methods ability to learn. More ablation results are available in the supplementary material.
Sequence Encoding Using the learned sequence encoder a collection of motions from different classes are processed to create a TSNE embedding of the encodings (Maaten & Hinton, 2008). In Figure 5c we plot motions both generated from the learned policy π and the expert trajectories
πE . Overlaps in specific areas of the space for similar classes across learned π and expert πE data indicate a well-formed distance metric that does not sperate expert and agent examples. There is also a separation between motion classes in the data, and the cyclic nature of the walking cycle is visible.
In this section, we have described the process followed to create and analyze VIRL. Due to a combination of data augmentation techniques, VIRL can imitate given only a single demonstration. We have shown some of the first results to learn imitative policies from video data using a recurrent net-
work. Interestingly, the method displays new learning efficiencies that are important to the method success by separating the imitation problem into spatial and temporal aspects. For best results, we found that the inclusion of additional regularizing losses on the recurrent siamese network, along with some multi-task supervision, was key to producing results.
6 DISCUSSION AND CONCLUSION
In this work, we have created a new method for learning imitative policies from a single demonstration. The method uses a Siamese recurrent network to learn a distance function in both space and time. This distance function, trained on noisy partially observed video data, is used as a reward function for training an RL policy. Using data from other motion styles and regularization terms, VIRL produces policies that demonstrate similar behaviour to the demonstration.
Learning a distance metric is enigmatic, the distance metric can compute inaccurate distances in areas of the state space it has not yet seen. This inaccuracy could imply that when the agent explores and finds truly new and promising trajectories, the distance metric computes incorrect distances. We attempt to mitigate this effect by including training data from different tasks. We believe VIRL will benefit from a more extensive collection of multi-task data and increased variation of each task. Additionally, if the distance metric confidence is available, this information could be used to reduce variance and overconfidence during policy optimization.
It is probable learning a reward function while training adds additional variance to the policy gradient. This variance may indicate that the bias of off-policy methods could be preferred over the added variance of on-policy methods used here. We also find it important to have a small learning rate for the distance metric. The low learning rate reduces the reward variance between data collection phases and allows learning a more accurate value function. Another approach may be to use partially observable RL that can learn a better value function model given a changing RNN-based
reward function. Training the distance metric could benefit from additional regularization such as constraining the kl-divergence between updates to reduce variance. Learning a sequence-based policy as well, given that the rewards are now not dependent on a single state observation is another area for future research.
We compare our method to GAIL, but we found that GAIL has limited temporal consistency, leading to jerky and overactive policies. The use of a recurrent discriminator for GAIL may mitigate some of these issues and is left for future work. It is challenging to produce results better than the carefully manually crafted reward functions used by the RL simulation environments, which include motion phase information in the observations (Peng et al., 2018a; 2017). However, we have shown that our method, which computes distances in space and time, has faster initial learning. Potentially, starting with our method and following with a manually crafted reward function could lead to faster learning of high-quality policies. Still, as environments become increasingly realistic and grow in complexity, we will need more robust methods to describe the desired behaviour we want from the agent.
Training the distance metric is a complicated balancing game. One might expect that the distance metric should be trained early and quickly so that it rapidly understands the difference between a good and a bad demonstration. However, rapid learning confuses the agent: rewards can change abruptly, which causes the agent to diverge toward an unrecoverable region of policy space. Slower is better; even if the distance metric is not globally accurate, it may be locally or relatively reasonable, which is enough to learn a good policy. As learning continues, these two optimizations can converge together.
7 APPENDIX
This section includes additional details related to VIRL.
7.1 IMITATION LEARNING
Imitation learning is the process of training a new policy to reproduce the behaviour of some expert policy. BC is a fundamental method for imitation learning. Given an expert policy $\pi_E$, possibly represented as a collection of trajectories $\tau = \langle (s_0, a_0), \ldots, (s_T, a_T) \rangle$, a new policy $\pi$ can be learned to match these trajectories using supervised learning.
$$\max_{\theta} \; \mathbb{E}_{\pi_E}\left[\sum_{t=0}^{T} \log \pi(a_t \mid s_t, \theta_\pi)\right] \qquad (5)$$
While this simple method can work well, it often suffers from distribution mismatch issues leading to compounding errors as the learned policy deviates from the expert’s behaviour.
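As a concrete illustration, a minimal behavioural-cloning step implementing Eq. (5) might look like the following PyTorch sketch; the state/action dimensions, network sizes, learning rate, and the Gaussian policy parameterization are illustrative assumptions rather than details of our implementation.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions; the paper does not prescribe a specific BC setup.
STATE_DIM, ACTION_DIM = 34, 8

# Gaussian policy with a learned diagonal covariance: pi(a|s) = N(mu_theta(s), sigma^2 I).
policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, ACTION_DIM))
log_sigma = torch.zeros(ACTION_DIM, requires_grad=True)
optimizer = torch.optim.Adam(list(policy.parameters()) + [log_sigma], lr=1e-3)

def bc_step(expert_states, expert_actions):
    """One supervised step maximizing sum_t log pi(a_t | s_t), i.e. Eq. (5)."""
    mu = policy(expert_states)
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = dist.log_prob(expert_actions).sum(dim=-1)  # log-likelihood per (s, a) pair
    loss = -log_prob.mean()                               # maximize likelihood = minimize NLL
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random placeholder data standing in for expert trajectories.
states, actions = torch.randn(256, STATE_DIM), torch.randn(256, ACTION_DIM)
print(bc_step(states, actions))
```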
7.2 INVERSE REINFORCEMENT LEARNING
Similar to BC, Inverse Reinforcement Learning (IRL) also learns to replicate some desired behaviour. However, IRL makes use of the RL environment without a defined reward function. Here we describe maximum entropy IRL (Ziebart et al., 2008). Given an expert trajectory $\tau = \langle (s_0, a_0), \ldots, (s_T, a_T) \rangle$, a policy $\pi$ can be trained to produce similar trajectories by discovering a distance metric between the expert trajectory and trajectories produced by the policy $\pi$.
$$\max_{c \in C} \; \min_{\pi} \big(\mathbb{E}_{\pi}[c(s, a)] - H(\pi)\big) - \mathbb{E}_{\pi_E}[c(s, a)] \qquad (6)$$
where $c$ is some learned cost function and $H(\pi)$ is a causal entropy term. $\pi_E$ is the expert policy, represented by a collection of trajectories. IRL searches for a cost function $c$ that is low for the expert $\pi_E$ and high for other policies. Then, a policy can be optimized by maximizing the reward function $r_t = -c(s_t, a_t)$.
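A minimal sketch of this idea, with a small feedforward cost network updated adversarially against the current policy's data, is shown below; the architecture, learning rate, and the simplified handling of the entropy term (left to the policy optimizer) are placeholder choices and not details of a specific IRL implementation.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 34, 8  # placeholder dimensions

# Learned cost c(s, a) over concatenated state-action pairs.
cost_net = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(cost_net.parameters(), lr=1e-4)

def cost_update(expert_sa, policy_sa):
    """Outer max over c in Eq. (6): lower the cost on expert pairs, raise it on policy pairs."""
    loss = cost_net(expert_sa).mean() - cost_net(policy_sa).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def reward(state, action):
    """The policy is then trained with an RL objective using r_t = -c(s_t, a_t)."""
    with torch.no_grad():
        return -cost_net(torch.cat([state, action], dim=-1)).squeeze(-1)

# Usage with placeholder (s, a) batches.
expert_sa = torch.randn(128, STATE_DIM + ACTION_DIM)
policy_sa = torch.randn(128, STATE_DIM + ACTION_DIM)
cost_update(expert_sa, policy_sa)
r = reward(torch.randn(1, STATE_DIM), torch.randn(1, ACTION_DIM))
```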
7.3 AUTO-ENCODER FRAMEWORK
Variational Auto-encoders Previous work shows that VAEs can learn a lower-dimensional structured representation of a distribution (Kingma & Welling, 2014). A VAE consists of two parts: an encoder $q_\phi$ and a decoder $p_\psi$. The encoder maps states to a latent encoding $z$, and in turn the decoder transforms $z$ back to states. The model parameters $\phi$ and $\psi$ are trained jointly to maximize
$$\mathcal{L}_{VAE}(\phi, \psi, s) = -\beta \, D_{KL}\big(q_\phi(z \mid s) \,\|\, p(z)\big) + \mathbb{E}_{q_\phi(z \mid s)}\big[\log p_\psi(s \mid z)\big] \qquad (7)$$
where $D_{KL}$ is the Kullback-Leibler divergence, $p(z)$ is some prior, and $\beta$ is a hyper-parameter that balances the two terms. The encoder $q_\phi$ takes the form of a diagonal Gaussian distribution $q_\phi = \mathcal{N}(\mu_\phi(s), \sigma^2_\phi(s))$. In the case of images, the decoder $p_\psi$ parameterizes a Bernoulli distribution over pixel values. This simple parameterization is akin to training the decoder with a cross-entropy loss over normalized pixel values.
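The sketch below shows one possible instantiation of Eq. (7) for flattened 64 × 64 frames; the layer sizes, latent dimension, and β value are illustrative assumptions rather than the architecture used in our experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, BETA = 32, 1.0  # hypothetical latent size and beta weight

class VAE(nn.Module):
    """Minimal VAE over flattened 64x64 frames; the architecture is illustrative only."""
    def __init__(self, input_dim=64 * 64):
        super().__init__()
        self.enc = nn.Linear(input_dim, 256)
        self.mu = nn.Linear(256, LATENT)
        self.log_var = nn.Linear(256, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, input_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization trick
        return self.dec(z), mu, log_var

def vae_loss(x, recon_logits, mu, log_var, beta=BETA):
    """Negative of Eq. (7): Bernoulli reconstruction term plus beta-weighted KL to a unit Gaussian."""
    recon = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl

# Usage with a placeholder batch of normalized pixel values.
model = VAE()
x = torch.rand(8, 64 * 64)
recon_logits, mu, log_var = model(x)
loss = vae_loss(x, recon_logits, mu, log_var)
```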
Sequence Auto-encoding The goal of sequence-to-sequence translation is to learn the conditional probability $p(y_0, \ldots, y_{T'} \mid x_0, \ldots, x_T)$, where $x = x_0, \ldots, x_T$ and $y = y_0, \ldots, y_{T'}$ are sequences. Here we want to explicitly learn a latent variable $z_{RNN}$ that compresses the information in $x_0, \ldots, x_T$. An RNN can model this conditional probability by computing an encoding $v$ of the sequence $x$ that can, in turn, be used to condition the decoding of the sequence $y$ (Rumelhart et al., 1985):
$$p(y) = \prod_{t=0}^{T'} p(y_t \mid \{y_0, \ldots, y_{t-1}\}, v) \qquad (8)$$
This method has been used for learning compressed representations for transfer learning (Zhu et al., 2016) and 3D shape retrieval (Zhuang et al., 2015).
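A minimal GRU-based sequence auto-encoder along these lines could be sketched as follows; the hidden size, the use of GRUs, and the teacher-forced decoder input are assumptions made for the example, not a description of our exact architecture.

```python
import torch
import torch.nn as nn

STATE_DIM, HIDDEN = 34, 128  # placeholder dimensions

class SeqAutoEncoder(nn.Module):
    """Encoder compresses x_0..x_T into an encoding v; decoder predicts y conditioned on v (Eq. 8)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(STATE_DIM, HIDDEN, batch_first=True)
        self.decoder = nn.GRU(STATE_DIM, HIDDEN, batch_first=True)
        self.readout = nn.Linear(HIDDEN, STATE_DIM)

    def forward(self, x, y_shifted):
        _, v = self.encoder(x)               # v: final hidden state summarizing x_0..x_T
        out, _ = self.decoder(y_shifted, v)  # decode conditioned on v and previous targets
        return self.readout(out), v

# Usage with placeholder batches of 20-step state sequences.
model = SeqAutoEncoder()
x = torch.randn(4, 20, STATE_DIM)
y_prev = torch.randn(4, 20, STATE_DIM)
predictions, encoding = model(x, y_prev)
```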
7.4 DATA
The motion capture data used in the created environment comes from the CMU mocap database and the SFU mocap database.
Data Augmentation and Training We apply several data augmentation methods to produce additional data for training the distance metric. Using methods analogous to the cropping and warping methods popular in computer vision (He et al., 2015), we randomly crop sequences and randomly warp the demonstration timing. The cropping is performed both by initializing the agent to random poses from the demonstration motion and by terminating episodes when the agent's head, hands, or torso contact the ground. As the agent improves, the average length of each episode increases, and so too will the average length of the cropped window. The motion warping is done by replaying the demonstration motion at different speeds. Two additional methods influence the data distribution. The first is Reference State Initialization (RSI) (Peng et al., 2018a), where the initial state of the agent and expert is randomly selected from the expert demonstration. With this property, the environment can also be thought of as a form of memory replay: it allows the agent to go back to random points in the demonstration as if replaying a remembered demonstration. The second is EESP, where the probability that a sequence $x$ is cropped starting at index $i$ is $p(i) = \frac{\mathrm{len}(x) - i}{\sum_i i}$, increasing the likelihood of starting earlier in the episode.
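A rough sketch of the cropping and warping augmentations is given below; the weighting used for the EESP start index is one plausible reading of $p(i)$ (weights proportional to $\mathrm{len}(x) - i$), and the resampling-based time warp is a simplification of replaying the motion at different speeds.

```python
import numpy as np

def eesp_start_index(length, rng=np.random):
    """Sample a crop start index with probability decreasing in i (Early Episode Sequence Priority).
    Weights proportional to (length - i) are an assumption about the normalization of p(i)."""
    weights = length - np.arange(length)
    return rng.choice(length, p=weights / weights.sum())

def warp_demo(demo, speed):
    """Replay a demonstration at a different speed by resampling frame indices (simple time warp)."""
    idx = np.round(np.arange(0, len(demo), speed)).astype(int)
    return demo[np.clip(idx, 0, len(demo) - 1)]

# Usage with a placeholder demonstration of 120 frames of 34-D states.
demo = np.random.randn(120, 34)
start = eesp_start_index(len(demo))
warped = warp_demo(demo[start:], speed=1.25)  # crop, then warp the timing
```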
7.5 TRAINING DETAILS
The learning simulations are trained using Graphics Processing Units (GPUs). The simulation not only simulates the interaction physics of the world but also renders the simulation scene to capture video observations. On average, it takes 3 days to execute a single training simulation. The process of rendering and copying the images from the GPU is one of the most expensive operations in VIRL. We collect 2048 data samples between training rounds. The batch size for Trust Region Policy Optimization (TRPO) is 2048. The KL term is 0.5.
The simulation environment includes several different tasks, each represented by a collection of motion capture clips to imitate. These tasks come from those created in the DeepMimic work (Peng et al., 2018a). We include all humanoid tasks from this dataset.
In Algorithm 1 we include an outline of the algorithm used for the method. The simulation environment produces three types of observations: $s_{t+1}$, the agent's proprioceptive pose; $s^v_{t+1}$, the image observation of the agent; and $m_{t+1}$, the image-based observation of the expert demonstration. The images are 64 × 64.
7.6 DISTANCE FUNCTION TRAINING
Our Siamese training loss is
$$\mathcal{L}_{SN}(s_i, s_p, s_n) = y \,\|f(s_i) - f(s_p)\| + (1 - y)\,\max\big(\rho - \|f(s_i) - f(s_n)\|,\ 0\big), \qquad (9)$$
where $y = 1$ indicates a positive pair $(s_i, s_p)$ whose distance should be minimal and $y = 0$ indicates a negative pair $(s_i, s_n)$ whose distance should be maximal. The margin $\rho$ is used as an attractor or anchor to pull the negative example's output away from $s_i$ and push values toward a 0 to 1 range. $f(\cdot)$ computes the output from the underlying network. The distance between two states is calculated as $d(s, s') = \|f(s) - f(s')\|$ and the reward as $r(s, s') = -d(s, s')$. The data used to train the Siamese network is a combination of trajectories $\tau = \langle s_0, \ldots, s_T \rangle$ generated by simulating the agent in the environment and the expert demonstration. For our recurrent model the same loss is used; however, the states $s_p$, $s_n$, $s_i$ are sequences. During RL training we compute a distance given the sequence of states observed so far in the episode. This method allows us to train a distance function in state space where all we need to provide are labels that denote whether two states, or sequences, are similar.
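For concreteness, a minimal (non-recurrent) version of the loss in Eq. (9) and the derived reward can be sketched as follows; the embedding network, the margin value, and the use of a single "other" input selected by the label $y$ are simplifications for illustration rather than our actual recurrent model.

```python
import torch
import torch.nn as nn

MARGIN = 1.0  # the margin rho; a placeholder value

# Placeholder embedding f(.) over flattened 64x64 frames; the paper uses a recurrent network.
embed = nn.Sequential(nn.Linear(64 * 64, 256), nn.ReLU(), nn.Linear(256, 64))

def siamese_loss(s_i, s_other, y):
    """Eq. (9): pull positives together (y = 1), push negatives beyond the margin rho (y = 0)."""
    d = torch.norm(embed(s_i) - embed(s_other), dim=-1)
    return (y * d + (1 - y) * torch.clamp(MARGIN - d, min=0)).mean()

def reward(s, s_prime):
    """Reward used during RL training: r(s, s') = -||f(s) - f(s')||."""
    with torch.no_grad():
        return -torch.norm(embed(s) - embed(s_prime), dim=-1)

# Usage with placeholder frame batches and pair labels.
frames_a, frames_b = torch.rand(16, 64 * 64), torch.rand(16, 64 * 64)
labels = torch.randint(0, 2, (16,)).float()  # 1 = positive pair, 0 = negative pair
loss = siamese_loss(frames_a, frames_b, labels)
```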
In Figure 6b we show the training curve for the recurrent Siamese network. The model learns smoothly, considering that the training data is continually changing as the RL agent explores. In Figure 6a the learning curve for the Siamese RNN is shown after performing pretraining. We can see the overfitting that occurs during RL training. This overfitting can lead to poor reward prediction during the early phase of training.
It can be challenging to train a sequence-based distance function. One particular challenge is making the distance function accurate across the space of possible states. We found a good strategy was to focus on data from the beginning of episodes: if the model is not accurate on states seen earlier in the episode, the agent may never learn how to get into the good states later on that the distance function understands better. Therefore, when constructing batches to train the RNN, we give a higher probability to windows starting earlier in episodes. We also give a higher probability to shorter sequences. As the agent improves, average episode length increases, and so too will the randomly selected sequence windows.
7.7 DISTANCE FUNCTION USE
We find it helpful to normalize the distance metric outputs using $r = \exp(r^2 \cdot w_d)$, where $w_d = -5.0$ scales the filtering width. Early in training, the distance metric often produces large, noisy values. Also, the RL method regularly updates its reward scaling statistics; the initial high-variance data reduces the significance of the better distance metric values produced later on by scaling them to small numbers. The improvement from using this normalized reward is shown in Figure 7a.
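A direct implementation of this normalization is straightforward; the sketch below assumes the raw value being squashed is the embedding distance.

```python
import numpy as np

W_D = -5.0  # filtering width from the text

def normalized_reward(distance):
    """Map a raw embedding distance to (0, 1]: r = exp(w_d * d^2), squashing large noisy distances."""
    return np.exp(W_D * np.square(distance))

print(normalized_reward(np.array([0.0, 0.2, 0.5, 1.0])))  # 1.0, ~0.82, ~0.29, ~0.0067
```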
8 POSITIVE AND NEGATIVE EXAMPLES
We use two methods to generate positive and negative examples. The first method is similar to TCN, where we can assume that sequences that overlap more in time are more similar. For each episode two sequences are generated, one for the agent and one for the imitation motion. The methods used to alter sequences for positive pairs are:
1. Adding Gaussian noise to each state in the sequence (mean = 0 and variance = 0.02)
2. Out-of-sync versions where the first state is removed from the first sequence and the last state from the second sequence
3. Duplicating the first state in either sequence
4. Duplicating the last state in either sequence
We alter sequences for negative pairs by
1. Reversing the ordering of the second sequence in the pair.
2. Randomly picking a state out of the second sequence and replicating it to be as long as the first sequence.
3. Randomly shuffling one sequence.
4. Randomly shuffling both sequences.
The second method we use to create positive and negative examples is by including data for additional classes of motion. These classes denote different task types. For the humanoid3d environment, we generate data for walking-dynamic-speed, running, backflipping and frontflipping. Pairs from the same tasks are labelled as positive, and pairs from different classes are negative.
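The sequence alterations of the first method can be sketched as follows; the pairing bookkeeping and the simplified out-of-sync variant (dropping only the first state of one sequence) are illustrative choices rather than the exact implementation.

```python
import numpy as np

def positive_variant(seq, rng=np.random):
    """Create a positive example: Gaussian noise, de-sync, or first/last-state duplication."""
    choice = rng.randint(4)
    if choice == 0:
        return seq + rng.normal(0.0, np.sqrt(0.02), size=seq.shape)  # noise with variance 0.02
    if choice == 1:
        return seq[1:]                                               # out-of-sync: drop first state
    if choice == 2:
        return np.concatenate([seq[:1], seq], axis=0)                # duplicate first state
    return np.concatenate([seq, seq[-1:]], axis=0)                   # duplicate last state

def negative_variant(seq, rng=np.random):
    """Create a negative example: reverse, repeat one random state, or shuffle the sequence."""
    choice = rng.randint(3)
    if choice == 0:
        return seq[::-1].copy()                                               # reversed ordering
    if choice == 1:
        return np.repeat(seq[rng.randint(len(seq))][None], len(seq), axis=0)  # one state repeated
    return rng.permutation(seq)                                               # shuffled states

# Usage with a placeholder sequence of 30 states.
demo = np.random.randn(30, 34)
pos, neg = positive_variant(demo), negative_variant(demo)
```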
8.1 ADDITIONAL ABLATION ANALYSIS
8.2 RL ALGORITHM ANALYSIS
It is not clear which RL algorithm works best for this type of imitation problem. A number of RL algorithms were evaluated on the humanoid2d environment (Figure 9a). Surprisingly, TRPO (Schulman et al., 2015) did not work well in this framework; considering it has a controlled policy gradient step, we expected it to reduce the overall variance. We found that Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015) worked rather well. This result could be related to having a changing reward function, in that if the changing rewards are treated as off-policy data, they can be easier to learn from. This can be seen in Figure 9b, where DDPG is best at estimating the future discounted rewards in the environment. We also tried Continuous Actor Critic Learning Automaton (CACLA) (Van Hasselt, 2012) and Proximal Policy Optimization (PPO) (Schulman et al., 2017); we found that PPO did not work particularly well on this task, which could also be related to added variance.
8.3 ADDITIONAL IMITATION RESULTS
Our first experiments evaluate the method's ability to learn a complex cyclic motion for a simulated humanoid robot given a single motion demonstration, similar to (Peng & van de Panne, 2017), but using video instead. The agent is able to learn a robust walking gait even though it is only given noisy partial observations of a demonstration (Figure 10).

1. What is the main contribution of the paper regarding learning a distance function between observed and agent behaviors?
2. What are the strengths of the proposed approach, particularly in its performance compared to baselines?
3. What are the weaknesses of the paper regarding its writing quality and experimental results?
4. Do you have any concerns about the stability of the unsupervised data labeling process?
5. How does the reviewer assess the overall quality and novelty of the paper's content?

Review
The idea of the paper is to learn a distance function between the observed and the agent's behaviors. Once they have the distance function, they can learn the agent's policy efficiently given a single demonstration of each task. In their formulation, the distance function and the policy are jointly learned.
The idea is reasonable and the performance outperforms baselines like GAIL and VAE. However, the paper is not well written, with many relevant equations defined in the supplementary material. The unsupervised data labeling part seems ad hoc, with many details deferred to the supplementary material. I wonder whether the process is stable or not. How many of the below-average results of the proposed method shown in Fig. 4 are caused by the unsupervised data labeling?
In Fig. 4b, the manual performance is very strong once converged. Although the proposed method initially reaches high reward, after twice as many iterations the manual performance even outperforms the proposed method's average performance many times over. Hence, I am not very convinced that the proposed method would be the best choice in practice.
Overall, I think the idea is good, but the paper is poorly written, and my main concern is the stability of the unsupervised data labeling process. The experimental results are also not fully convincing. Hence, I recommend weak rejection.